Dataset schema (one record per paper):
  paper_id      stringlengths  43 – 43
  summaries     sequence
  abstractText  stringlengths  98 – 40k
  authors       list
  references    list
  sections      list
  year          int64          1.98k – 2.02k
  title         stringlengths  4 – 183
SP:9156d551adff4ed16ba1be79014188caefc901c7
[ "the paper proposes to learn parametric form of optimal quantum annealing schedule. Authors construct 2 versions of neural network parameterizations mapping problem data onto an optimal schedule. They train these networks on artifically generated sets of problem of different size and test final models on the Grover search problem as well as 3SAT. Experiments demonstrate improved performance in comparison to existing approaches." ]
Adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy to prepare initial state and a final state that represents a solution to a given computational problem. The choice of the interpolation schedule is critical to the performance: if at a certain time point, the evolution is too rapid, the system has a high probability to transfer to a higher energy state, which does not represent a solution to the problem. On the other hand, an evolution that is too slow leads to a loss of computation time and increases the probability of failure due to decoherence. In this work, we train deep neural models to produce optimal schedules that are conditioned on the problem at hand. We consider two types of problem representation: the Hamiltonian form, and the Quadratic Unconstrained Binary Optimization (QUBO) form. A novel loss function that scores schedules according to their approximated success probability is introduced. We benchmark our approach on random QUBO problems, Grover search, 3-SAT, and MAX-CUT problems and show that our approach outperforms, by a sizable margin, the linear schedules as well as alternative approaches that were very recently proposed.
[ { "affiliations": [], "name": "Eli Ovits" } ]
[ { "authors": [ "Dorit Aharonov", "Wim van Dam", "Julia Kempe", "Zeph Landau", "Seth Lloyd", "Oded Regev" ], "title": "Adiabatic quantum computation is equivalent to standard quantum computation", "venue": "SIAM Review,", "year": 2008 }, { "authors": [ "Tameem Albash", "Daniel A. Lidar" ], "title": "Adiabatic quantum computation", "venue": "Reviews of Modern Physics,", "year": 2018 }, { "authors": [ "Sergio Boixo", "Troels F. Rønnow", "Sergei V. Isakov", "Zhihui Wang", "David Wecker", "Daniel A. Lidar", "John M. Martinis", "Matthias Troyer" ], "title": "Evidence for quantum annealing with more than one hundred qubits", "venue": "Nature Physics,", "year": 2014 }, { "authors": [ "Yu-Qin Chen", "Yu Chen", "Chee-Kong Lee", "Shengyu Zhang", "Chang-Yu Hsieh" ], "title": "Optimizing quantum annealing schedules: From monte carlo tree search to quantumzero", "venue": "arXiv preprint arXiv:2004.02836,", "year": 2020 }, { "authors": [ "William Cruz-Santos", "Salvador E. Venegas-Andraca", "Marco Lanzagorta" ], "title": "A QUBO formulation of minimum multicut problem instances in trees for d-wave quantum annealers", "venue": "Scientific Reports,", "year": 2019 }, { "authors": [ "Edward Farhi", "Jeffrey Goldstone", "Sam Gutmann", "Michael Sipser" ], "title": "Quantum computation by adiabatic evolution", "venue": "arXiv preprint quant-ph/0001106,", "year": 2000 }, { "authors": [ "Fred Glover", "Gary Kochenberger", "Yu Du" ], "title": "A tutorial on formulating and using qubo models", "venue": null, "year": 2018 }, { "authors": [ "Hayato Goto", "Kosuke Tatsumura", "Alexander R. Dixon" ], "title": "Combinatorial optimization by simulating adiabatic bifurcations in nonlinear hamiltonian systems. Science Advances, 5(4):eaav2372, apr 2019", "venue": "doi: 10.1126/sciadv.aav2372", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Richard Liboff" ], "title": "Introductory quantum mechanics", "venue": "Addison-Wesley, San Francisco,", "year": 2003 }, { "authors": [ "Jian Lin", "Zhong Yuan Lai", "Xiaopeng Li" ], "title": "Quantum adiabatic algorithm design using reinforcement learning", "venue": "Physical Review A,", "year": 2020 }, { "authors": [ "Catherine C. McGeoch" ], "title": "Adiabatic quantum computation and quantum annealing: Theory and practice", "venue": "Synthesis Lectures on Quantum Computing,", "year": 2014 }, { "authors": [ "A.T. Rezakhani", "W.-J. Kuo", "A. Hamma", "D.A. Lidar", "P. Zanardi" ], "title": "Quantum adiabatic brachistochrone", "venue": "Physical Review Letters,", "year": 2009 }, { "authors": [ "Jérémie Roland", "Nicolas J. 
Cerf" ], "title": "Quantum search by local adiabatic evolution", "venue": "Physical Review A,", "year": 2002 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Yuki Susa", "Yu Yamashiro", "Masayuki Yamamoto", "Hidetoshi Nishimori" ], "title": "Exponential speedup of quantum annealing by inhomogeneous driving of the transverse field", "venue": "Journal of the Physical Society of Japan,", "year": 2018 }, { "authors": [ "Lishan Zeng", "Jun Zhang", "Mohan Sarovar" ], "title": "Schedule path optimization for quantum annealing and adiabatic quantum computing", "venue": "arXiv preprint arXiv:1505.00209,", "year": 2015 }, { "authors": [ "Marko Žnidarič" ], "title": "Scaling of the running time of the quantum adiabatic algorithm for propositional satisfiability", "venue": "Physical Review A,", "year": 2005 }, { "authors": [ "Lin" ], "title": "A set of 250 test problems, randomized in a similar fashion to the training dataset of QUBO problems with n = 8, was generated and its optimal path s∗", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many of the algorithms developed for quantum computing employ the quantum circuit model, in which a quantum state involving multiple qubits undergoes a series of invertible transformations. However, an alternative model, called Adiabatic Quantum Computation (AQC) (Farhi et al., 2000; McGeoch, 2014), is used in some of the leading quantum computers, such as those manufactured by D-Wave Systems (Boixo et al., 2014). AQC algorithms can achieve quantum speedups over classical algorithms (Albash & Lidar, 2018), and are polynomially equivalent to the quantum circuit model (Aharonov et al., 2008).\nIn AQC, given a computational problem Q, e.g., a specific instance of a 3SAT problem, a physical system is slowly evolved until a specific quantum state that represents a proper solution is achieved. Each AQC run involves three components:\n1. An initial Hamiltonian Hb, chosen such that its ground state (in matrix terms, the minimal eigenvector of Hb) is easy to prepare and there is a large spectral gap. This is typically independent of the specific instance of Q.\n2. A final Hamiltonian Hp designed such that its ground state corresponds to the solution of the problem instance Q.\n3. An adiabatic schedule, which is a strictly increasing function s(t) that maps a point in time 0 ≤ t ≤ tf , where tf is total computation time, to the entire interval [0, 1] (i.e., s(0) = 0, s(tf ) = 1, and s(t1) < s(t2) iff t1 < t2 and vice versa).\nThese three components define a single time-dependent HamiltonianH(t), which can be seen as an algorithm for solving Q:\nH(t) = (1− s(t)) · Hb + s(t) · Hp (1)\nAt the end of the adiabatic calculation, the quantum state is measured. The square of the overlap between the quantum state and ground state of the final Hamiltonian, is the fidelity, and represents the probability of success in finding the correct solution. An AQC algorithm that is evolved over an insufficient time period (a schedule that is too fast) will have a low fidelity. Finding the optimal\nschedule, i.e., the one that would lead to a high fidelity and would keep the time complexity of the algorithm minimal is, therefore, of a great value. However, for most problems, an analytical solution for the optimal schedule does not exist (Albash & Lidar, 2018).\nAttempts were made to optimize specific aspects of the adiabatic schedule by using iterative methods (Zeng et al., 2015) or by direct derivations (Susa et al., 2018). Performance was evaluated by examining characteristics of the resulting dynamic (e.g. the minimum energy gap) and no improvement was demonstrated on the full quantum calculation.\nPrevious attempts to employ AI for the task of finding the optimal schedule have relied on Reinforcement Learning (Lin et al., 2020; Chen et al., 2020). While these methods were able to find schedules that are better than the linear path, they are limited to either learning one path for a family of problems (without considering the specific instance) or to rerunning the AQC of a specific instance Q multiple times in order to optimize the schedule.\nIn our work, supervised learning is employed in order to generalize from a training set of problems and their optimal paths to new problem instances. Training is done offline and the schedule our neural model outputs is a function of the specific problem instance. The problem instance is encoded in our model either based on the final HamiltonianHp or directly based on the problem. 
The suggested neural models are tested using several different problem types: Grover search problems, 3SAT and MAX-CUT problems, and randomized QUBO problems. We show that the evolution schedules suggested by our model greatly outperform the naive linear evolution schedule, as well as those schedules provided by the recent RL methods, and allow for much shorter total evolution times." }, { "heading": "2 BACKGROUND", "text": "The goal of the scheduling task is to find a schedule s(t) that maximizes the probability to get the correct answer for instance Q, using Hb and Hp over an adiabatic quantum computer. The solution to Q is coded as the lowest energy eigenstate ofHp. In order to achieve the solution state with high probability, the system must be evolved “sufficiently slowly”. The adiabatic theorem (Roland & Cerf, 2002; Albash & Lidar, 2018; Rezakhani et al., 2009) is used to analyze how fast could this evolution be. It states that the probability to reach the desired state at the end of the adiabatic calculation is 1− ε2 for ε << 1 if∣∣〈E1(t)| ddtH(t) |E0(t)〉∣∣\ng2(t) ≤ ε (2)\nwhere the Dirac notation (Tumulka, 2009) is used1, E0(t) (E1(t)) is the ground state (first excited state) of the time dependent Hamiltonian H(t), i.e., the eigenstates that corresponds to the lowest (2nd lowest) eigenvalue, and g(t) is the time dependent instantaneous spectral gap between the smallest and second smallest eigenvalues ofH(t). Let tf be the total calculation time. let s(t) be an evolution schedule , such that s(0) = 0, s(tf ) = 1. Applying the adiabatic condition for s(t), we get∣∣〈E1(s(t))| dsdt ddsH(s(t)) |E0(s(t))〉∣∣\ng2(s(t)) ≤ ε⇒ ds dt ≤ ε g 2(s)∣∣〈E1(s)| ddsH(s) |E0(s)〉∣∣ (3) we could solve for t(s) by integration to get\nt(s) = 1\nε s∫ 0 ∣∣〈E1(s)| ddsH(s) |E0(s)〉∣∣ g2(s) ds (4)\nand the total required evolution time is\ntf = t(s = 1) = 1\nε 1∫ 0 ∣∣〈E1(s)| ddsH(s) |E0(s)〉∣∣ g2(s) ds (5)\n1See appendix A for the conventional matrix notation.\nWe note that finding a numerical solution for eq 4 requires calculating the full eigenvalue decomposition ofH(x)." }, { "heading": "2.1 MOST-RELATED WORK", "text": "Two recent contributions use deep learning in order to obtain, for a given tf , a schedule that outperform the linear schedule. Lin et al. (2020) suggest using deep reinforcement learning in order to find an optimal schedule for each specific class of problems (e.g., 3SAT problems of a certain size). In contrast, we study the problem of finding schedules for generic problem instances. They train and benchmark their performance by simulating an adiabatic quantum computer, and scoring the computation results for randomly chosen problem instances. Their results are generally better than the naive linear schedule, and the solution produced by their neural network is somewhat transferable for larger problem sizes.\nChen et al. (2020) also use RL to construct, given a tf , a schedule for 3SAT problems. The most successful technique suggested is a Monte Carlo Tree Search (MCTS, Silver et al. (2016)), which produces results that significantly outperform the linear schedule. This technique requires running the adiabatic evolution process many times for each problem, in order to find a successful schedule. An approach inspired by alpha-zero (Silver et al., 2018) is used to adapt the generic MCTS solution to specific problem class, while requiring only a few additional rounds of the adiabatic process for each new instance. In our method, we do not require any run given a new problem instance." 
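As a companion to Eqs. 3–5, the following sketch evaluates the adiabatic-time integral numerically via a full eigendecomposition of H(s) on a grid of interpolation values, which is exactly the expensive computation noted at the end of Sec. 2. Function and variable names are illustrative assumptions.

```python
import numpy as np

def adiabatic_time_estimate(hb, hp, eps=0.1, num_s=200):
    """Numerical evaluation of Eq. 5:
        tf = (1/eps) * integral_0^1 |<E1(s)| dH/ds |E0(s)>| / g(s)^2 ds,
    with dH/ds = Hp - Hb for the interpolation of Eq. 1. Every grid point requires a
    full eigendecomposition of H(s), which is what makes this computation expensive."""
    dh_ds = hp - hb
    s_grid = np.linspace(0.0, 1.0, num_s)
    integrand = np.empty(num_s)
    for k, s in enumerate(s_grid):
        h = (1.0 - s) * hb + s * hp
        evals, evecs = np.linalg.eigh(h)               # ascending eigenvalues
        gap = evals[1] - evals[0]                      # instantaneous spectral gap g(s)
        matrix_elem = np.abs(evecs[:, 1] @ dh_ds @ evecs[:, 0])
        integrand[k] = matrix_elem / gap**2
    # The cumulative version of this integral gives t(s) (Eq. 4); inverting t(s)
    # yields the locally optimal schedule s(t) used as the regression target.
    return np.trapz(integrand, s_grid) / eps
```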
}, { "heading": "3 METHOD", "text": "We consider two types of deep neural models. The first model is designed to get the problem Hamiltonian Hp as an input. For an n qubit problem, the problem Hamiltonian is generally of size 2n×2n. In this work, we consider problem Hamiltonians which are diagonal and can be represented by vector of size 2n. This scenario covers both the Grover search problem and the 3SAT problem we present in Sec. 4.\nThe second model is designed to get a quadratic unconstrained binary optimization (QUBO) problem as an input. The QUBO problem has the following form:\nx̄ = argminx(x TQx) , (6)\nwhere x is a vector of binary variables and Q ∈ Rn×n defines the specific QUBO instance. The QUBO problem is NP-Complete, and many types of common problems can be reduced to QUBO (Glover et al., 2018). The QUBO formulation is of special interest in the context of adiabatic quantum computing, since it allows a relatively easy mapping to real quantum annealing devices that do not possess full qubit connectivity (Cruz-Santos et al., 2019).\nA QUBO problem Q can be converted to the Hamiltonian form in the following fashion:\nHp = n∑ i=1 Qii( I + σiz 2 ) + ∑ i6=j Qij( I + σiz 2 )( I + σjz 2 ) , (7)\nwhere σiz is the Pauli matrix σz operating only on qubit i (Liboff, 2003). The resultingHp is of size 2n × 2n and is diagonal. The prediction target of our models is the desired normalized schedule ŝ(t), which is defined over the range [0, 1] as ŝ(t) = s(t/tf ). For the purpose of estimation, it is sampled at 100 points in the interval t = [0, 1]. The representation of this schedule is given as a vector d ∈ [0, 1]99, which captures the temporal derivative of the schedule. In other words, d is trained to hold the differences between consecutive points on the path, i.e., element i is given by di = ŝ((i+ 1)/100)− ŝ(i/100). Note that the sum of d is one." }, { "heading": "3.1 UNIVERSALITY OF THE OPTIMAL SCHEDULE", "text": "The reason that we work with the normalized schedule is that the optimal evolution schedule is not dependent upon the choice of tf . As shown next, for every time budget tf , the same normalized schedule would provide the highest fidelity (neglecting decoherence).\nLet s1(t) : [0, tf ] → [0, 1] be a suggested evolution schedule, which outperforms a different suggested schedule s2(t), for a specific tf = τ1, i.e. it achieves a greater fidelity at the end of the schedule for a specific problem instance Q. Then, Thm. 1 shows that s1(t) outperforms s2(t) for every possible choice of tf for the same problem Q.\nTheorem 1. Let s1(t) and s2(t) be two monotonically increasing fully differentiable bijective functions from [0, tf = τ1] to [0, 1]. Let Q be an optimization problem, and assume that s1(t) achieves a greater fidelity than s2(t) at the end of a quantum adiabatic computation for Q with total evolution time tf = τ1. Then, for any other choice tf = τ2, the scaled schedule s1( τ2τ1 t) will achieve a greater fidelity than s2( τ2τ1 t) for an adiabatic computation over the same problem Q with total evolution time tf = τ2.\nThe proof can be found in appendix B." }, { "heading": "3.2 ARCHITECTURE", "text": "The model architectures are straightforward and no substantial effort was done to optimize them. The Hamiltonian as input model has seven fully connected layers, with decreasing sizes: 4096, 2048, 2048, 1024, 512, and finally the output layer, which, as mentioned, is of size 99.\nFor the QUBO model, in which the input is a matrix, a two part architecture was used. 
In the first part, five layers of 2D convolution was employed, with kernel size of 3 × 3, for 64 kernels. The output from the convolution layers was then flattened to a vector of size 64n2, and fed to the second part of the network, consisted of five fully connected layers, with decreasing dimensions of 2048, 1024, 1024, 512, and finally the output layer of size 99.\nThis output layers in both models are normalized to have a sum of one. For both models, the SELU activation function Klambauer et al. (2017) was used for all layers, except the final layer, which used the sigmoid (logistic) function." }, { "heading": "3.3 A FIDELITY BASED LOSS FUNCTION", "text": "Let |ψ(t)〉 is the state of the quantum system at time t = stf . The fidelity of the QAC is given by (Farhi et al., 2000) psuccess = |〈E0(s = 1) |ψ(t = tf )〉|2 , (8) where 〈E`(s = 1)| is the `-th eigenstate of the parameter dependent evolution Hamiltonian H(s), such that 〈E0(s = 1)| is the ground state of the final HamiltonianHp. Finding 〈E0(s = 1)| requires performing eigenvalue decomposition for Hp, which is equivalent to solving the original optimization problem, and is done for the training set.\nThe quantum state |ψ(t)〉 is evolving according to the Schrödinger equation\ni d\ndt |ψ(t)〉 = H(t) |ψ(t)〉 (9)\nA brute force approach for finding psuccess is to numerically solve the Schrödinger equation, see appendix C. This full numerical calculation is, however, too intense to be practical. We next develop an approximate method that would be easier to compute and still be physically meaningful. It is based on the adiabatic local evolution speed limit from Eq. 3:\n∣∣∣∣dsdt ∣∣∣∣ ≤ ε g2(s)∣∣〈E1(s)| ddsH(s) |E0(s)〉∣∣ (10)\nThis inequality could be used as a local condition for convergence of any suggested path. We define\ng2E(s) = g2(s)∣∣〈E1(s)| ddsH(s) |E0(s)〉∣∣ (11)\nWe would like to use the local condition to create a global convergence condition for a full suggested path s(t), 0 ≤ t ≤ tf . To do so, we integrate both sides of Eq. 10 over the suggested schedule s.\nThis integral represents a mean value of the local adiabatic condition, for every point in the suggested schedule.\nε = 1∫ 0 ds dt g2E(s) ds (12)\nWe note that integrand is always positive (assuming s(t) is monotonically increasing). Recall that the adiabatic theorem ties ε to the fidelity: ε = √ 1− psuccess. By defining the right hand side of Eq.12 as our loss function, we ensure that any training process that minimizes Eq. 12 will maximize the fidelity. Recall that the vector d that the network outputs is a vector of differences, therefore, it approximates the local derivatives of the obtained path. Let ŝ∗ be the optimal normalized path, which we estimate for each training sample. The loss function is, therefore, defined as:\nL(d, ŝ∗) = 99∑ i=1 d2i g2E(ŝ ∗(i/100)) (13)\nThe values of gE are precomputed along the optimal path ŝ∗ for efficiency. While the denominator is obtained on points that do not correspond to the estimated path (the commutative sum of d), the approximation becomes increasingly accurate at the estimated path appraoches the optimal one." }, { "heading": "3.4 TRAINING DATA AND THE TRAINING PROCESS", "text": "In order to train the QUBO problem model, we produced a training dataset of 10,000 random QUBO instances for each problem size: n = 6, 8, 10. The QUBO problems were generated by sampling independently, from the normal distribution, each coefficient of the problem matrix Q. 
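To make the fidelity-based loss of Eq. 13 concrete, here is a minimal sketch that also evaluates g_E²(s) from Eq. 11 by eigendecomposition. In the actual training this term would be written in the autodiff framework so that gradients flow into the predicted difference vector d; names and the numpy-only form are illustrative assumptions.

```python
import numpy as np

def g_e_squared(hb, hp, s):
    """Eq. 11: g_E^2(s) = g^2(s) / |<E1(s)| dH/ds |E0(s)>| with dH/ds = Hp - Hb."""
    h = (1.0 - s) * hb + s * hp
    evals, evecs = np.linalg.eigh(h)
    gap = evals[1] - evals[0]
    matrix_elem = np.abs(evecs[:, 1] @ (hp - hb) @ evecs[:, 0])
    return gap**2 / matrix_elem

def fidelity_loss(d, g_e_sq_on_optimal_path):
    """Eq. 13: L(d, s*) = sum_i d_i^2 / g_E^2(s*(i/100)).

    d                      : length-99 difference vector predicted by the network
    g_e_sq_on_optimal_path : g_E^2 precomputed at the grid points of the optimal path s*
    """
    return np.sum(d**2 / g_e_sq_on_optimal_path)
```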
The entire matrix Q was then multiplied by a single random normal variable.\nWe approximated an optimal evolution schedule for each problem, by calculating the full eigenvalue decomposition ofHt as described in Sec 2. We also calculated the value of g(s(t)) for each problem. For the model that uses the problem Hamiltonian as input, we used the same prepared QUBO problems, converted to the Hamiltonian form. In addition, we added another 500 cases of randomized Hamiltonians with randomized values around distinct energy levels. For each Hamiltonian, We first randomized an energy level between the following values: 0.5, 1, 1.5 or 2, and then randomized uniformly distributed values around the selected energy level. To each Hamiltonian we added a single ground state with energy 0. This type of Hamiltonian is not commonly created by the random QUBO creation process described above, but is more representative of binary optimization problems, and specifically more closely resembles problem Hamiltonians for the Grover problem and the 3SAT problem, which we later use to benchmark our model performance. We note that the Hamiltonian for these specific problems in our test set are nevertheless different from our randomized problem Hamiltonians, which highlights the generalization capability of our method.\nThe training was performed using the Adam optimizer (Kingma & Ba, 2014), with batches of size 200. Batch normalization (Ioffe & Szegedy, 2015) was applied during training. A uniform dropout value of 0.1 is employed for all layers during the model training." }, { "heading": "4 RESULTS", "text": "As a baseline to the loss L (Eq. 13) we use, we employed the Mean Squared Error (MSE) loss, for which the model output was compared to the known optimal schedule from the dataset, which was calculated in advance." }, { "heading": "4.1 GROVER SEARCH", "text": "The Grover algorithm is a well-known quantum algorithm that finds with high probability the unique input to a black box function that produces a particular output value, using just √ N evaluations of the function, where N is size of the search space. For an n qubit space, the search is over the set {0, 1, .., 2n− 1}, making N = 2n. It is possible to reproduce the Grover speedup using an adiabatic formulation, with the following problem Hamiltonian:\nHp = I − |m〉 〈m| , (14)\nwhere |m〉 is the state that represents the value we search. Roland & Cerf (2002) showed that for this problem, a linear schedule does not produce quantum speedup over a classical algorithm, but for a specific initial Hamiltonian Hb = I − |ψ0〉 〈ψ0|, for ψ0 as the maximal superposition state (a sum of the states representing all values from 0 to N − 1), an optimal schedule could be derived analytically to achieve a quadratic speedup. The optimal path is given by\nŝ(t) = 1\n2 +\n1\n2 √ N − 1\ntan [ (2s− 1) tan−1 √ N − 1 ] (15)\nIn practice, the proposedHb is hard to physically realize, and a simpler initial Hamiltonian is used:\nHb = 1\n2 n∑ i=1 I − σix , (16)\nwhere σix is the Pauli matrix σx operating only on qubit i (Liboff, 2003).\nWe test our model’s performance by using the Grover problem Hamiltonian Hp as input for several problem sizes. 
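For reference, a small sketch of the Grover ingredients just introduced: the diagonal problem Hamiltonian of Eq. 14 and the analytic Roland & Cerf schedule of Eq. 15, written here over normalized time u = t/tf (an assumption on the parametrization, since Eq. 15 mixes t and s in the excerpt). Names are illustrative.

```python
import numpy as np

def grover_hp(n, m):
    """Eq. 14: Hp = I - |m><m| for the marked item m in {0, ..., 2^n - 1}, stored as a diagonal."""
    diag = np.ones(2**n)
    diag[m] = 0.0
    return diag

def grover_optimal_schedule(u, n):
    """Roland & Cerf (2002) schedule (Eq. 15), over normalized time u = t / tf."""
    root = np.sqrt(2**n - 1)
    return 0.5 + np.tan((2.0 * u - 1.0) * np.arctan(root)) / (2.0 * root)

u = np.linspace(0.0, 1.0, 100)
s_opt = grover_optimal_schedule(u, n=6)   # steep near u = 0, 1; slow around s = 0.5 where the gap is minimal
```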
Different Grover problems are completely symmetrical, and are identical after changing variables, so it is sufficient to use a single test case to test our model.\nWe benchmark our model’s performance by simulating AQC for multiple values of tf , and calculating the fidelity by measuring the overlap between the quantum state at the end of the adiabatic evolution and the solution state.\nWe also show the convergence pattern for the fidelity (i.e. the overlap with the solution state, measured during the adiabatic evolution) for a single specific tf . For each problem size, we chose a different tf , for which a full convergence (p > 0.95) is achieved with the evolution schedule suggested by our model. We compare several suggested schedules: the path produced by training our model using our novel loss function, the path produced by training our model using the MSE loss, the linear path, and a numerically calculated optimal path. We also include the results reported by Lin et al. (2020) for the same problem.\nThe results are reported in Fig. 1 for n = 6, 10, see appendix for n = 8. It is evident that our model produces paths that are significantly superior to the linear path, and also outperforms Lin et al. (2020). The advantage of the new loss function over the MSE loss is also clear.\nRecall that for a Grover search with a certain n, Hp is a diagonal matrix of size 2n × 2n. To check whether the model trained on n = 10 generalizes to larger search problems, we view the diagonal ofHp for n′ > n as a 1D signal. This signal is smoothed by a uniform averaging mask of size 6 2 n′\n2n , and subsampled to obtain a diagonal of size 2n.\nThe results are presented in Fig. 2. Evidently, the network trained for n = 10 achieves much better results than the linear baseline for sizes n′ = 12, 14, 16. We also trained a network for n = 16. As can be seen in Fig. 2(c), this network does achieve better fidelity than the smaller network. We note that no significant changes were made to the network architecture, and the only difference is in the size of the input layer. Appendix D presents results for the n = 16 network on n′ = 17, .., 20. Our L-trained model achieves a much better fidelity than the linear schedule and the MSE baseline." }, { "heading": "4.2 3SAT", "text": "In the 3-SAT problem, the logical statement consists of m clauses, Ci, such that each clause contain a disjunction over three variables out of n binary variables. A solution to the 3SAT problem is an assignment for the n variables that satisfies all m clauses. It is possible to construct a problem Hamiltonian for each 3SAT problem, by taking a sum over all clauses\nHp = 1\n2 m∑ i=1 I + σFiz , (17)\nwhere σFiz is the Pauli matrix σz operating only on the state that represents the assignment |a = {0, 1}, b = {0, 1}, c = {0, 1}〉 which produces False value for clause i. This Hamiltonian counts the number of clauses which are not satisfied by each assignment, and its ground state corresponds to the eigenvalue 0 and represents the solution of the problem, for which all clauses are satisfied.\nWe test our model’s performance, by randomizing 3SAT problems, and converting them to Hamiltonian form. Following Chen et al. (2020), we focus on 3SAT problems with a single solution, and a number of clauses m = 3n. 
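A sketch of the 3SAT problem Hamiltonian of Eq. 17, realized directly as a diagonal that counts the number of unsatisfied clauses per assignment, so that satisfying assignments have energy 0. The literal encoding and bit ordering are illustrative conventions, not taken from the paper.

```python
import numpy as np

def three_sat_hp(n, clauses):
    """Diagonal Hp counting unsatisfied clauses (Eq. 17).

    `clauses` is a list of 3-tuples of signed literals, e.g. (1, -2, 3) means
    (x1 OR NOT x2 OR x3); variables are 1-indexed, and qubit 1 is read as the
    most significant bit of the basis-state index (an illustrative convention)."""
    diag = np.zeros(2**n)
    for idx in range(2**n):
        bits = [(idx >> (n - 1 - j)) & 1 for j in range(n)]   # bits[j] = value of x_{j+1}
        unsatisfied = 0
        for clause in clauses:
            satisfied = any(
                (bits[abs(lit) - 1] == 1) if lit > 0 else (bits[abs(lit) - 1] == 0)
                for lit in clause
            )
            unsatisfied += 0 if satisfied else 1
        diag[idx] = unsatisfied
    return diag

# Toy usage with n = 3 variables; in the paper's setup, m = 3n random clauses would be drawn.
hp_diag = three_sat_hp(3, [(1, -2, 3), (-1, 2, -3), (1, 2, 3)])
```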
This type of 3SAT problems is considered difficult to solve with adiabatic algorithms (Žnidarič, 2005).\nWe benchmark our model’s performance by simulating the adiabatic computation for multiple values of tf and calculating the fidelity by measuring the overlap between the quantum state at the end of the adiabatic evolution and the solution state.\nIn addition to the linear path and the paths obtained by training with eitherL or MSE, we also include for n=11, the results for the schedules designed by MCTS (Chen et al., 2020). For this purpose, we used the test data obtained by Chen et al. As can be seen in Fig. 3, our method outperform all baselines. Note that the MCTS methdod was optimized, for each problem instance and for each tf using tens AQC of runs on the specific test problem, while our method does not run on the test data.\nAs stated in Sec. 3.4, the Hamiltonian model is trained on 10,000 random QUBO problems and 500 random Hamiltonian problems. In Appendix E, we study the performance when the 500 random samples are removed from the training set and when employing fewer training samples." }, { "heading": "4.3 MAX-CUT", "text": "To further demonstrate the generalization capability of the trained model, our Hamiltonian model for size n=10 is tested on random MAX-CUT problems. In a graph, a maximum cut is a partition of the graph’s vertices into two complementary sets, such that the total edges weight between the sets is as large as possible. Finding the maximum cut for a general graph is known to be an NP-complete problem (MAX-CUT).\nTo generate random MAX-CUT problem instances, we choose a random subset of edges that contains at least half of the edges of the fully connected graph. We then sample the weights of each edge uniformly. When converting a MAX-CUT problem to the Hamiltonian form, n is the number of vertices in the graph (Goto et al., 2019).\nFig. 4 presents the results of our our method for both with L and MSE, as well as the linear path. The results were averaged over 50 runs and conducted for n = 10. As can be seen, our complete method outperforms the baselines." }, { "heading": "4.4 QUBO", "text": "To test our models with general QUBO problems, sets of random QUBO test problems of varying difficulty are generated . Since the final energy gap of the corresponding problem Hamiltonian is a critical parameter that determines the difficulty of the problem at hand (problems with a small energy gap require much longer evolution schedules), we generated two sets of test problems. The first has an energy gap of g ∼ 10 and the second has an energy gap of g ∼ 0.1. Varying the gap was obtained by multiplying the random Q matrix by the required values of the gap.\nWe benchmark the model’s performance as in previous problems. However, in this case, we have two alternatives models: the one the receives the matrix Q as input and the one that receives the HamiltonianHp.\nWe noticed that the process of creating samples of varying spectral gaps creates a mismatch in scale with the training set problems. To compensate, we pre-process the inputs to the network models. Specifically, for the model that has Q as input, we normalize the Frobenius norm of each Q such that if it is larger than 60, we scale Q to have a norm of 60. Similarly for the model that accepts Hp as input, we clip every value that is larger than 90 to be 90 (Q with high norms translate to Hamiltonians with specific coeefieicents that are high). 
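The input preprocessing just described can be summarized in two small helper functions. The function names and the numpy-only form are illustrative assumptions, while the thresholds (Frobenius norm 60 for Q, clipping Hamiltonian entries at 90) are the ones stated above.

```python
import numpy as np

def preprocess_qubo_input(q, max_frobenius_norm=60.0):
    """Rescale Q so that its Frobenius norm does not exceed the threshold (Sec. 4.4)."""
    norm = np.linalg.norm(q)          # Frobenius norm for a matrix argument
    return q * (max_frobenius_norm / norm) if norm > max_frobenius_norm else q

def preprocess_hamiltonian_input(hp_diag, max_value=90.0):
    """Clip large diagonal entries of Hp (Sec. 4.4); applied to the model input only."""
    return np.minimum(hp_diag, max_value)
```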
To clarify, this preprocessing is only applied to the input to the models and does not change the problem we solve.\nOur Results are presented at Fig. 5. As can be seen, our dedicated QUBO model (Q as input) constructs successful schedules, outperforming all other models. The Hamiltonian model trained with our loss obtains the second highest results. The advantage of the fidelity-based loss term is evident in all cases.\nFor a further comparison between the L loss term and MSE, please refer to Appendix F." }, { "heading": "5 CONCLUSIONS", "text": "Optimal scheduling of AQC tasks is the main way to reduce the time complexity for an emerging class of quantum computes. While recent work has applied RL for this task, it either provided a generic schedule for each class of problems or required running the exact computation that needs to be solved multiple times. Our solution employs a separate training set, and at test time provides a schedule that is tailored to the specific instance, without performing any runs. Remarkably, although our training was performed for one type of problem (QUBO), it generalizes well to completely different instances: Grover search, 3-SAT, and MAX-CUT. At the heart of our method lies a new type of loss that maximizes the fidelity based on a new approximation of the success probability. Our experiments demonstrate the effectiveness of our method, as well as its advantage over the recent contributions." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant ERC CoG 725974)." }, { "heading": "A CONVENTIONAL MATRIX NOTATION", "text": "For those not familiar with the Dirac notation, we repeat the equations in conventional matrix multiplication notation.\n∣∣E1(t)>( ddtH)E0(t)∣∣ g2(t) ≤ ε (2)\n∣∣∣E1(s(t))> dsdt ddsH(s(t))E0(s(t))∣∣∣ g2(s(t)) ≤ ε⇒ ∣∣∣∣dsdt ∣∣∣∣ ≤ ε g2(s)∣∣∣E1(s)> ddsH(s)E0(s)∣∣∣ (3)\nt(s) = 1\nε s∫ 0 ∣∣∣E1(s)> ddsH(s)E0(s)∣∣∣ g2(s) ds (4)\ntf = t(s = 1) = 1\nε 1∫ 0 ∣∣∣E1(s)> ddsH(s)E0(s)∣∣∣ g2(s) ds (5)\npsuccess = ∣∣E0(s = 1)>ψ(t = tf )∣∣2 (8)\ni d\ndt ψ(t) = H(t)ψ(t) (9)\n∣∣∣∣dsdt ∣∣∣∣ ≤ ε g2(s)∣∣∣E1(s)> ddsH(s)E0(s)∣∣∣ (10)\ng2E(s) = g2(s)∣∣∣E1(s)> ddsH(s)E0(s)∣∣∣ (11)\nHp = I −mTm (14)" }, { "heading": "B PROOF OF THM. 1", "text": "Theorem 1. Let s1(t) and s2(t) be two monotonically increasing fully differentiable bijective functions from [0, tf = τ1] to [0, 1]. Let Q be an optimization problem, and assume that s1(t) achieves a greater fidelity than s2(t) at the end of a quantum adiabatic computation for Q with total evolution time tf = τ1. Then, for any other choice tf = τ2, the scaled schedule s1( τ2τ1 t) will achieve a greater fidelity than s2( τ2τ1 t) for an adiabatic computation over the same problem Q with total evolution time tf = τ2.\nProof. The adiabatic condition from Eq. 3 defines a local speed limit over the evolution schedule. We define:\ng2E(s) = g2(s)∣∣〈E1(s)| ddsH(s) |E0(s)〉∣∣ (18)\nThen, for both schedules si (t) , i = 1, 2 the local adiabatic speed is 1\ng2E (si (t))\ndsi (t)\ndt = εi(t), 0 ≤ t ≤ τ1 (19)\nWe now consider a new tf = τ2. We use the same suggested schedules with a scaling factor a = τ1τ2 :\nsscaledi (t) = si (at) (20)\nIt is clear that sscaled1 (t = 0) = 0 and s scaled 1 (t = τ2) = s1 ( τ1 τ2 τ2 ) = s1 (t = τ1) = 1, and the same is true for sscaled2 (t). We calculate the new derivative\ndsscaledi (t)\ndt = a\ndsi (at)\ndt (21)\nBy multiplying Eq. 
19 by factor a we can get for the new time axis 0 ≤ t ≤ τ2\na 1\ng2E (si (at))\ndsi (at)\ndt = a · εi(t) (22)\nthen, we can switch to the scaled schedules and finally\n1 g2E ( sscaledi (t) ) dsscaledi (t) dt = a · εi(t) = εscaledi (t) (23)\nWe now consider the fidelity for each evolution schedule. According to the adiabatic theorem, the fidelity achieved at the end of the adiabatic evolution for each schedule is dependent solely on the local adiabatic speeds εi (t). The resulting fidelity for the full path is then bounded by some functional F : L2 → R which transforms all of the local adiabatic speeds to a single number.\npi ≥ 1−F ( ε2i (t) ) (24)\nFollowing Roland & Cerf (2002), we assume a global maximum value F [f (t)] = max (f (t)) (25) pi ≥ 1−max ( ε2i (t) ) (26) it is clear that for this choice of F , F [af (t)] = aF [f (t)] (27)\nFor any positive scalar a. It follows that the new values for fidelity for the scaled schedules are bounded by pnewi ≥ 1−F ( a2ε2i (t) ) = 1− a2F ( ε2i (t) ) (28)\nWe assumed p1 > p2, so F ( ε21(t) ) < F ( ε22(t) ) , and for any a it remains true that\na2F ( ε21(t) ) < a2F ( ε22(t) ) (29)\nand therefore pnew1 ≥ pnew2 (30) We note that this holds true for many choices for F [f (t)], as long as F [af (t)] = q (a)F [f (t)] (31)\nfor some monotonically increasing function q." }, { "heading": "C SOLVING THE SCHRÖDINGER EQUATION FOR THE ADIABATIC EVOLUTION", "text": "It is possibly numerically integrate and solve differential equation in Eq. 9, using the explicit evolution HamiltonianH(s) for every 0 < s < 1, and the boundary condition |ψ(t = 0)〉 = |E0(s = 0)〉, where |E0(s = 0)〉 is the known ground state of the initial Hamiltonian Hb. This first order differential equation, could be solved numerically to obtain |ψ(t = tf )〉 in the following fashion:\n1. Divide the time axis to M slices 1..M\n2. For every time slice, findHm = H(s(t = tfM ·m)) 3. Calculate the eigenvalue decomposition ofHm: eigenvectors Vi and eigenvalues Ei 4. Find the projection of the last quantum state |ψm−1〉 onto the eigenvectors space\n|ψm−1〉 = N∑ i=1 aiVi (32)\nai = 〈Vi, ψi−1〉 (33)\n5. Evolve the quantum state according to\n|ψm〉 = N∑ i=1 eiEi· tf M · ai · Vi (34)\n6. Repeat steps 2-5 until reaching tf" }, { "heading": "D ADDITIONAL GROVER SEARCH RESULTS", "text": "The results of Grover search for n = 8 qubits, for our model, as well as the method of Lin et al. (2020) and other baselines are presented in Fig. 6.\nD.1 n′ > n EXPERIMENTS FOR THE n = 16 MODEL\nTo demonstrate our approach’s ability to employ a model of a certain size for larger problems, we present result for sizes n′ = 17, 18, 19, 20 using the model trained for n = 16.\nThe results are shown in Fig. 7. Our predicted schedule greatly outperforms the baseline linear schedule, with even greater advantage for larger problem sizes. We also compare to the same Hamiltonian model, trained with the MSE loss. As can be seen, this model outperforms the linear model, but is less effective than the model trained with L." }, { "heading": "E ALTERNATIVE TRAINING SETS", "text": "The training set of the Hamiltonian model contains 10,000 random QUBO problems and 500 random Hamiltonian problems, see Sec. 3.4. Fig. 8 depicts the effect of training on the first group only, i.e., on the Hamiltonian forms for the QUBO problems. This is shown for both the 3SAT problem and the Grover problem. As can be seen, there is a relatively small drop in performance for the 3SAT problems and a signifcant one for the Grover problem. 
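Returning to the numerical propagation procedure of Appendix C, its six steps can be sketched compactly as follows. The number of time slices, the phase convention exp(−iEΔt), and all names are assumptions made for this illustration; hb and hp are dense Hermitian matrices here.

```python
import numpy as np

def simulate_aqc(hb, hp, schedule, tf, num_slices=1000):
    """Piecewise-constant propagation of the Schrodinger equation (Appendix C).

    schedule : callable mapping t in [0, tf] to s in [0, 1]
    Returns the final state and the fidelity |<E0(s=1)|psi(tf)>|^2 (Eq. 8)."""
    dt = tf / num_slices
    # Boundary condition: start in the ground state of the initial Hamiltonian Hb.
    psi = np.linalg.eigh(hb)[1][:, 0].astype(complex)
    for m in range(1, num_slices + 1):
        s = schedule(m * dt)
        h = (1.0 - s) * hb + s * hp                        # step 2: Hamiltonian of slice m
        evals, evecs = np.linalg.eigh(h)                   # step 3: eigendecomposition
        amps = evecs.conj().T @ psi                        # step 4: project onto eigenbasis
        psi = evecs @ (np.exp(-1j * evals * dt) * amps)    # step 5: evolve each eigencomponent
    ground_final = np.linalg.eigh(hp)[1][:, 0]             # ground state of Hp
    fidelity = np.abs(ground_final.conj() @ psi) ** 2
    return psi, fidelity
```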
Note that in both cases, we cannot compare to the QUBO model. For the 3SAT problem, there is a polynomial overhead in size, when using the QUBO form (Glover et al., 2018). For the Grover problem, the QUBO problem is undefined.\nIn another set of experiments, we varied the size of the training dataset. In addition to the 10000+500 samples (of the two types mentioned above), we employed sets of size 1000+62, 2500+125, and 5000+250. Fig. 9 presents the results for 3SAT problems. Evidently, adding more training samples helps. However, there is, as expected, a diminishing returns effect. Note that the 3SAT problem is not captured by neither the random Hamiltonians nor by the random QUBO problems. Therefore, the success on these instances indicates a generalization capability." }, { "heading": "F COMPARING THE ALTERNATIVE LOSS TERMS", "text": "In this work, a novel loss function was presented, that allowed training neural networks with better performance than standard losses. The suggested loss function is justified by our derivation in Sec. 3.3. It is further supported by all experiments conducted and for both the Hamiltonian and the QUBO networks.\nTo visually demonstrate the advantage of our loss function, we present a specific example. We consider for a single 3SAT problem the optimal path and three variants of it. In the first, we add random noise to s(t). in the second, we shift the optimal path by a constant. The third variant adds a linear function of t to it. We also consider the path that was obtained by employing L or MSE, see Fig. 10(a).\nAs can be seen in Fig. 10(b), the best path is the optimal one, followed by the path of our full method, our method with MSE, and the optimal path with the added linear factor. As can be seen in Tab. 1,\nour loss is predictive of the success probability, while the MSE is less so. Specifically, the MSE loss assigns a relatively low loss to the optimal path with the added Gaussian noise, while our method predicts that it would result in a low success probability.\nTo generalize this sample, we compare the ability of the two loss terms to identify the path that would obtain a fidelity of 0.8 faster. This discrimination ability is visualized via a receiver operating characteristic (ROC) curve.\nA set of 250 test problems, randomized in a similar fashion to the training dataset of QUBO problems with n = 8, was generated and its optimal path s∗ was computed. For each problem, two possible schedules, s1 and s2, were randomized. Following Lin et al. (2020), who showed that the Fourier spectrum is an effective representation for paths, we sample the coefficients of the paths in the Fourier domain.\nFor each loss, we compute the score of the two paths with respect to the optimal path, and compute the ratio of the score associated with s1 and the one associated with s2 . For our loss, this is given as L(d1,s\n∗) L(d2,s∗) , where d1, d2 are the difference vectors obtained form the paths s1, s2, respectively. We simulate both paths, and assign a label of 1 if s1 leads to the probability threshold on 0.8 faster than s2, 0 otherwise.\nWe compare the resulting ROC curve for L and for the MSE loss in Fig. 11. It is evident that the suggested loss function is more discriminative of better paths than the the MSE loss." }, { "heading": "G TRAINING DYNAMICS", "text": "In Fig. 12, we present the evolution of the training and validation losses during model training. This is shown both for L and for the MSE loss for the Hamiltonian model of size n = 10." } ]
2021
FIDELITY-BASED DEEP ADIABATIC SCHEDULING
SP:13fb6d0e4b208c11e5d58df1afac2921c02be269
[ "The paper builds upon previous lines of research on multi-task learning problem, such as conditional latent variable models including the Neural Process. As shown by the extensive Related Work section, this seems to be an active research direction. This makes it difficult for me to judge originality and significance, but it is well-written and clear." ]
Formulating scalable probabilistic regression models with reliable uncertainty estimates has been a long-standing challenge in machine learning research. Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has shown promising results. In this paper, we focus on context aggregation, a central component of such architectures, which fuses information from multiple context data points. So far, this aggregation operation has been treated separately from the inference of a latent representation of the target function in CLV models. Our key contribution is to combine these steps into one holistic mechanism by phrasing context aggregation as a Bayesian inference problem. The resulting Bayesian Aggregation (BA) mechanism enables principled handling of task ambiguity, which is key for efficiently processing context information. We demonstrate on a range of challenging experiments that BA consistently improves upon the performance of traditional mean aggregation while remaining computationally efficient and fully compatible with existing NP-based models.
[ { "affiliations": [], "name": "Michael Volpp" }, { "affiliations": [], "name": "Fabian Flürenbrock" }, { "affiliations": [], "name": "Lukas Grossberger" }, { "affiliations": [], "name": "Christian Daniel" }, { "affiliations": [], "name": "Gerhard Neumann" } ]
[ { "authors": [ "Takuya Akiba", "Shotaro Sano", "Toshihiko Yanase", "Takeru Ohta", "Masanori Koyama" ], "title": "Optuna: A Next-generation Hyperparameter Optimization Framework", "venue": null, "year": 2019 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez Colmenarejo", "Matthew W. Hoffman", "David Pfau", "Tom Schaul", "Nando de Freitas" ], "title": "Learning to Learn by Gradient Descent by Gradient Descent", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Bart Bakker", "Tom Heskes" ], "title": "Task Clustering and Gating for Bayesian Multitask Learning", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Rémi Bardenet", "Mátyás Brendel", "Balázs Kégl", "Michèle Sebag" ], "title": "Collaborative Hyperparameter Tuning", "venue": "International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Philipp Becker", "Harit Pandya", "Gregor H.W. Gebhardt", "Cheng Zhao", "C. James Taylor", "Gerhard Neumann" ], "title": "Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Samy Bengio", "Jocelyn Cloutier" ], "title": "Learning a Synaptic Learning", "venue": "Rule. International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "Christopher M. Bishop" ], "title": "Pattern Recognition and Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "R. Calandra", "J. Peters", "C.E. Rasmussen", "M.P. Deisenroth" ], "title": "Manifold Gaussian Processes for Regression", "venue": "International Joint Conference on Neural Networks,", "year": 2016 }, { "authors": [ "Benjamin Seth Cazzolato", "Zebb Prime" ], "title": "On the Dynamics of the Furuta Pendulum", "venue": "Journal of Control Science and Engineering,", "year": 2011 }, { "authors": [ "Yutian Chen", "Matthew W. Hoffman", "Sergio Gómez Colmenarejo", "Misha Denil", "Timothy P. Lillicrap", "Matt Botvinick", "Nando de Freitas" ], "title": "Learning to Learn without Gradient Descent by Gradient Descent", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Andreas Damianou", "Neil Lawrence" ], "title": "Deep Gaussian Processes", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2013 }, { "authors": [ "MP. Deisenroth", "CE. Rasmussen" ], "title": "PILCO: A Model-Based and Data-Efficient Approach to Policy Search", "venue": "International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Harrison A. Edwards", "Amos J. Storkey" ], "title": "Towards a Neural Statistician", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Li Fei-Fei", "R. Fergus", "P. Perona" ], "title": "One-shot Learning of Object Categories", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2006 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic Model-Agnostic Meta-Learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "K. Furuta", "M. Yamakita", "S. 
Kobayashi" ], "title": "Swing-up Control of Inverted Pendulum Using PseudoState Feedback", "venue": "Journal of Systems and Control Engineering,", "year": 1992 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Daniel Golovin", "Benjamin Solnik", "Subhodeep Moitra", "Greg Kochanski", "John Karro", "D. Sculley" ], "title": "Google Vizier: A Service for Black-Box Optimization", "venue": "International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Jonathan Gordon", "John Bronskill", "Matthias Bauer", "Sebastian Nowozin", "Richard E. Turner" ], "title": "Meta-Learning Probabilistic Inference for Prediction", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas L. Griffiths" ], "title": "Recasting Gradient-Based Meta-Learning as Hierarchical Bayes", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tom Heskes" ], "title": "Empirical Bayes for Learning to Learn", "venue": "International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Geoffrey E. Hinton", "Russ R. Salakhutdinov" ], "title": "Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes", "venue": "Advances in Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Geoffrey E. Hinton", "Drew van Camp" ], "title": "Keeping the Neural Networks Simple by Minimizing the Description Length of the Weights", "venue": "Annual Conference on Computational Learning Theory,", "year": 1993 }, { "authors": [ "Sepp Hochreiter", "A. Steven Younger", "Peter R. Conwell" ], "title": "Learning to Learn Using Gradient Descent", "venue": "International Conference on Artificial Neural Networks,", "year": 2001 }, { "authors": [ "Taesup Kim", "Jaesik Yoon", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian Model-Agnostic Meta-Learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese Neural Networks for One-shot Image Recognition", "venue": "International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Neil Lawrence", "Matthias Seeger", "Ralf Herbrich" ], "title": "Fast Sparse Gaussian Process Methods: The Informative Vector Machine", "venue": "Advances in Neural Information Processing Systems,", "year": 2002 }, { "authors": [ "M. Lazaro-Gredilla", "A.R. 
Figueiras-Vidal" ], "title": "Marginalized Neural Network Mixtures for LargeScale Regression", "venue": "IEEE Transactions on Neural Networks,", "year": 2010 }, { "authors": [ "Tuan Anh Le", "Hyunjik Kim", "Marta Garnelo" ], "title": "Empirical Evaluation of Neural Process Objectives", "venue": "Third Workshop on Bayesian Deep Learning,", "year": 2018 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Learning to Optimize", "venue": "International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative Normalizing Flows for Variational Bayesian Neural Networks", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Christos Louizos", "Xiahan Shi", "Klamer Schutte", "M. Welling" ], "title": "The Functional Neural Process", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David J.C. MacKay" ], "title": "A Practical Bayesian Framework for Backpropagation Networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Radford M. Neal" ], "title": "Bayesian Learning for", "venue": "Neural Networks. Springer-Verlag,", "year": 1996 }, { "authors": [ "Valerio Perrone", "Rodolphe Jenatton", "Matthias W Seeger", "Cedric Archambeau" ], "title": "Scalable Hyperparameter Transfer Learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Joaquin Quiñonero-Candela", "Carl Edward Rasmussen" ], "title": "A Unifying View of Sparse Approximate Gaussian Process Regression", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Carl Edward Rasmussen", "Christopher K.I. Williams" ], "title": "Gaussian Processes for Machine Learning", "venue": null, "year": 2005 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a Model for Few-Shot Learning", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Andrei A. Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-Learning with Latent Embedding Optimization", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew M. Botvinick", "Daan Wierstra", "Timothy P. Lillicrap" ], "title": "Meta-Learning with Memory-Augmented", "venue": "Neural Networks. International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary Principles in Self-Referential Learning. On Learning how to Learn: The Meta-Meta-Meta...-Hook", "venue": "Diploma Thesis, Technische Universitat München,", "year": 1987 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Alex J. Smola", "Peter L. 
Bartlett" ], "title": "Sparse Greedy Gaussian Process Regression", "venue": "Advances in Neural Information Processing Systems,", "year": 2001 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical Networks for Few-shot Learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Edward Snelson", "Zoubin Ghahramani" ], "title": "Sparse Gaussian Processes Using Pseudo-Inputs", "venue": "International Conference on Neural Information Processing Systems,", "year": 2005 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P. Adams" ], "title": "Practical Bayesian Optimization of Machine Learning Algorithms", "venue": "Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning Structured Output Representation using Deep Conditional Generative Models", "venue": "Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip H.S. Torr", "Timothy M. Hospedales" ], "title": "Learning to Compare: Relation Network for Few-Shot Learning", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to Learn", "venue": "Kluwer Academic Publishers,", "year": 1998 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is All you Need", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ricardo Vilalta", "Youssef Drissi" ], "title": "A Perspective View and Survey of Meta-Learning", "venue": "Artificial Intelligence Review,", "year": 2005 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching Networks for One Shot Learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Michael Volpp", "Lukas P. Fröhlich", "Kirsten Fischer", "Andreas Doerr", "Stefan Falkner", "Frank Hutter", "Christian Daniel" ], "title": "Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Christopher K.I. Williams" ], "title": "Computing with Infinite Networks", "venue": "Advances in Neural Information Processing Systems,", "year": 1996 }, { "authors": [ "Dani Yogatama", "Gideon Mann" ], "title": "Efficient Transfer Learning Method for Automatic Hyperparameter Tuning", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Estimating statistical relationships between physical quantities from measured data is of central importance in all branches of science and engineering and devising powerful regression models for this purpose forms a major field of study in statistics and machine learning. When judging representative power, neural networks (NNs) are arguably the most prominent member of the regression toolbox. NNs cope well with large amounts of training data and are computationally efficient at test time. On the downside, standard NN variants do not provide uncertainty estimates over their predictions and tend to overfit on small datasets. Gaussian processes (GPs) may be viewed as complementary to NNs as they provide reliable uncertainty estimates but their cubic (quadratic) scaling with the number of context data points at training (test) time in their basic formulation affects the application on tasks with large amounts of data or on high-dimensional problems.\nRecently, a lot of interest in the scientific community is drawn to combinations of aspects of NNs and GPs. Indeed, a prominent formulation of probabilistic regression is as a multi-task learning problem formalized in terms of amortized inference in conditional latent variable (CLV) models, which results in NN-based architectures which learn a distribution over target functions. Notable variants are given by the Neural Process (NP) (Garnelo et al., 2018b) and the work of Gordon et al. (2019), which presents a unifying view on a range of related approaches in the language of CLV models.\nInspired by this research, we study context aggregation, a central component of such models, and propose a new, fully Bayesian, aggregation mechanism for CLV-based probabilistic regression models.\n∗Correspondence to: Michael.Volpp@de.bosch.com\nTo transform the information contained in the context data into a latent representation of the target function, current approaches typically employ a mean aggregator and feed the output of this aggregator into a NN to predict a distribution over global latent parameters of the function. Hence, aggregation and latent parameter inference have so far been treated as separate parts of the learning pipeline. Moreover, when using a mean aggregator, every context sample is assumed to carry the same amount of information. Yet, in practice, different input locations have different task ambiguity and, therefore, samples should be assigned different importance in the aggregation process. In contrast, our Bayesian aggregation mechanism treats context aggregation and latent parameter inference as one holistic mechanism, i.e., the aggregation directly yields the distribution over the latent parameters of the target function. Indeed, we formulate context aggregation as Bayesian inference of latent parameters using Gaussian conditioning in the latent space. Compared to existing methods, the resulting aggregator improves the handling of task ambiguity, as it can assign different variance levels to the context samples. This mechanism improves predictive performance, while it remains conceptually simple and introduces only negligible computational overhead. 
Moreover, our Bayesian aggregator can also be applied to deterministic model variants like the Conditional NP (CNP) (Garnelo et al., 2018a).\nIn summary, our contributions are (i) a novel Bayesian Aggregation (BA) mechanism for context aggregation in NP-based models for probabilistic regression, (ii) its application to existing CLV architectures as well as to deterministic variants like the CNP, and (iii) an exhaustive experimental evaluation, demonstrating BA’s superiority over traditional mean aggregation." }, { "heading": "2 RELATED WORK", "text": "Prominent approaches to probabilistic regression are Bayesian linear regression and its kernelized counterpart, the Gaussian process (GP) (Rasmussen and Williams, 2005). The formal correspondence of GPs with infinite-width Bayesian NNs (BNNs) has been established in Neal (1996) and Williams (1996). A broad range of research aims to overcome the cubic scaling behaviour of GPs with the number of context points, e.g., through sparse GP approximations (Smola and Bartlett, 2001; Lawrence et al., 2002; Snelson and Ghahramani, 2005; Quiñonero-Candela and Rasmussen, 2005), by deep kernel learning (Wilson et al., 2016), by approximating the posterior distribution of BNNs (MacKay, 1992; Hinton and van Camp, 1993; Gal and Ghahramani, 2016; Louizos and Welling, 2017), or, by adaptive Bayesian linear regression, i.e., by performing inference over the last layer of a NN which introduces sparsity through linear combinations of finitely many learned basis functions (Lazaro-Gredilla and Figueiras-Vidal, 2010; Hinton and Salakhutdinov, 2008; Snoek et al., 2012; Calandra et al., 2016). An in a sense complementary approach aims to increase the data-efficiency of deep architectures by a fully Bayesian treatment of hierarchical latent variable models (“DeepGPs”) (Damianou and Lawrence, 2013).\nA parallel line of research studies probabilistic regression in the multi-task setting. Here, the goal is to formulate models which are data-efficient on an unseen target task by training them on data from a set of related source tasks. Bardenet et al. (2013); Yogatama and Mann (2014), and Golovin et al. (2017) study multi-task formulations of GP-based models. More general approaches of this kind employ the meta-learning framework (Schmidhuber, 1987; Thrun and Pratt, 1998; Vilalta and Drissi, 2005), where a model’s training procedure is formulated in a way which incentivizes it to learn how to solve unseen tasks rapidly with only a few context examples (“learning to learn”, “few-shot learning” (Fei-Fei et al., 2006; Lake et al., 2011)). A range of such methods trains a meta-learner to learn how to adjust the parameters of the learner’s model (Bengio et al., 1991; Schmidhuber, 1992), an approach which has recently been applied to few-shot image classification (Ravi and Larochelle, 2017), or to learning data-efficient optimization algorithms (Hochreiter et al., 2001; Li and Malik, 2016; Andrychowicz et al., 2016; Chen et al., 2017; Perrone et al., 2018; Volpp et al., 2019). Other branches of meta-learning research aim to learn similarity metrics to determine the relevance of context samples for the target task (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2017), or explore the application of memory-augmented neural networks for meta-learning (Santoro et al., 2016). Finn et al. 
(2017) propose model-agnostic meta-learning (MAML), a general framework for fast parameter adaptation in gradient-based learning methods.\nA successful formulation of probabilistic regression as a few-shot learning problem in a multi-task setting is enabled by recent advances in the area of probabilistic meta-learning methods which allow a quantitative treatment of the uncertainty arising due to task ambiguity, a feature particularly\nrelevant for few-shot learning problems. One line of work specifically studies probabilistic extensions of MAML (Grant et al., 2018; Ravi and Larochelle, 2017; Rusu et al., 2018; Finn et al., 2018; Kim et al., 2018). Further important approaches are based on amortized inference in multi-task CLV models (Heskes, 2000; Bakker and Heskes, 2003; Kingma and Welling, 2013; Rezende et al., 2014; Sohn et al., 2015), which forms the basis of the Neural Statistician proposed by Edwards and Storkey (2017) and of the NP model family (Garnelo et al., 2018b; Kim et al., 2019; Louizos et al., 2019). Gordon et al. (2019) present a unifying view on many of the aforementioned probabilistic architectures. Building on the conditional NPs (CNPs) proposed by Garnelo et al. (2018a), a range of NP-based architectures, such as Garnelo et al. (2018b) and Kim et al. (2019), consider combinations of deterministic and CLV model architectures. Recently, Gordon et al. (2020) extended CNPs to include translation equivariance in the input space, yielding state-of-the-art predictive performance.\nIn this paper, we also employ a formulation of probabilistic regression in terms of a multi-task CLV model. However, while in previous work the context aggregation mechanism (Zaheer et al., 2017; Wagstaff et al., 2019) was merely viewed as a necessity to consume context sets of variable size, we take inspiration from Becker et al. (2019) and emphasize the fundamental connection of latent parameter inference with context aggregation and, hence, base our model on a novel Bayesian aggregation mechanism." }, { "heading": "3 PRELIMINARIES", "text": "We present the standard multi-task CLV model which forms the basis for our discussion and present traditional mean context aggregation (MA) and the variational inference (VI) likelihood approximation as employed by the NP model family (Garnelo et al., 2018a; Kim et al., 2019), as well as an alternative Monte Carlo (MC)-based approximation.\nProblem Statement. We frame probabilistic regression as a multi-task learning problem. Let F denote a family of functions f` : Rdx → Rdy with some form of shared statistical structure.\nWe assume to have available data sets D` ≡ {(x`,i, y`,i)}i of evaluations y`,i ≡ f`(x`,i) + ε from a subset of functions (“tasks”) {f`}L`=1 ⊂ F with additive Gaussian noise ε ∼ N ( 0, σ2n ) . From this data, we aim to learn the posterior predictive distribution p (y`|x`,Dc`) over a (set of) y`, given the corresponding (set of) inputs x` as well as a context set Dc` ⊂ D`.\nThe Multi-Task CLV Model. We formalize the multitask learning problem in terms of a CLV model (Heskes, 2000; Gordon et al., 2019) as shown in Fig. 1. The model employs task-specific global latent variables z` ∈ Rdz , as well as a task-independent latent variable θ, capturing the statistical structure shared between tasks. 
To learn θ, we split the data into context sets Dc` ≡ {(xc`,n, yc`,n)}N`n=1 and target sets Dt` ≡ {(xt`,m, yt`,m)}M`m=1 and maximize the posterior predictive likelihood function\nL∏ `=1 p ( yt`,1:M` ∣∣xt`,1:M` ,Dc` , θ) = L∏ `=1 ∫ p (z` | Dc` , θ) M∏̀ m=1 p ( yt`,m ∣∣ z`, xt`,m, θ)dz` (1) w.r.t. θ. In what follows, we omit task indices ` to avoid clutter.\nLikelihood Approximation. Marginalizing over the task-specific latent variables z is intractable for reasonably complex models, so one has to employ some form of approximation. The NP-family of models (Garnelo et al., 2018b; Kim et al., 2019) uses an approximation of the form\nlog p ( yt1:M ∣∣xt1:M ,Dc, θ) ' Eqφ( z|Dc∪Dt) [ M∑ m=1 log p ( ytm ∣∣ z, xtm, θ)+ log qφ (z| Dc)qφ (z| Dc ∪ Dt) ] .\n(2)\nBeing derived using a variational approach, this approximation utilizes an approximate posterior distribution qφ (z| Dc) ≈ p (z| Dc, θ). Note, however, that it does not constitute a proper evidence lower bound for the posterior predictive likelihood since the intractable latent posterior p (z| Dc, θ) has been replaced by qφ (z| Dc) in the nominator of the rightmost term (Le et al., 2018). An alternative approximation, employed for instance in Gordon et al. (2019), also replaces the intractable latent posterior distribution by an approximate distribution qφ (z| Dc) ≈ p (z| Dc, θ) and uses a Monte-Carlo (MC) approximation of the resulting integral based on K latent samples, i.e.,\nlog p ( yt1:M ∣∣xt1:M ,Dc, θ) ≈ − logK + log K∑ k=1 M∏ m=1 p ( ytm ∣∣ zk, xtm, θ) , zk ∼ qφ (z| Dc) . (3) Note that both approaches employ approximations qφ (z| Dc) of the latent posterior distribution p (z| Dc, θ) and, as indicated by the notation, amortize inference in the sense that one single set of parameters φ is shared between all context data points. This enables efficient inference at test time, as no per-data-point optimization loops are required. As is standard in the literature (Garnelo et al., 2018b; Kim et al., 2019), we represent qφ (z| Dc) and p (ytm|z, xtm, θ) by NNs and refer to them as the encoder (enc, parameters φ) and decoder (dec, parameters θ) networks, respectively. These networks set the means and variances of factorized Gaussian distributions, i.e.,\nqφ (z| Dc) = N ( z|µz, diag ( σ2z )) , µz = encµz,φ (Dc) , σ2z = encσ2z,φ (D c) , (4)\np ( ytm ∣∣ z, xtm, θ) = N (ytm∣∣µy, diag (σ2y)) , µy = decµy,θ (z, xtm) , σ2y = decσ2y,θ (z, xtm) . (5)\nContext Aggregation. The latent variable z is global in the sense that it depends on the whole context set Dc. Therefore, some form of aggregation mechanism is required to enable the encoder to consume context sets Dc of variable size. To represent a meaningful operation on sets, such an aggregation mechanism has to be invariant to permutations of the context data points. Zaheer et al. (2017) characterize possible aggregation mechanisms w.r.t. this permutation invariance condition, resulting in the structure of traditional aggregation mechanisms depicted in Fig. 2(a). Each context data tuple (xcn, y c n) is first mapped onto a latent observation rn = encr,φ (x c n, y c n) ∈ Rdr . Then, a permutation-invariant operation is applied to the set {rn}Nn=1 to obtain an aggregated latent observation r̄. One prominent choice, employed for instance in Garnelo et al. (2018a), Kim et al. (2019), and Gordon et al. (2019), is to take the mean, i.e.,\nr̄ = 1\nN N∑ n=1 rn. 
(6)\nSubsequently, r̄ is mapped onto the parameters µz and σ2z of the approximate posterior distribution qφ (z| Dc) using additional encoder networks, i.e., µz = encµz,φ (r̄) and σ2z = encσ2z,φ (r̄). Note that three encoder networks are employed here: (i) encr,φ to map from the context pairs to rn, (ii) encµz,φ to compute µz from the aggregated mean r̄ and (iii) encσ2z,φ to compute the variance σ 2 z from r̄. In what follows, we refer to this aggregation mechanism as mean aggregation (MA) and to the networks encµz,φ and encσ2z,φ collectively as “r̄-to-z-networks”." }, { "heading": "4 BAYESIAN CONTEXT AGGREGATION", "text": "We propose Bayesian Aggregation (BA), a novel context data aggregation technique for CLV models which avoids the detour via an aggregated latent observation r̄ and directly treats the object of interest, namely the latent variable z, as the aggregated quantity. This reflects a central observation for CLV models with global latent variables: context data aggregation and hidden parameter inference are fundamentally the same mechanism. Our key insight is to define a probabilistic observation model p(r|z) for r which depends on z. Given a new latent observation rn = encr,φ(xcn, ycn), we can update p(z) by computing the posterior p(z|rn) = p(rn|z)p(z)/p(rn). Hence, by formulating context data aggregation as a Bayesian inference problem, we aggregate the information contained in Dc directly into the statistical description of z based on first principles." }, { "heading": "4.1 BAYESIAN CONTEXT AGGREGATION VIA GAUSSIAN CONDITIONING", "text": "BA can easily be implemented using a factorized Gaussian observation model of the form p (rn| z) = N ( rn| z, diag(σ2rn) ) , rn = encr,φ (xcn, y c n) , σ 2 rn = encσ2r ,φ (x c n, y c n) . (7)\nNote that, in contrast to standard variational auto-encoders (VAEs) (Kingma and Welling, 2013), we do not learn the mean and variance of a Gaussian distribution, but we learn the latent observation rn (which can be considered as a sample of p(z)) together with the variance σ2rn of this observation. This architecture allows the application of Gaussian conditioning while this is difficult for VAEs. Indeed, we impose a factorized Gaussian prior p0 (z) ≡ N ( z|µz,0, diag ( σ2z,0 )) and arrive at a Gaussian aggregation model which allows to derive the parameters of the posterior distribution qφ (z| Dc) in closed form1 (cf. App. 7.1):\nσ2z = [( σ2z,0 ) + N∑ n=1 ( σ2rn ) ] , µz = µz,0 + σ 2 z N∑ n=1 (rn − µz,0) ( σ2rn ) . (8)\nHere , and denote element-wise inversion, product, and division, respectively. These equations naturally lend themselves to efficient incremental updates as new context data (xcn, y c n) arrives by using the current posterior parameters µz,old and σ2z,old in place of the prior parameters, i.e.,\nσ2z,new = [( σ2z,old ) + ( σ2rn ) ] , µz = µz,old + σ 2 z,new (rn − µz,old ) ( σ2rn ) . (9)\nBA employs two encoder networks, encr,φ and encσ2r ,φ, mapping context tuples to latent observations and their variances, respectively. In contrast to MA, it does not require r̄-to-z-networks, because the set {rn}Nn=1 is aggregated directly into the statistical description of z by means of Eq. (8), cf. Fig. 2(b). Note that our factorization assumptions avoid the expensive matrix inversions that typically occur in Gaussian conditioning and which are difficult to backpropagate. Using factorized distributions renders BA cheap to evaluate with only marginal computational overhead in comparison to MA. 
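To make the factorized Gaussian conditioning above concrete, the following NumPy sketch implements the batch update of Eq. (8) and the incremental update of Eq. (9). The encoder outputs r_n and their variances are replaced by random stand-ins, and all function and variable names are illustrative rather than taken from the released code; the final assertion simply checks that folding in the context points one at a time reproduces the batch posterior.

```python
import numpy as np


def ba_batch_update(r, sigma2_r, mu_z0, sigma2_z0):
    """Batch Bayesian aggregation (Eq. (8)): condition the factorized Gaussian
    prior N(mu_z0, diag(sigma2_z0)) on N latent observations r_n with
    observation variances sigma2_r_n."""
    # r, sigma2_r: (N, d_z); mu_z0, sigma2_z0: (d_z,)
    precision = 1.0 / sigma2_z0 + np.sum(1.0 / sigma2_r, axis=0)
    sigma2_z = 1.0 / precision
    mu_z = mu_z0 + sigma2_z * np.sum((r - mu_z0) / sigma2_r, axis=0)
    return mu_z, sigma2_z


def ba_incremental_update(r_n, sigma2_rn, mu_z_old, sigma2_z_old):
    """Incremental update (Eq. (9)): fold a single new latent observation into
    the current posterior without revisiting earlier context points."""
    sigma2_z_new = 1.0 / (1.0 / sigma2_z_old + 1.0 / sigma2_rn)
    mu_z_new = mu_z_old + sigma2_z_new * (r_n - mu_z_old) / sigma2_rn
    return mu_z_new, sigma2_z_new


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_z, N = 4, 8
    mu_z0, sigma2_z0 = np.zeros(d_z), np.ones(d_z)   # prior p_0(z)
    r = rng.normal(size=(N, d_z))                    # stand-in encoder means
    sigma2_r = np.exp(rng.normal(size=(N, d_z)))     # stand-in encoder variances
    mu_b, var_b = ba_batch_update(r, sigma2_r, mu_z0, sigma2_z0)

    # Processing the same context one tuple at a time yields the same posterior.
    mu_i, var_i = mu_z0.copy(), sigma2_z0.copy()
    for n in range(N):
        mu_i, var_i = ba_incremental_update(r[n], sigma2_r[n], mu_i, var_i)
    assert np.allclose(mu_b, mu_i) and np.allclose(var_b, var_i)
```

The equivalence of the two code paths is exactly what makes streaming context data cheap, which is the property highlighted around Eq. (9).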
Furthermore, we can easily backpropagate through BA to compute gradients to optimize the parameters of the encoder and decoder networks. As the latent space z is shaped by the encoder network, the factorization assumptions are valid because the network will find a space where these assumptions work well. Note further that BA represents a permutation-invariant operation on Dc.\nDiscussion. BA includes MA as a special case. Indeed, Eq. (8) reduces to the mean-aggregated latent observation Eq. (6) if we impose a non-informative prior and uniform observation variances σ2rn ≡ 1.2 This observation sheds light on the benefits of a Bayesian treatment of aggregation. MA assigns the same weight 1/N to each latent observation rn, independent of the amount of information contained in the corresponding context data tuple (xcn, y c n), as well as independent of the uncertainty about the current estimation of z. Bayesian aggregation remedies both of these limitations: the influence of rn on the parameters µz,old and σ2z,old describing the current aggregated state is determined by the relative magnitude of the observation variance σ2rn and the latent variance\n1Note that an extended observation model of the form p (rn| z) = N ( rn| z + µrn , diag(σ2rn) ) , with µrn given by a third encoder output, does not lead to a more expressive aggregation mechanism. Indeed, the resulting posterior variances would stay unchanged and the posterior mean would read µz = µz,0 + σ2z ∑N\nn=1 (rn − µrn − µz,0) ( σ2rn ) . Therefore, we would just subtract two distinct encoder outputs computed from the same inputs, resulting in exactly the same expressivity, which is why we set µrn ≡ 0. 2As motivated above, we consider r̄ as the aggregated quantity of MA and the distribution over z, described by µz and σ2z , as the aggregated quantity of BA. Note that Eq. (8) does not necessarily generalize µz and σ2z after nonlinear r̄-to-z-networks.\nσ2z,old, cf. Eq. (9). This emphasizes the central role of the learned observation variances σ 2 rn : they allow to quantify the amount of information contained in each latent observation rn. BA can therefore handle task ambiguity more efficiently than MA, as the architecture can learn to assign little weight (by predicting high observation variances σ2rn) to context points (x c n, y c n) located in areas with high task ambiguity, i.e., to points which could have been generated by many of the functions in F . Conversely, in areas with little task ambiguity, i.e., if (xcn, y c n) contains a lot of information about the underlying function, BA can induce a strong influence on the posterior latent distribution. In contrast, MA has to find ways to propagate such information through the aggregation mechanism by encoding it in the mean-aggregated latent observation r̄." }, { "heading": "4.2 LIKELIHOOD APPROXIMATION WITH BAYESIAN CONTEXT AGGREGATION", "text": "We show that BA is versatile in the sense that it can replace traditional MA in various CLV-based NP architectures as proposed, e.g., in Garnelo et al. (2018b) and Gordon et al. (2019), which employ samples from the approximate latent posterior qφ (z| Dc) to approximate the likelihood (as discussed in Sec. 3), as well as in deterministic variants like the CNP (Garnelo et al., 2018a).\nSampling-Based Likelihood Approximations. BA is naturally compatible with both the VI and MC likelihood approximations for CLV models. Indeed, BA defines a Gaussian latent distribution from which we can easily obtain samples z in order to evaluate Eq. (2) or Eq. 
(3) using the decoder parametrization Eq. (5).\nBayesian Context Aggregation for Conditional Neural Processes. BA motivates a novel, alternative, method to approximate the posterior predictive likelihood Eq. (1), resulting in a deterministic loss function which can be efficiently optimized for θ and φ in an end-to-end fashion. To this end, we employ a Gaussian approximation of the posterior predictive likelihood of the form\np ( yt1:M ∣∣xt1:M ,Dc, θ) ≈ N (yt1:M ∣∣µy,Σy) . (10) This is inspired by GPs which also define a Gaussian likelihood. Maximizing this expression yields the optimal solution µy = µ̃y, Σy = Σ̃y, with µ̃y and Σ̃y being the first and second moments of the true posterior predictive distribution. This is a well-known result known as moment matching, a popular variant of deterministic approximate inference used, e.g., in Deisenroth and Rasmussen (2011) and Becker et al. (2019). µ̃y and Σ̃y are functions of the moments µz and σ2z of the latent posterior p (z| Dc, θ) which motivates the following decoder parametrization:\nµy = decµy,θ ( µz, σ 2 z , x t m ) , σ2y = decσ2y,θ ( µz, σ 2 z , x t m ) , Σy = diag ( σ2y ) . (11)\nHere, µz and σ2z are given by the BA Eqs. (8). Note that we define the Gaussian approximation to be factorized w.r.t. individual ytm, an assumption which simplifies the architecture but could be dropped if a more expressive model was required. This decoder can be interpreted as a “moment matching network”, computing the moments of y given the moments of z. Indeed, in contrast to decoder networks of CLV-based NP architectures as defined in Eq. (5), it operates on the moments µz and σ2z of the latent distribution instead of on samples z which allows to evaluate this approximation in a deterministic manner. In this sense, the resulting model is akin to the CNP which defines a deterministic, conditional model with a decoder operating on the mean-aggregated latent observation r̄. However, BA-based models trained in this deterministic manner still benefit from BA’s ability to accurately quantify latent parameter uncertainty which yields significantly improved predictive likelihoods. In what follows, we refer to this approximation scheme as direct parameter-based (PB) likelihood optimization.\nDiscussion. The concrete choice of likelihood approximation or, equivalently, model architecture depends mainly on the intended use-case. Sampling-based models are generally more expressive as they can represent complex, i.e., structured, non-Gaussian, posterior predictive distributions. Moreover, they yield true function samples while deterministic models only allow approximate function samples through auto-regressive (AR) sampling schemes. Nevertheless, deterministic models exhibit several computational advantages. They yield direct probabilistic predictions in a single forward pass, while the predictions of sampling-based methods are only defined through averages over multiple function samples and hence require multiple forward passes. Likewise, evaluating the MC-based likelihood approximation Eq. (3) during training requires to draw multiple\n(K) latent samples z. While the VI likelihood approximation Eq. (2) can be optimized on a single function sample per training step through stochastic gradient descent (Bishop, 2006), it has the disadvantage that it requires to feed target sets Dt through the encoder which can impede the training for small context sets Dc as discussed in detail in App. 7.2." 
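The computational contrast drawn above can be made explicit with a small sketch: the parameter-based (PB) model needs a single decoder pass on the latent moments (Eq. (11)), whereas a sampling-based model averages over K decoder passes, one per latent sample. Both decoders below are toy placeholder functions, not the networks used in the paper, and the values of K, d_z, and the target input are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z = 4
mu_z, sigma2_z = np.zeros(d_z), 0.5 * np.ones(d_z)   # aggregated latent posterior
x_target = np.array([0.3])                           # one target input


def decoder_pb(mu_z, sigma2_z, x):
    """Toy stand-in for the moment-matching decoder of Eq. (11): it consumes
    the latent moments directly and returns a predictive mean and variance."""
    features = np.concatenate([mu_z, sigma2_z, x])
    mu_y = np.tanh(features).sum(keepdims=True)
    sigma2_y = np.full(1, 0.1 + sigma2_z.mean())
    return mu_y, sigma2_y


def decoder_sample(z, x):
    """Toy stand-in for the sample-conditioned decoder of Eq. (5)."""
    features = np.concatenate([z, x])
    return np.tanh(features).sum(keepdims=True), np.full(1, 0.1)


# Deterministic PB prediction: one decoder pass per target input.
mu_y_pb, sigma2_y_pb = decoder_pb(mu_z, sigma2_z, x_target)

# Sampling-based prediction (VI/MC style): K latent samples, hence K decoder
# passes per target input, with the prediction defined through their average.
K = 25
z_samples = mu_z + np.sqrt(sigma2_z) * rng.normal(size=(K, d_z))
mu_y_mc = np.mean([decoder_sample(z, x_target)[0] for z in z_samples], axis=0)
```

The K-fold number of decoder passes is the source of the runtime gap between the deterministic and the sampling-based variants discussed in the experiments.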
}, { "heading": "5 EXPERIMENTS", "text": "We present experiments to compare the performances of BA and of MA in NP-based models. To provide a complete picture, we evaluate all combinations of likelihood approximations (PB/deterministic Eq. (10), VI Eq. (2), MC Eq. (3)) and aggregation methods (BA Eq. (8), MA Eq. (6)), resulting in six different model architectures, cf. Fig. 4 in App. 7.5.2. Two of these architectures correspond to existing members of the NP family: MA + deterministic is equivalent to the CNP (Garnelo et al., 2018a), and MA + VI corresponds to the Latent-Path NP (LP-NP) (Garnelo et al., 2018b), i.e., the NP without a deterministic path. We further evaluate the Attentive Neural Process (ANP) (Kim et al., 2019), which employs a hybrid approach, combining LP-NP with a cross-attention mechanism in a parallel deterministic path3, as well as an NP-architecture using MA with a self-attentive (SA) encoder network. Note that BA can also be used in hybrid models like ANP or in combination with SA, an idea we leave for future research. In App. 7.4 we discuss NP-based regression in relation to other methods for (scalable) probabilistic regression.\nThe performance of NP-based models depends heavily on the encoder and decoder network architectures as well as on the latent space dimensionality dz . To assess the influence of the aggregation mechanism independently from all other confounding factors, we consistently optimize the encoder and decoder network architectures, the latent-space dimensionality dz , as well as the learning rate of the Adam optimizer (Kingma and Ba, 2015), independently for all model architectures and for all experiments using the Optuna (Akiba et al., 2019) framework, cf. App. 7.5.3. If not stated differently, we report performance in terms of the mean posterior predictive log-likelihood over 256 test tasks with 256 data points each, conditioned on context sets containing N ∈ {0, 1, . . . , Nmax} data points (cf. App. 7.5.4). For sampling-based methods (VI, MC, ANP), we report the joint log-likelihood over the test sets using a Monte-Carlo approximation with 25 latent samples, cf. App. 7.5.4. We average the resulting log-likelihood values over 10 training runs with different random seeds and report 95% confidence intervals. We publish source code to reproduce the experimental results online.4\nGP Samples. We evaluate the architectures on synthetic functions drawn from GP priors with different kernels (RBF, weakly periodic, Matern-5/2), as proposed by Gordon et al. (2020), cf. App. 7.5.1. We generate a new batch of functions for each training epoch. The results (Tab. 1) show that BA consistently outperforms MA, independent of the model architecture. In-\n3For ANP, we use original code from https://github.com/deepmind/neural-processes 4https://github.com/boschresearch/bayesian-context-aggregation\nterestingly, despite employing a factorized Gaussian approximation, our deterministic PB approximation performs at least on-par with the traditional VI approximation which tends to perform\nparticularly poorly for small context sets, reflecting the intricacies discussed in Sec. 4.2. As expected, the MC approximation yields the best results in terms of predictive performance, as it is more expressive than the deterministic approaches and does not share the problems of the VI approach. As shown in Tab. 2 and Tab. 9, App. 
7.6, our proposed PB likelihood approximation is\nmuch cheaper to evaluate compared to both sampling-based approaches which require multiple forward passes per prediction. We further observe that BA tends to require smaller encoder and decoder networks as it is more efficient at propagating context information to the latent state as discussed in Sec. 4.1. The hybrid ANP approach is competitive only on the Matern-5/2 function class. Yet, we refer the reader to Tab. 10, App. 7.6, demonstrating that the attention mechanism greatly improves performance in terms of MSE.\nQuadratic Functions. We further seek to study the performance of BA with very limited amounts of training data. To this end, we consider two quadratic function classes, each parametrized by three real parameters from which we generate limited numbers L of training tasks. The first function class is defined on a one-dimensional domain, i.e., x ∈ R, and we choose L = 64, while the second function class, as proposed by Perrone et al. (2018), is defined on x ∈ R3 with L = 128, cf. App. 7.5.1. As shown in Tab. 3, BA again consistently outperforms MA, often by considerably large margins, underlining the efficiency of our Bayesian approach to aggregation in the regime of little training data. On the 1D task, all likelihood approximations perform approximately on-par in combination with BA, while MC outperforms both on the more complex 3D task. Fig. 3 compares prediction qualities.\nDynamics of a Furuta Pendulum. We study BA on a realistic dataset given by the simulated dynamics of a rotary inverted pendulum, better known as the Furuta pendulum (Furuta et al., 1992), which is a highly non-linear dynamical system, consisting of an actuated arm rotating in the horizontal plane with an attached pendulum rotating freely in the vertical plane, parametrized by two masses, three lengths, and two damping constants. The regression task is defined as the one-step-ahead prediction of the four-dimensional system state with a step-size of ∆t = 0.1 s, as detailed in App. 7.5.1. The results (Tab. 4) show that BA improves predictive performance also on complex, non-synthetic regression tasks with higher-dimensional input- and output spaces. Further, they are consistent with our previous findings regarding the likelihood approximations, with MC being strongest in terms of predictive likelihood, followed by our efficient deterministic alternative PB.\n2D Image Completion. We consider a 2D image completion experiment where the inputs x are pixel locations in images showing handwritten digits, and we regress onto the corresponding pixel intensities y, cf. App. 7.6. Interestingly, we found that architectures without deterministic paths were not able to solve this task reliably which is why we only report results for deterministic models.\nAs shown in Tab. 5, BA improves performance in comparison to MA by a large margin. This highlights that BA’s ability to quantify the information content of a context tuple is particularly beneficial on this task, as, e.g., pixels in the middle area of the images typically convey more information about the identity of the digit than pixels located near the borders.\nSelf-attentive Encoders. Another interesting baseline for BA is MA, combined with a self-attention (SA) mechanism in the encoder. Indeed, similar to BA, SA yields non-uniform weights for the latent observations rn, where a given weight is computed from some form of pairwise spatial relationship with all other latent observations in the context set (cf. App. 
7.3 for a detailed discussion). As BA’s weight for rn only depends on (xn, yn) itself, BA is computationally more efficient: SA scales like O(N2) in the number N of context tuples while BA scales like O(N), and, furthermore, SA does not allow for efficient incremental updates while this is possible for BA, cf. Eq. (9). Tab. 6 shows a comparison of BA with MA in combination with various different SA mechanisms in the encoder. We emphasize that we compare against BA in its vanilla form, i.e., BA does not use SA in the encoder. The results show that Laplace SA and dot-product SA do not improve predictive performance compared to vanilla MA, while multihead SA yields significantly better results. Nevertheless, vanilla BA still performs better or at least on-par and is computationally more efficient. While being out of the scope of this work, according to these results, a combination of BA with SA seems promising if computational disadvantages can be accepted in favour of increased predictive performance, cf. App. 7.3." }, { "heading": "6 CONCLUSION AND OUTLOOK", "text": "We proposed a novel Bayesian Aggregation (BA) method for NP-based models, combining context aggregation and hidden parameter inference in one holistic mechanism which enables efficient handling of task ambiguity. BA is conceptually simple, compatible with existing NP-based model architectures, and consistently improves performance compared to traditional mean aggregation. It introduces only marginal computational overhead, simplifies the architectures in comparison to existing CLV models (no r̄-to-z-networks), and tends to require less complex encoder and decoder network architectures. Our experiments further demonstrate that the VI likelihood approximation traditionally used to train NP-based models should be abandoned in favor of a MC-based approach, and that our proposed PB likelihood approximation represents an efficient deterministic alternative with strong predictive performance. We believe that a range of existing models, e.g., the ANP or NPs with self-attentive encoders, can benefit from BA, especially when a reliable quantification of uncertainty is crucial. Also, more complex Bayesian aggregation models are conceivable, opening interesting avenues for future research." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Philipp Becker, Stefan Falkner, and the anonymous reviewers for valuable remarks and discussions which greatly improved this paper." }, { "heading": "7 APPENDIX", "text": "We present the derivation of the Bayesian aggregation update equations (Eqs. (8), (9)) in more detail. To foster reproducibility, we describe all experimental settings as well as the hyperparameter optimization procedure used to obtain the results reported in Sec. 5, and publish the source code online.5 We further provide additional experimental results and visualizations of the predictions of the compared architectures." }, { "heading": "7.1 DERIVATION OF THE BAYESIAN AGGREGATION UPDATE EQUATIONS", "text": "We derive the full Bayesian aggregation update equations without making any factorization assumptions. We start from a Gaussian observation model of the form\np (rn| z) ≡ N (rn| z,Σrn) , rn = encr,φ (xcn, ycn) , Σrn = encΣr,φ (xcn, ycn) , (12) where rn and Σrn are learned by the encoder network. 
If we impose a Gaussian prior in the latent space, i.e., p (z) ≡ N (z|µz,0,Σz,0) , (13) we arrive at a Gaussian aggregation model which allows to derive the parameters of the posterior distribution, i.e., of qφ (z| Dc) = N (z|µz,Σz) (14) in closed form using standard Gaussian conditioning (Bishop, 2006):\nΣz = [ (Σz,0) −1 +\nN∑ n=1 (Σrn) −1\n]−1 , (15a)\nµz = µz,0 + Σz N∑ n=1 (Σrn) −1 (rn − µz,0) . (15b)\nAs the latent space z is shaped by the encoder network, it will find a space where the following factorization assumptions work well (given dz is large enough):\nΣrn = diag ( σ2rn ) , σ2rn = encσ2r ,φ (x c n, y c n) , Σz,0 = diag ( σ2z,0 ) . (16)\nThis yields a factorized posterior, i.e., qφ (z| Dc) = N ( z|µz, diag ( σ2z )) , (17)\nwith\nσ2z = [( σ2z,0 ) + N∑ n=1 ( σ2rn ) ] , (18a)\nµz = µz,0 + σ 2 z N∑ n=1 (rn − µz,0) ( σ2rn ) . (18b)\nHere , and denote element-wise inversion, product, and division, respectively. This is the result Eq. (8) from the main part of this paper." }, { "heading": "7.2 DISCUSSION OF VI LIKELIHOOD APPROXIMATION", "text": "To highlight the limitations of the VI approximation, we note that decoder networks of models employing the PB or the MC likelihood approximation are provided with the same context information at training and test time: the latent variable (which is passed on to the decoder in the form of latent samples z (for MC) or in the form of parameters µz , σ2z describing the latent distribution (for PB)) is in both cases conditioned only on the context set Dc. In contrast, in the variational approximation Eq. (2), the expectation is w.r.t. qφ, conditioned on the union of the context set Dc and the target set Dt. As Dt is not available at test time, this introduces a mismatch between how the model is trained\n5https://github.com/boschresearch/bayesian-context-aggregation\nand how it is used at test time. Indeed, the decoder is trained on samples from qφ (z| Dc ∪ Dt) but evaluated on samples from qφ (z| Dc). This is not a serious problem when the model is evaluated on context sets with sizes large enough to allow accurate approximations of the true latent posterior distribution. Small context sets, however, usually contain too little information to infer z reliably. Consequently, the distributions qφ (z| Dc) and qφ (z| Dc ∪ Dt) typically differ significantly in this regime. Hence, incentivizing the decoder to yield meaningful predictions on small context sets requires intricate and potentially expensive additional sampling procedures to choose suitable target sets Dt during training. As a corner case, we point out that it is not possible to train the decoder on samples from the latent prior, because the right hand side of Eq. (2) vanishes for Dc = Dt = ∅." }, { "heading": "7.3 SELF-ATTENTIVE ENCODER ARCHITECTURES", "text": "Kim et al. (2019) propose to use attention-mechanisms to improve the quality of NP-based regression. In general, given a set of key-value pairs {(xn, yn)}Nn=1, xn ∈ Rdx , yn ∈ Rdy , and a query x∗ ∈ Rdx , an attention mechanism A produces a weighted sum of the values, with the weights being computed from the keys and the query:\nA ( {(xn, yn)}Nn=1 , x∗ ) = N∑ n=1 w (xn, x ∗) yn. (19)\nThere are several types of attention mechanisms proposed in the literature (Vaswani et al., 2017), each defining a specific form of the weights. Laplace attention adjusts the weights according to the spatial distance of keys and query:\nwL (xn, x ∗) ∝ exp (−||xn − x∗||1) . 
(20)\nSimilarly, dot-product attention computes\nwDP (xn, x ∗) ∝ exp ( xTnx ∗/ √ dx ) . (21)\nA more complex mechanism is multihead attention, which employs a set of 3H learned linear mappings { LKh }H h=1 , { LVh }H h=1 , { LQh }H h=1\n, where H is a hyperparameter. For each h, these mappings are applied to keys, values, and queries, respectively. Subsequently, dot-product attention is applied to the set of transformed key-value pairs and the transformed query. The resulting H values are then again combined by a further learned linear mapping LO to obtain the final result. Self-attention (SA) is defined by setting the set of queries equal to the set of keys. Therefore, SA produces again a set of N weighted values. Combining SA with an NP-encoder, i.e., applying SA to the set {fx(xn) , rn}Nn=1 of inputs xn and corresponding latent observations rn (where we also consider a possible nonlinear transformation fx of the inputs) and subsequently applying MA yields an interesting baseline for our proposed BA. Indeed, similar to BA, SA computes a weighted sum of the latent observations rn. Note, however, that SA weighs each latent observation according to some form of spatial relationship of the corresponding input with all other latent observations in the context set. In contrast, BA’s weight for a given latent observation is based only on features computed from the context tuple corresponding to this very latent observation and allows to incorporate an estimation of the amount of information contained in the context tuple into the aggregation (cf. Sec. 4.1). This leads to several computational advantages of BA over SA: (i) SA scales quadratically in the number N of context tuples, as it has to be evaluated on all N2 pairs of context tuples. In contrast, BA scales linearly with N . (ii) BA allows for efficient incremental updates when context data arrives sequentially (cf. Eq. (9)), while using SA does not provide this possibility: it requires to store and encode the whole context set Dc at once and to subsequently aggregate the whole set of resulting (SA-weighted) latent observations.\nThe results in Tab. 6, Sec. 5 show that multihead SA leads to significant improvements in predictive performance compared to vanilla MA. Therefore, a combination of BA with self-attentive encoders seems promising in situations where computational disadvantages can be accepted in favour of increased predictive performance. Note that BA relies on a second encoder output σ2rn (in addition to the latent observation rn) which assesses the information content in each context tuple (xn, yn). As each SA-weighted rn is informed by the other latent observations in the context set, obviously, one would have to also process the set of σ2rn in a manner consistent with the SA-weighting. We leave such a combination of SA and BA for future research." }, { "heading": "7.4 NEURAL PROCESS-BASED MODELS IN THE CONTEXT OF SCALABLE PROBABILISTIC REGRESSION", "text": "We discuss in more detail how NP-based models relate to other existing methods for (scalable) probabilistic regression, such as (multi-task) GPs (Rasmussen and Williams, 2005; Bardenet et al., 2013; Yogatama and Mann, 2014; Golovin et al., 2017), Bayesian neural networks (BNNs) (MacKay, 1992; Gal and Ghahramani, 2016), and DeepGPs (Damianou and Lawrence, 2013).\nNPs are motivated in Garnelo et al. (2018a;b), Kim et al. (2019), as well as in our Sec. 
1, as models which combine the computational efficiency of neural networks with well-calibrated uncertainty estimates (like those of GPs). Indeed, NPs scale linearly in the number N of context and M of target data points, i.e., like O(N +M), while GPs scale like O(N3 +M2). Furthermore, NPs are shown to exhibit well-calibrated uncertainty estimates. In this sense, NPs can be counted as members of the family of scalable probabilistic regression methods.\nA central aspect of NP training which distinguishes NPs from a range of standard methods is that they are trained in a multi-task fashion (cf. Sec. 3). This means that NPs rely on data from a set of related source tasks from which they automatically learn powerful priors and the ability to adapt quickly to unseen target tasks. This multi-task training procedure of NPs scales linearly in the number L of source tasks, which makes it possible to train these architectures on large amounts of source data. Applying GPs in such a multi-task setting can be challenging, especially for large numbers of source tasks. Similarly, BNNs as well as DeepGPs are in their vanilla forms specifically designed for the single-task setting. Therefore, GPs, BNNs, and DeepGPs are not directly applicable in the NP multi-task setting, which is why they are typically not considered as baselines for NP-based models, as discussed in (Kim et al., 2019).\nThe experiments presented in Garnelo et al. (2018a;b) and Kim et al. (2019) focus mainly on evaluating NPs in the context of few-shot probabilistic regression, i.e., on demonstrating the dataefficiency of NPs on the target task after training on data from a range of source tasks. In contrast, the application of NPs in situations with large (> 1000) numbers of context/target points per task has to the best of our knowledge not yet been investigated in detail in the literature. Furthermore, it has not been studied how to apply NPs in situations where only a single or very few source tasks are available. The focus of our paper is a clear-cut comparison of the performance of our BA with traditional MA in the context of NP-based models. Therefore, we also consider experiments similar to those presented in (Garnelo et al., 2018a;b; Kim et al., 2019) and leave further comparisons with existing methods for (multi-task) probabilistic regressions for future work.\nNevertheless, to illustrate this discussion, we provide two simple GP-based baseline methods: (i) a vanilla GP, which optimizes the hyperparameters on each target task individually and does not use\nthe source data, and (ii) a naive but easily interpretable example of a multi-task GP, which optimizes one set of hyperparameters on all source tasks and uses it for predictions on the target tasks without further adaptation. The results in Tab. 7 show that those GP-based models can only compete with NPs on function classes where either the inductive bias as given by the kernel functions fits the data well (RBF GP), or on function classes which exhibit a relatively low degree of variablity (Quadratic 1D). On more complex function classes, NPs produce predictions of much better quality, as they incorporate the source data more efficiently." }, { "heading": "7.5 EXPERIMENTAL DETAILS", "text": "We provide details about the data sets as well as about the experimental setup used in our experiments in Sec. 5." }, { "heading": "7.5.1 DATA GENERATION", "text": "In our experiments, we use several classes of functions to evaluate the architectures under consideration. 
To generate training data from these function classes, we sample L random tasks (as described in Sec. 5), and Ntot random input locations x for each task. For each minibatch of training tasks, we uniformly sample a context set size N ∈ {nmin, . . . , nmax} and use a random subset of N data points from each task as context sets Dc. The remaining M = Ntot −N data points are used as the target sets Dt (cf. App. 7.5.3 for the special case of the VI likelihood approximation). Tab. 8 provides details about the data generation process.\nGP Samples. We sample one-dimensional functions f : R→ R from GP priors with three different stationary kernel functions as proposed by Gordon et al. (2020).\nA radial basis functions (RBF) kernel with lenghtscale l = 1.0: kRBF (r) ≡ exp ( −0.5r2 ) . (22)\nA weakly periodic kernel: kWP (r) ≡ exp ( −2 sin (0.5r)2 − 0.125r2 ) . (23)\nA Matern-5/2 kernel with lengthscale l = 0.25: kM5/2 (r) ≡ ( 1 + √ 5r\n0.25 +\n5r2\n3 · 0.252\n) exp ( − √ 5r\n0.25\n) . (24)\nQuadratic Functions. We consider two classes of quadratic functions. The first class fQ,1D : R→ R is defined on a one-dimensional domain and parametrized by three parameters a, b, c ∈ R:\nfQ,1D (x) ≡ a2 (x+ b)2 + c. (25)\nThe second class fQ,3D : R3 → R is defined on a three-dimensional domain and also parametrized by three parameters a, b, c ∈ R:\nfQ,3D (x1, x2, x3) ≡ 0.5a ( x21 + x 2 2 + x 2 3 ) + b (x1 + x2 + x3) + 3c. (26)\nThis function class was proposed in Perrone et al. (2018).\nFor both function classes we add Gaussian noise with standard deviation σn to the evaluations, cf. Tab. 8.\nFuruta Pendulum Dynamics. We consider a function class obtained by integrating the non-linear equations of motion governing the dynamics of a Furuta pendulum (Furuta et al., 1992; Cazzolato and Prime, 2011) for a time span of ∆t = 0.1 s. More concretely, we consider the mapping\nΘ (t)→ Θ (t+ ∆t)−Θ (t) , (27)\nwhere Θ = [ θarm (t) , θpend (t) , θ̇arm (t) , θ̇pend (t) ]T denotes the four-dimensional vector describ-\ning the dynamical state of the Furuta pendulum. The Furuta pendulum is parametrized by seven parameters (two masses, three lengths, two damping constants) as detailed in Tab. 8. During training, we provide L = 64 tasks, corresponding to 64 different parameter configurations. We consider the free system and generate noise by applying random torques at each integration time step (∆tEuler = 0.001 s) to the joints of the arm and pendulum drawn from Gaussian distributions with standard deviations στ,pend, στ,arm, respectively.\n2D Image Completion. For this task, we use the MNIST database of 28 × 28 images of handwritten digits (LeCun and Cortes, 2010), and define 2D functions mapping pixel locations x1, x2 ∈ {0, . . . 27} (scaled to the unit square) to the corresponding pixel intensities y ∈ {0, . . . , 255} (scaled to the unit interval), cf. Tab. 8. One training task corresponds to one image drawn randomly from the training set (consisting of 60000 images) and for evaluation we use a subset of the test set (consisting of 10000 images)." }, { "heading": "7.5.2 MODEL ARCHITECTURES", "text": "We provide the detailed architectures used for the experiments in Sec. 5 in Fig. 4. For ANP we use multihead cross attention and refer the reader to Kim et al. (2019) for details about the architecture." 
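As a concrete illustration of the data-generation protocol of App. 7.5.1, the sketch below draws one GP-sample task using the kernels of Eqs. (22)-(24) and splits it into context and target sets with a uniformly sampled context size. The input range, jitter level, and set sizes are placeholder assumptions; the values actually used are those listed in Tab. 8 and are not reproduced here.

```python
import numpy as np


def k_rbf(r):
    return np.exp(-0.5 * r ** 2)                                   # Eq. (22)


def k_weakly_periodic(r):
    return np.exp(-2.0 * np.sin(0.5 * r) ** 2 - 0.125 * r ** 2)    # Eq. (23)


def k_matern52(r, l=0.25):
    a = np.sqrt(5.0) * r / l
    return (1.0 + a + 5.0 * r ** 2 / (3.0 * l ** 2)) * np.exp(-a)  # Eq. (24)


def sample_gp_task(kernel, n_total, x_range=(-2.0, 2.0), jitter=1e-4, rng=None):
    """Draw one task: function values at n_total random inputs from a
    zero-mean GP prior with the given stationary kernel."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.uniform(x_range[0], x_range[1], size=(n_total, 1))
    dist = np.abs(x - x.T)                           # pairwise distances
    K = kernel(dist) + jitter * np.eye(n_total)      # jitter for stability
    y = rng.multivariate_normal(np.zeros(n_total), K)[:, None]
    return x, y


def split_context_target(x, y, n_min, n_max, rng=None):
    """Uniformly sample a context size N in {n_min, ..., n_max} and split the
    task into a context set and a target set of the remaining points."""
    if rng is None:
        rng = np.random.default_rng()
    n_ctx = int(rng.integers(n_min, n_max + 1))
    idx = rng.permutation(len(x))
    ctx, tgt = idx[:n_ctx], idx[n_ctx:]
    return (x[ctx], y[ctx]), (x[tgt], y[tgt])


rng = np.random.default_rng(0)
x, y = sample_gp_task(k_matern52, n_total=64, rng=rng)
(xc, yc), (xt, yt) = split_context_target(x, y, n_min=3, n_max=32, rng=rng)
```

Tasks for the RBF and weakly periodic classes are obtained by passing the corresponding kernel function in place of k_matern52.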
}, { "heading": "7.5.3 HYPERPARAMETERS AND HYPERPARAMETER OPTIMIZATION", "text": "To arrive at a fair comparison of our BA with MA, it is imperative to use optimal model architectures for each aggregation method and likelihood approximation under consideration. Therefore, we optimize the number of hidden layers and the number of hidden units per layer of each encoder and decoder MLP (as shown in Fig. 4), individually for each model architecture and each experiment. For the ANP, we also optimize the multihead attention MLPs. We further optimize the latent space dimensionality dz and the learning rate of the Adam optimizer. For this hyperparameter optimization, we use the Optuna framework (Akiba et al., 2019) with TPE Sampler and Hyperband pruner (Li et al., 2017). We consistently use a minibatch size of 16. Further, we use S = 10 latent samples to evaluate\nthe MC likelihood approximation during training. To evaluate the VI likelihood approximation, we sample target set sizes between Ntot and N in each training epoch, cf. Tab. 8." }, { "heading": "7.5.4 EVALUATION PROCEDURE", "text": "To evaluate the performance of the various model architectures we generate L = 256 unseen test tasks with target sets Dt` consisting of M = 256 data points each and compute the average posterior predictive log-likelihood 1L 1 M ∑L `=1 log p ( yt`,1:M ∣∣∣xt`,1:M ,Dc` , θ), given context sets Dc` of size N . Depending on the architecture, we approximate the posterior predictive log-likelihood according to:\n• For BA + PB likelihood approximation:\n1\nL\n1\nM L∑ `=1 M∑ m=1 log p ( yt`,m ∣∣xt`,m, µz,`, σ2z,`, θ) . (28) • For MA + deterministic loss (= CNP):\n1\nL\n1\nM L∑ `=1 M∑ m=1 log p ( yt`,m ∣∣xt`,m, r̄`, θ) . (29) • For architectures employing sampling-based likelihood approximations (VI, MC-LL) we\nreport the joint log-likelihood over all data points in a test set, i.e.\n1\nL\n1\nM L∑ `=1 log ∫ qφ (z`| Dc`) M∏ m=1 p ( yt`,m ∣∣xt`,m, z`, θ) dz` (30) ≈ 1 L 1 M L∑ `=1 log 1 S S∑ s=1 M∏ m=1 p ( yt`,m\n∣∣xt`,m, z`,s, θ) (31) = − 1\nM logS +\n1\nL\n1\nM L∑ l=1\nS\nlogsumexp s=1 ( M∑ m=1 log p ( yt`,m ∣∣xt`,m, z`,s, θ) ) , (32)\nwhere z`,s ∼ qφ (z| D`). We employ S = 25 latent samples.\nTo compute the log-likelihood values given in tables, we additionally average over various context set sizes N as detailed in the main part of this paper.\nWe report the mean posterior predictive log-likelihood computed in this way w.r.t. 10 training runs with different random seeds together with 95% confidence intervals" }, { "heading": "7.6 ADDITIONAL EXPERIMENTAL RESULTS", "text": "We provide additional experimental results accompanying the experiments presented in Sec. 5:\n• Results for relative evaluation runtimes and numbers of parameters of the optimized network architectures on the full GP suite of experiments, cf. Tab. 9.\n• The posterior predictive mean squared error on all experiments, cf. Tab. 10. • The context-size dependent results for the predictive posterior log-likelihood for the 1D\nand 3D Quadratic experiments, the Furuta dynamics experiment, as well as the 2D image completion experiment, cf. Fig. 5.\n• More detailed plots of the predictions on one-dimensional experiments (1D Quadratics (Figs. 6, 7), RBF-GP, (Figs. 8, 9), Weakly Periodic GP (Figs. 10, 11), and Matern-5/2 GP (Figs. 12, 13))." } ]
2021
BAYESIAN CONTEXT AGGREGATION FOR NEURAL PROCESSES
SP:368ac9d4b7934e68651c1b54286d9332caf16473
[ "Till page 3 the paper was easy to follow, i.e., the analytical expressions in eq(5), and the basic idea of Algorithm 1 (which is same as prior works by Han et al. , Wang et al., Periera et al.) are clear. However, after page 3 the paper is hard to follow. The specific points are as follows:" ]
In this paper we present a deep learning framework for solving large-scale multi-agent non-cooperative stochastic games using fictitious play. The Hamilton-Jacobi-Bellman (HJB) PDE associated with each agent is reformulated into a set of Forward-Backward Stochastic Differential Equations (FBSDEs) and solved via forward sampling on a suitably defined neural network architecture. Decision making in multi-agent systems suffers from the curse of dimensionality and from strategy degeneration as the number of agents and the time horizon increase. We propose a novel Deep FBSDE controller framework which is shown to outperform the current state-of-the-art deep fictitious play algorithm on a high-dimensional interbank lending/borrowing problem. More importantly, our approach mitigates the curse of many agents and reduces computational and memory complexity, allowing us to scale up to 1,000 agents in simulation, a scale which, to the best of our knowledge, represents a new state of the art. Finally, we showcase the framework’s applicability in robotics on a belief-space autonomous racing problem.
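The abstract states that each agent's HJB PDE is recast as a system of forward-backward SDEs and solved by forward sampling on a neural network. The sketch below is only a rough, self-contained illustration of that forward-sampling step for a single agent: dynamics, costs, the initial value V(0), and the value-gradient map (which a network would normally represent) are all hand-coded placeholders, and neither the multi-agent fictitious-play coupling nor the authors' LSTM architecture is reproduced. It shows how the state and value processes are propagated with a shared Brownian increment and how the terminal mismatch, the quantity a deep FBSDE solver trains to minimize, is formed.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nw = 2, 2                    # state and noise dimensions
T, K = 1.0, 50                   # horizon and number of Euler-Maruyama steps
dt = T / K

# Placeholder problem data (illustrative only, not the games studied in the paper).
f = lambda x, t: -0.1 * x                                  # drift
G = np.eye(nx)                                             # actuation matrix
Sigma = 0.2 * np.eye(nw)                                   # diffusion matrix
g = lambda x: 0.5 * np.sum(x ** 2)                         # terminal cost
running_cost = lambda x, u, t: 0.5 * np.sum(u ** 2) + 0.5 * np.sum(x ** 2)

# Stand-ins for the learned quantities: the initial value V(0) and the
# value-gradient map V_x(x, t) that a neural network would normally represent.
V0 = 1.0
value_grad = lambda x, t: x

x = rng.normal(size=nx)          # X(0)
V = V0
for k in range(K):
    t = k * dt
    Vx = value_grad(x, t)
    u = -Vx                      # placeholder feedback control built from V_x
    dW = np.sqrt(dt) * rng.normal(size=nw)
    # Backward (value) process and forward (state) process share the same
    # Brownian increment, as in an FBSDE forward-sampling scheme.
    V = V - running_cost(x, u, t) * dt + Vx @ Sigma @ dW
    x = x + (f(x, t) + G @ u) * dt + Sigma @ dW

# Terminal mismatch between the propagated value process and the terminal
# cost; a deep FBSDE solver would minimize this quantity over its parameters.
terminal_gap = (V - g(x)) ** 2
```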
[]
[ { "authors": [ "George W Brown" ], "title": "Iterative solution of games by fictitious play", "venue": "Activity analysis of production and allocation,", "year": 1951 }, { "authors": [ "Rene Carmona", "Jean-Pierre Fouque", "Li-Hsien Sun" ], "title": "Mean field games and systemic risk", "venue": "Available at SSRN 2307814,", "year": 2013 }, { "authors": [ "T. Duncan", "B. Pasik-Duncan" ], "title": "Some stochastic differential games with state dependent noise", "venue": "54th IEEE Conference on Decision and Control,", "year": 2015 }, { "authors": [ "Ioannis Exarchos", "Evangelos Theodorou", "Panagiotis Tsiotras" ], "title": "Stochastic differential games: A sampling approach via FBSDEs", "venue": "Dynamic Games and Applications,", "year": 2019 }, { "authors": [ "Jiequn Han", "Ruimeng Hu" ], "title": "Deep fictitious play for finding markovian nash equilibrium in multiagent games", "venue": "arXiv preprint arXiv:1912.01809,", "year": 2019 }, { "authors": [ "Jiequn Han", "Arnulf Jentzen", "E Weinan" ], "title": "Solving high-dimensional partial differential equations using deep learning", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Jiequn Han", "Ruimeng Hu", "Jihao Long" ], "title": "Convergence of deep fictitious play for stochastic differential games", "venue": "arXiv preprint arXiv:2008.05519,", "year": 2020 }, { "authors": [ "Ruimeng Hu" ], "title": "Deep fictitious play for stochastic differential games", "venue": "arXiv preprint arXiv:1903.09376,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "R. Isaacs" ], "title": "Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization", "venue": null, "year": 1965 }, { "authors": [ "AH Jazwinski" ], "title": "Stochastic process and filtering theory, academic press", "venue": "A subsidiary of Harcourt Brace Jovanovich Publishers,", "year": 1970 }, { "authors": [ "H. Kushner" ], "title": "Numerical approximations for stochastic differential games", "venue": "SIAM J. Control Optim.,", "year": 2002 }, { "authors": [ "H. Kushner", "S. 
Chamberlain" ], "title": "On stochastic differential games: Sufficient conditions that a given strategy be a saddle point, and numerical procedures for the solution of the game", "venue": "Journal of Mathematical Analysis and Applications,", "year": 1969 }, { "authors": [ "Anuj Mahajan", "Tabish Rashid", "Mikayel Samvelyan", "Shimon Whiteson" ], "title": "Maven: Multi-agent variational exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sure Mataramvura", "Bernt Øksendal" ], "title": "Risk minimizing portfolios and hjbi equations for stochastic differential games", "venue": "Stochastics An International Journal of Probability and Stochastic Processes,", "year": 2008 }, { "authors": [ "Laetitia Matignon", "Guillaume J Laurent", "Nadine Le Fort-Piat" ], "title": "Independent reinforcement learners in cooperative markov games: a survey regarding coordination", "venue": null, "year": 2012 }, { "authors": [ "Marcus A Pereira", "Ziyi Wang", "Ioannis Exarchos", "Evangelos A Theodorou" ], "title": "Learning deep stochastic optimal control policies using forward-backward sdes", "venue": "In Robotics: science and systems,", "year": 2019 }, { "authors": [ "Ashutosh Prasad", "Suresh P Sethi" ], "title": "Competitive advertising under uncertainty: A stochastic differential game approach", "venue": "Journal of Optimization Theory and Applications,", "year": 2004 }, { "authors": [ "Maziar Raissi" ], "title": "Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations", "venue": "arXiv preprint arXiv:1804.07010,", "year": 2018 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder De Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Dan Simon" ], "title": "Optimal state estimation: Kalman, H infinity, and nonlinear approaches", "venue": null, "year": 2006 }, { "authors": [ "Kyunghwan Son", "Daewoo Kim", "Wan Ju Kang", "David Earl Hostallero", "Yung Yi" ], "title": "Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning", "venue": null, "year": 1905 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinı́cius Flores Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Valuedecomposition networks for cooperative multi-agent learning based on team reward", "venue": "In AAMAS,", "year": 2018 }, { "authors": [ "Ming Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. 
cooperative agents", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Ziyi Wang", "Keuntaek Lee", "Marcus A Pereira", "Ioannis Exarchos", "Evangelos A Theodorou" ], "title": "Deep forward-backward SDEs for min-max control", "venue": "IEEE 58th Conference on Decision and Control (CDC),", "year": 2019 }, { "authors": [ "Ziyi Wang", "Marcus A Pereira", "Evangelos A Theodorou" ], "title": "Deep 2fbsdes for systems with control multiplicative noise", "venue": "arXiv preprint arXiv:1906.04762,", "year": 2019 }, { "authors": [ "Ming Zhou", "Yong Chen", "Ying Wen", "Yaodong Yang", "Yufeng Su", "Weinan Zhang", "Dell Zhang", "Jun Wang" ], "title": "Factorized q-learning for large-scale multi-agent systems", "venue": "In Proceedings of the First International Conference on Distributed Artificial Intelligence,", "year": 2019 }, { "authors": [ "Q̃ = LQL" ], "title": "MRM (16) one can write the posterior mean state x̂ and prior covariance matrix P− estimation update rule by Simon", "venue": null, "year": 2006 }, { "authors": [ "The analytic solution for linear inter-bank problem was derived in Carmona" ], "title": "We provide them here for completeness", "venue": "Assume the ansatz for HJB function is described as: Vi(t,X) =", "year": 2013 }, { "authors": [ "Han" ], "title": "The Total loss and RSE of 1000 agents simulation with 1E-3 learning rate for all simulations. In section 4.1, For the prediction of initial value function, all of frameworks are using 2 layers feed forward network with 128 hidden dimension. For the baseline framework, we followed the suggested configuration motioned", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Stochastic differential games represent a framework for investigating scenarios where multiple players make decisions while operating in a dynamic and stochastic environment. The theory of differential games dates back to the seminal work of Isaacs (1965) studying two-player zero-sum dynamic games, with a first stochastic extension appearing in Kushner & Chamberlain (1969). A key step in the study of games is obtaining the Nash equilibrium among players (Osborne & Rubinstein, 1994). A Nash equilibrium represents the solution of non-cooperative game where two or more players are involved. Each player cannot gain benefit by modifying his/her own strategy given opponents equilibrium strategy. In the context of adversarial multi-objective games, the Nash equilibrium can be represented as a system of coupled Hamilton-Jacobi-Bellman (HJB) equations when the system satisfies the Markovian property. Analytic solutions exist only for few special cases. Therefore, obtaining the Nash equilibrium solution is usually done numerically, and this can become challenging as the number of states/agents increases. Despite extensive theoretical work, the algorithmic part has received less attention and mainly addresses special cases of differential games (e.g., Duncan & Pasik-Duncan (2015)), or suffers from the curse of dimensionality (Kushner, 2002). Nevertheless, stochastic differential games have a variety of applications including in robotics and autonomy, economics and management. Relevant examples include Mataramvura & Øksendal (2008), which formulate portfolio management as a stochastic differential game in order to obtain a market portfolio that minimizes the convex risk measure of a terminal wealth index value, as well as Prasad & Sethi (2004), who investigate optimal advertising spending in duopolistic settings via stochastic differential games.\nReinforcement Learning (RL) aims in obtaining a policy which can generate optimal sequential decisions while interacting with the environment. Commonly, the policy is trained by collecting histories of states, actions, and rewards, and updating the policy accordingly. Multi-agent Reinforcement Learning (MARL) is an extension of RL where several agents compete in a common environment, which is a more complex task due to the interaction between several agents and the environment, as well as between the agents. One approach is to assume agents to be part of environment (Tan, 1993), but this may lead to unstable learning during policy updates (Matignon et al., 2012). On the other hand, a centralized approach considers MARL through an augmented state and action system, reducing its training to that of single agent RL problem. Because of the combinatorial complexity,\nthe centralized learning method cannot scale to more than 10 agents (Yang et al., 2019). Another method is centralized training and decentralized execute (CTDE), however the challenge therein lies on how to decompose value function in the execute phase for value-based MARL. Sunehag et al. (2018) and Zhou et al. (2019) decompose the joint value function into a summation of individual value functions. Rashid et al. (2018) keep the monotonic trends between centralized and decentralized value functions by augmenting the summation non-linearly and designing a mixing network (QMIX). Further modifications on QMIX include Son et al. (2019); Mahajan et al. (2019).\nThe mathematical formulation of a differential game leads to a nonlinear PDE. 
This motivates algorithmic development for differential games that combine elements of PDE theory with deep learning. Recent encouraging results (Han et al., 2018; Raissi, 2018) in solving nonlinear PDEs within the deep learning community illustrate the scalability and numerical efficiency of neural networks. The transition from a PDE formulation to a trainable neural network is done via the concept of a system of Forward-Backward Stochastic Differential Equations (FBSDEs). Specifically, certain PDE solutions are linked to solutions of FBSDEs, and the latter can be solved using a suitably defined neural network architecture. This is known in the literature as the deep FBSDE approach. Han et al. (2018); Pereira et al. (2019); Wang et al. (2019b) utilize various deep neural network architectures to solve such stochastic systems. However, these algorithms address single agent dynamical systems. Two-player zero-sum games using FBSDEs were initially developed in Exarchos et al. (2019) and transferred to a deep learning setting in Wang et al. (2019a). Recently,Hu (2019) brought deep learning into fictitious play to solve multi-agent non-zero-sum game, Han & Hu (2019) introduced the deep FBSDEs to a multi-agent scenario and the concept of fictitious play, furthermore, Han et al. (2020) gives the convergence proof.\nIn this work we propose an alternative deep FBSDE approach to multi-agent non-cooperative differential games, aiming on reducing complexity and increasing the number of agents the framework can handle. The main contribution of our work is threefold:\n1. We introduce an efficient Deep FBSDE framework for solving stochastic multi-agent games via fictitious play that outperforms the current state of the art in Relative Square Error (RSE) and runtime/memory efficiency on an inter-bank lending/borrowing example.\n2. We demonstrate that our approach scales to a much larger number of agents (up to 1,000 agents, compared to 50 in existing work). To the best of our knowledge, this represents a new state of the art.\n3. We showcase the applicability of our framework to robotics on a belief-space autonomous racing problem which has larger individual control and state space. The experiments demonstrates that the decoupled BSDE provides the possibility of applications for competitive scenario.\nThe rest of the paper is organized as follows: in Section 2 we present the mathematical preliminaries. In Section 3 we introduce the Deep Fictitious Play Belief FBSDE, with simulation results following in Section 4. We conclude the paper and discuss some future directions in Section 5." }, { "heading": "2 MULTI-AGENT FICTITIOUS PLAY FBSDE", "text": "Fictitious play is a learning rule first introduced in Brown (1951) where each player presumes other players’ strategies to be fixed. AnN -player game can then be decoupled intoN individual decisionmaking problems which can be solved iteratively over M stages. When each agent1 converges to a stationary strategy at stage m, this strategy will become the stationary strategy for other players at stage m+ 1. We consider a N -player non-cooperative stochastic differential game with dynamics\ndX(t) = ( f(X(t), t) +G(X(t), t)U(t) ) dt+ Σ(X(t), t)dW (t), X(0) = X0, (1)\nwhere X = (x1,x2, . . . ,xN ) is a vector containing the state process of all agents generated by their controls U = (u1,u2, . . . ,uN ) with xi ∈ Rnx and ui ∈ Rnu . 
Here, f : Rnx × [0, T ]→ Rnx represents the drift dynamics, G : Rnx × [0, T ] → Rnx×nu represents the actuator dynamics, and Σ : [0, T ]×Rn → Rnx×nw represents the diffusion term. We assume that each agent is only driven by its own controls soG is a block diagonal matrix withGi corresponding to the actuation of agent i.\n1Agent and player are used interchangeably in this paper\nEach agent is also driven by its own nw-dimensional independent Brownian motion Wi, and denote W = (W1,W2, . . . ,WN ). Let Ui be the set of admissible strategies for agent i ∈ I := {1, 2, . . . , N} and U = ⊗Ni=1Ui as the Kronecker product space of Ui. Given the other agents’ strategies, the stochastic optimal control problem for agent i under the fictitious play assumption is defined as minimizing the expectation of the cumulative cost functional J it\nJ it (X,ui,m;u−i,m−1) = E\n[ g(X(T )) + ∫ T t Ci(X(τ),ui,m(X(τ), τ), τ ;u−i,m−1)dτ ] , (2)\nwhere g : Rnx → R+ is the terminal cost, and Ci : [0, T ] × Rnx × U → R+ is the running cost for the i-th player. In this paper we assume that the running cost is of the form C(X,ui,m, t) = q(X) + 1 2u T i,mRui,m + X\nTQui,m. We use the double subscript ui,m to denote the control of agent i at stage m and the negative subscript−i as the strategies excluding player i, u−i = (u1, . . . ,ui−1,ui+1, . . . ,uN ). We can define value function of each player as\nV i(t,X(t)) = inf ui,m∈Ui\n[ J it (X,ui,m;u−i,m−1) ] , V i(T,X(T )) = g(X(T )). (3)\nAssume that the value function in eq. (3) is once differentiable w.r.t. t and twice differentiable w.r.t. x. Then, standard stochastic optimal control theory leads to the HJB PDE\nV i + h+ V iTx (f +GU0,−i) + 1\n2 tr(V ixxΣΣ T) = 0, V i(T,X) = g(X(T )), (4)\nwhere h = Ci∗ + GU∗,0. The double subscript of U∗,0 denotes the augmentation of the optimal control u∗i,m = −R−1(GTi V ix + QTi x) and zero control u−i,m−1 = 0, and U0,−i denotes the augmentation of ui,m = 0 and u−i,m−1. Here we drop the functional dependencies in the HJB equation for simplicity. The detailed proof is in Appendix A. The value function in the HJB PDE can be related to a set of FBSDEs\ndX = (f +GU∗,−i)dt+ ΣdW , X(0) = x0 dV i = −(h+ V iTx GU∗,0)dt+ V Tx ΣdW, V (T ) = g(X(T )),\n(5)\nwhere the backward process corresponds to the value function. The detailed derivation can be found in Appendix B. Note that the FBSDEs here differ from that of Han & Hu (2019) in the optimal control of agent i, GU∗,−i, in the forward process and compensation, V iTx GU∗,0, in the backward process. This is known as the importance sampling for FBSDEs and allows for the FBSDEs to be guided to explore the state space more efficiently." }, { "heading": "3 DEEP FICTITIOUS PLAY FBSDE CONTROLLER", "text": "In this section, we introduce a novel and scalable Deep Fictitious Play FBSDE (SDFP) Controller to solve the multi-agent stochastic optimal control problem. The framework can be extended to the partially observable scenario by combining with an Extended Kalman Filter, whose belief propagation can be described by an SDE for the mean and variance (see derivation in Appendix C). By the natural of decoupled BSDE, the framework can also been extended to cooperative and competitive scenario. In this paper, we demonstrate the example of competitive scenario." }, { "heading": "3.1 NETWORK ARCHITECTURE AND ALGORITHM", "text": "Inspired by the success of LSTM-based deep FBSDE controllers (Wang et al., 2019b; Pereira et al., 2019), we propose an approach based on an LSTM architecture similar to Pereira et al. 
(2019). The benefits of introducing LSTM are two-fold: 1) LSTM can capture the features of sequential data. A performance comparison between LSTM and fully connected (FC) layers in the deep FBSDE framework has been elaborated in Wang et al. (2019b); 2) LSTM significantly reduces the memory complexity of our model since the memory complexity of LSTM with respect to time is O(1) in the inference phase compared with O(T ) in previous work (Han et al., 2018), where T is the number of time steps. The overall architecture of SDFP is shown in Fig. 1 and features the same time discretization scheme as Pereira et al. (2019). Each player’s policy is characterized by its own copy\nof the network defined in Fig. 11. At stage m, each player can access the stationary strategy of all other players from stage m − 1. During training within a stage, the initial value of each player is predicted by a FC layer parameterized by φ. At each timestep, the optimal policy for each player is computed using the value function gradient prediction V ix from the recurrent network (consisting of FC and LSTM layers), parameterized by θ. The FSDE and BSDE are then forward-propagated using the Euler integration scheme. At terminal time T , the loss function for each player is constructed as the mean squared error between the propagated terminal value V iT and the true terminal value V i∗ T computed from the terminal state. The parameters φ and θ of each player can be trained using any stochastic gradient descent type optimizer such as Adam. The detailed training procedure is shown in Algorithm 2." }, { "heading": "3.2 MITIGATING CURSE OF DIMENSIONALITY AND SAMPLE COMPLEXITY", "text": "Scalability and sample efficiency are two crucial criteria of reinforcement learning. In SDFP, as the number of agents increases, the number of neural network copies would increase correspondingly. Meanwhile, the size of each neural network should be enlarged to gain enough capacity to capture the representation of many agents, leading to the infamous curse of dimensionality; this limits the scalability of prior works. However, one can mitigate the curse of dimensionality in this case by taking advantage of the symmetric game setup. We summarize merits of symmetric game as following:\n1. Since all agents have the same dynamics and cost function, only one copy of the network is needed. The strategy of other agents can be inferred by applying the same network.\n2. Thanks to the symmetric property, we can applied invariant layer to extract invariant features to accelerate training and improve the performance with respect to the accumulate cost and RSE loss.\nSharing one network: It’s important to note that querying other agents should not introduce additional gradient paths. This significantly reduces the memory complexity. When querying other agents’ strategy, one can either iterate through each agent or feed all agents’ states to the network in a batch. The latter approach reduces the time complexity by adopting the parallel nature of modern GPU but requires O(N2) memory rather than O(N) for the first approach.\nInvariant Layers: The memory complexity can be further reduced with an invariant layer embedding (Zaheer et al., 2017). The invariant layer utilizes a sum function along with the features in the same set to render the network invariant to permutation of agents. We apply the invariant layer on X−i and concatenate the resulting features to the features extracted from Xi. 
However, vanilla invariant layer embedding will not reduce the memory complexity. Thanks to the symmetric problem setup, one can apply a trick to reduce the invariant layer memory complexity form O(N2) to O(N). A\ndetailed introduction to the invariant layer and our implementation can be found in Appendix D and E. The full algorithm is outlined in Algorithm 1.\nAlgorithm 1 Scalable Deep Fictitious Play FBSDE for symmetric simplification 1: Hyper-parameters:N : Number of players, T : Number of timesteps, M : Number of stages in\nfictitious play, Ngd: Number of gradient descent steps per stage, U0: the initial strategies for players in set I, B: Batch size, : training threshold, ∆t: time discretization 2: Parameters:V (x0;φ): Network weights for initial value prediction, θ: Weights and bias of fully connected layers and LSTM layers. 3: θ: Initialize trainable papermeters:θ0, φ0 4: while LOSS is above certain threshold do 5: for m← 1 to M do 6: for all i ∈ I in parallel do 7: Collect opponent agent’s policy which is same as ith policy: fm−1LSTMi(·), f m−1 FCi\n(·) 8: for l← 1 to Ngd do 9: for t← 1 to T − 1 do" }, { "heading": "4 SIMULATION RESULTS", "text": "In this section, we demonstrate the capability of SFDP on two different systems in simulation. We first apply the framework to an inter-bank lending/borrowing problem, which is a classical multiplayer non-cooperative game with an analytic solution. We compare against both the analytic solution and prior work (Han & Hu, 2019). Different approaches introduced in Section 3.2 are compared empirically on this system. We also apply the framework to a variation of the problem for which no analytic solution exists. Finally, we showcase the general applicability of our framework in an autonomous racing problem in belief space. All experiment configurations can be found in Ap-\npendix J. we plot the results of 3 repeated runs with different seeds with the line and shaded region showing the mean and mean±standard deviation respectively. The hyperparameters and dynamics coefficients used in the inter-bank experiments are the same as Han & Hu (2019) unless otherwise noted." }, { "heading": "4.1 INTER-BANK LENDING/BORROWING PROBLEM", "text": "We first consider an inter-bank lending and borrowing model (Carmona et al., 2013) where the dynamics of the log-monetary reserves of N banks is described by the diffusion process\ndXit = [ a(X̄ −Xit) + uit ] dt+ σ(ρdW 0t + √ 1− ρ2dW it ), X̄t = 1\nN N∑ i=1 Xit , i ∈ I. (6)\nThe state Xit ∈ R denotes the log-monetary reserve of bank i at time t > 0. The control uit denotes the cash flow to/from a central bank, where as a(X̄ − Xit) denotes the lending/borrowing rate of bank i from all other banks. The system is driven by N independent standard Brownian motion W it , which denotes the idiosyncratic noise, and a common noise W 0t . The cost function has the form,\nCi,t(X,ui;u−i) = 1\n2 u2i − qui(X̄ −Xi) + 2 (X̄ −Xi)2. (7)\nThe derivation of the FBSDEs and analytic solution can be found in Appendix F. We compare the result of implementation corresponding to Algorithm 2 on a 10-agent problem with analytic solution and previous work from Han & Hu (2019) with the same hyperparameters. Fig. 3 shows the performance of our method compared with analytic solution. The state and control trajectories outputted by the deep FBSDE solution are aligned closely with the analytic solution. Table 1 shows the numerical performance compared with prior work by Han & Hu (2019). 
Our method outperforms by Relative Square Error (RSE) metrics and computation wall time. The RSE is defined as following:\nRSE =\n∑ i∈I\n1≤j≤B(V̂ i(0,Xj(0))− V i(0,Xj(0)))2∑\ni∈I 1≤j≤B(V̂\ni(0,Xj(0))− V̄ i(0,Xj(0)))2 , (8)\nWhere V̂ i is the analytic solution of value function for ith agents at intial state Xj(0). The initial state Xj(0) is new batch of data sampled from same distribution as X(0) in the training phase. The batch size B is 256 for all inter-bank simulations. V i is the approximated value function for ith agent by FBSDE controller, and V̄ i is the average of analytic solution for ith agent over the entire batch.\nTime/Memory Complexity Analysis: We empirically verify the time and memory complexity of different implementation approaches introduced in 3.2, which is shown in Fig. 4. Note that all\n1The experiment is conducted on Nvidia TITAN RTX\nexperiments hereon correspond to the symmetric SDFP implementation in Algorithm 1 We also test sample efficiency and generalization capability of the invariant layer on a 50-agent problem trained over 100 stages. The number of initial states is limited during the training and the evaluation criterion is the terminal cost of the test set in which the initial states are different from the initial states during training. Fig. 2 showcases the improvement in sample efficiency and generalization performance of invariant layer. We suspect this is due to the network needing to learn with respect to a specific permutation of the input, whereas permutation invariance is built into the network architecture with invariant layer.\nImportance sampling: An important distinction of SDFP from the baseline in Han & Hu (2019) is the importance sampling scheme, which helps the LSTM architecture achieve a fast convergence rate during training. However, the baseline, which uses fully connected layer as backbone, is not suitable for importance sampling, as it would lead to an extremely deep network with fully connected layers from gradient topology perspective. Sandler et al. (2018) mentioned that the information loss is inevitable for this kind of fully connected deep network with nonlinear activation. On the other hand, LSTM does not suffer from this problem because of the existence of long and short memory. We illustrate the benefits of importance sampling for LSTM backbone and gradient flow of fully connected layer backbone in Appendix I.\nHigh dimension experiment: We also analyze the performance of our framework and that of Han & Hu (2019) both with and without invariant layer on high dimensional problems. We first demonstrate the mitigation of the invariant layer on the curse of many agents. Fig. 5 demonstrates the ablation experiment of the two deep FBSDE frameworks (SDFP and Han & Hu (2019)). In order to illustrate that invariant layer can mitigate the curse of dimensionality, we also integrate invariant layers on Han & Hu (2019) and shows the performance in the same figure 5. In this experiment, the weights of FBSDE frameworks with invariant layer are adjusted in order to dismiss the performance improvement resulting from increased weights from the invariant layers. The total cost and RSE are computed by averaging the corresponding values over the last twenty stages of each run. It can be observed from the plot that without invariant layer, the framework suffers from curse of many agents in the prediction of initial value as the RSE increases with respect to the number of agents. 
On the other hand, RSE increases at a slower rate with invariant layer. In terms of total cost, which is computed from the cost function defined in eq.2, our framework enjoys the benefits of importance sampling and invariant layer, and achieves better numerical results over the number of agents. We further analyse the influence of invariant layers in the training phase by demonstrating fig.7. Invariant layer helps mintage over-fitting phenomenon in the training and evaluation phase, meanwhile accelerating the training process, even though both of frameworks adopting same feature extracting backbone (LSTM) and importance sampling technique.\nWe also show that the invariant layer accelerates training empirically on a 500-agent problem. Fig. 6 shows that the FBSDE frameworks converge much faster with invariant layer than without it. We suspect that the acceleration effect results from increased sample efficiency. Note that the comparison is done on a 500-agent problem because the framework does not scale to 1000 agents without the invariant layer. A comparison of the two frameworks with invariant layer only on a 1000-agent problem can be found in Fig 16, which shows similar results to the 500-agent problem.\nSuperlinear Simulation: We also consider a variant of dynamics in section 4.1,\ndXit = [ a(X̄ −Xit)3 + uit ] dt+ σ(ρdW 0t + √ 1− ρ2dW it ), X̄t = 1\nN N∑ i=1 Xit , i ∈ I. (9)\nDue to the nonlinearity in the drift term, analytic solution or simple numerical representation of the Nash equilibrium does not exist (Han & Hu, 2019). The drift rate a is set to 1.0 to compensate for the vanishing drift term caused by super-linearity. Heuristically, the the distribution of control and state should be more concentrated than that of the linear dynamics. We compare the state and control of a fixed agent i at terminal time against analytic solution and deep FBSDE solution of the linear dynamics with the same coefficients. Fig. 8 is generated by evaluating the trained deep FBSDE model with a batch size of 50000. It can be observed that the solution from super-linear dynamics is more concentrated as expected. The terminal control distribution plot verifies that the super-linear drift term pushes the state back to the average faster than linear dynamics and thus requires less control effort. Since the numerical solution is not available in the superlinear case, we compare the total loss and training loss between baseline Han & Hu (2019) and our algorithm in the appendix 12." }, { "heading": "4.2 BELIEF SPACE AUTONOMOUS RACING", "text": "In this section, we demonstrate the general applicability of our framework on an autonomous racing example in belief space. We consider a 2-car autonomous racing competition problem with racecar dynamics ẋ = [v cos θ, v sin θ, uacc − cdragv, usteerv/L]T (10) where x = [x, y, v, θ]T represent the x, y position, forward velocity and heading respectively. Here we assume x, y, v, uacc ∈ R, usteer ∈ [−1, 1]. The goal of each player is to drive faster than the opponent, stay on the track and avoid collision. An additional competition loss can be added to facilitate competition between players. During the competition, players have access to the global augmented states and opponent’s history controller. Additionally, we assume that stochasticity enters the system through the control channels and have a continuous-time noisy observation model with full-state observation. 
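To make the simulated racing system concrete, the sketch below performs one Euler-Maruyama step of the race-car dynamics in eq. (10), with noise entering through the control channels as described above and a noisy full-state observation. This is an illustrative sketch rather than the authors' implementation; the values of c_drag, L, the time step, and the observation noise follow the configuration reported in Appendix J, while the function names and the shared random generator are our own choices.

```python
import numpy as np

C_DRAG, WHEELBASE = 0.01, 0.1     # c_drag and L from the experiment configuration
DT = 10.0 / 100                   # horizon T = 10 s discretized into 100 Euler steps
rng = np.random.default_rng(0)

def euler_step(state, u_acc, u_steer):
    """One Euler-Maruyama step of dx = (f(x) + G(x)u) dt + Sigma(x) dw for state [x, y, v, theta]."""
    state = np.asarray(state, dtype=float)
    x, y, v, theta = state
    u_steer = float(np.clip(u_steer, -1.0, 1.0))          # u_steer is restricted to [-1, 1]
    drift = np.array([v * np.cos(theta),
                      v * np.sin(theta),
                      u_acc - C_DRAG * v,
                      u_steer * v / WHEELBASE])
    dw = rng.normal(0.0, np.sqrt(DT), size=2)             # 2-d Brownian increment on the control channels
    diffusion = np.array([0.0, 0.0, dw[0], v / WHEELBASE * dw[1]])
    return state + drift * DT + diffusion

def observe(state):
    """Noisy full-state observation z = h(x) + m with h(x) = x and m ~ N(0, 0.01 I)."""
    state = np.asarray(state, dtype=float)
    return state + rng.normal(0.0, 0.1, size=state.shape)
```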
The FBSDE derivation of belief space stochastic dynamics is included in Appendix G.\nThe framework for the racing problem is trained with batch size of 64, and 100 time steps over a time horizon of 10 seconds. Since all the trials will run over 1 lapse of the circle, here we only show the first 8 second result for neatness. Fig. 13 demonstrate the capability of our framework. When there is no competition loss, both of cars can stay in the track. Since there is no competition between two cars, they demonstrate similar behaviors. When we add competition loss on both cars, both of them try to cut the corner in order to occupy the leading position as shown in the second plot in Fig. 13. If competition loss is present in only one of the two cars, then the one with competition loss will dominate the game as shown in the botton subplots of Figure 13. Notably the simulation is running in belief space where all states are estimated with\nobservation noise and additive noise in the system. The results emphasizes the generalization ability of our framework on more complex systems with higher state and control dimensions. Fig. 9 shows a single trajectory of each car’s posterior distribution." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a scalable deep learning framework for solving multi-agent stochastic differential game using fictitious play. The framework relies on the FBSDE formulation with importance sampling for sufficient exploration. In the symmetric game setup, an invariant layer is incorprated to render the framework agnostic to permutatoon of agents and further reduce the memory complexity. The scalability of this algorithm, along with a detailed sensitivity analysis, is demonstrated in an inter-bank borrowing/lending example. The framework achieves lower loss and scales to much higher dimensions than the state of the art. The general applicability of the framework is showcased on a belief space autonomous racing problem in simulation." }, { "heading": "A MULTI-AGENT HJB DERIVATION", "text": "Applying Bellman’s principle to the value function equation 3 as following\nV i(t,X(t)) = inf ui∈Ui E\n[ V i(t+ dt,X(t+ dt)) + ∫ t+dt t Cidτ ] = inf\nui∈Ui E [ Cidt+ V i(t,X(t)) + V it (t,X(t))dt\n+ V iTx (t,X(t))dX + 1\n2 tr(Vxx(t,X(t)ΣΣ\nT)dt ]\n= inf ui∈Ui\nE [ Cidt+ V i(t,X(t)) + V it (t,X(t))dt\n+ V iTx (t,X(t))((f +GU)dt+ ΣdW ) + 1\n2 tr(V ixx(t,X(t))ΣΣ\nT)dt ]\n= inf ui∈Ui\n[ Cidt+ V i(t,X(t)) + V it (t,X(t))dt\n+ V iTx (t,X(t))((f +GU)dt) + 1\n2 tr(V ixx(t,X(t))ΣΣ\nT)dt ]\n⇒ 0 = V it (t,X(t)) + inf ui∈Ui\n[ Ci + V iTx (t,X(t))(f +GU) ] + 1\n2 tr(V ixx(t,X(t))ΣΣ T)\n(11) Given the cost function assumption, the infimum can be obtained explicitly using optimal control u∗i,m = −R−1(GTi V ix +QTi x). With that we can obtain the final form of the HJB PDE as\nV i + h+ V iTx (f +GU0,−i) + 1\n2 tr(V ixxΣΣ T) = 0, V i(T,X) = g(X(T )). (12)" }, { "heading": "B FBSDE DERIVATION", "text": "Given the HJB PDE in equation 4, one can apply the nonlinear Feynman-Kac lemma Han & Hu (2019) to obtain a set of FBSDE as\ndX(t) = (f +GU0,−i)dt+ ΣdW , X(0) = x0 dV i = −hdt+ V iTx ΣdW, V (X(T )) = g(X(T )).\n(13)\nNote that the forward process X is driven by the control of all agents other than i. This means that agent i searches the state space with Brownian motion only to respond to other agents’ strategies. To increase the efficiency of the search, one can add any control from agent i to guide its exploration, as long as the backward process is compensated for accordingly. 
In this work, since we consider problems with a closed form solution of the optimal control ui,m, we add it to the forward process for importance sampling from a new set of FBSDEs.\ndX = (f +GU∗,−i)dt+ ΣdW , X(0) = x0 dV i = −(h+ V iTx GU∗,0)dt+ V Tx ΣdW, V (T ) = g(X(T )).\n(14)" }, { "heading": "C CONTINOUS TIME EXTENDED KALMAN FILTER", "text": "The Partial Observable Markov Decision Process is generally difficult to solve within infinite dimensional space belief. Commonly, the Value function does not have explicit parameterized form. Kalman filter overcome this challenge by presuming the noise distribution is Gaussian distribution. In order to deploy proposed Forward Backward Stochastic Differential Equation (FBSDE) model in the Belief space, we need to utilize extended Kalman filter in continuous time Jazwinski (1970) correspondingly. Given the partial observable stochastic system:\ndx dt = f(x, u, w, t), and z = h(x, v, t) (15)\nWhere f is the stochastic state process featured by a Gaussian noise w ∼ N (0, Q), h is the observation function while v ∼ N (0, R) is the observation noise. Next, we consider the linearization of the stochastic dynamics in equation 20 represented as follows:\nA = ∂f\n∂x ∣∣∣∣ x̂ , L = ∂f ∂w ∣∣∣∣ x̂ , C = ∂h ∂x ∣∣∣∣ x̂ ,M = ∂h ∂v ∣∣∣∣ x̂ , Q̃ = LQLT, R̃ = MRMT (16)\none can write the posterior mean state x̂ and prior covariance matrix P− estimation update rule by Simon (2006):\nx̂(0) = E[x(0)], P−(0) = E[(x(0)− x̂)(x(0)− x̂)T] K = PCTR̃−1\n˙̂x = f(x̂, u, w0, t) +K[z − h(x̂, v0, t)] Ṗ− = AP− + P−AT + Q̃− P−CTR̃−1CP−\n(17)\nWe follow the notation in (Simon, 2006), where x is the real state, x̂ is the mean of state estimated by Kalman filter based on the noisy sensor observation, P− represents for the covariance matrix of the estimated state, nominal noise values are given as w0 = 0 and v0 = 0, where superscript + is the posterior estimation and − is the prior estimation. Then we can define a Gaussian belief dynamics as b(x̂k, P−k ) by the mean state x̂ and variance P\n− of normal distribution N (x̂k, P−k ) The belief dynamics results in a decoupled FBSDE system as follows:\ndbk = g(bk,uk, 0)dt+ Σ(bk,uk, 0)dW, dW ∼ N (0, I)\ndV = −Ci ? dt+ V i T x ΣdW (18)\nwhere:\ng(bk,uk) =\n[ b(t,X(t),ui,m(t);u−i,m)\nvec(AkP − k + P − k A T k + Q̃k − P − k C T k R̃ −1 k CkP − k ) ] Σ(bk,uk) = [√ KkCkP − k dt\n0 ] V (T ) = g(X(T ))\nX̂(0) = E[X(0)]\nP−(0) = E[(X(0)− X̂)(X(0)− X̂)T]\n(19)" }, { "heading": "D DEEP SETS", "text": "A function f maps its domain from X to Y . Domain X is a vector space Rd and Y is a continuous space R. Assume the function take a set as input:X = {x1...xN}, then the function f is indifferent if it satisfies property (Zaheer et al., 2017).\nProperty 1. A function f : X → Y defined on sets is permutation invariant to the order of objects in the set. i.e. For any permutation function π:f({x1...xN}) = f( { xπ(1)...xπ(N) } )\nIn this paper, we discuss when f is a nerual network strictly.\nTheorem 1 X has elements from countable universe. A function f(X) is a valid permutation invariant function, i.e invariant to the permutation of X , iff it can be decomposed in the from ρ( ∑ x∈X φ(x)), for appropriate function ρ and φ.\nIn the symmetric multi-agent system, each agents is not distinguishable. This property gives some hints about how to extract the features of −ith agents by using neural network. The states of −ith agents can be represented as a set:X = {X1, X2, ..., Xi−1, Xi+1, ..., XN}. 
We want to design a neural network f which has the property of permutation invariant. Specifically, φ is represented as a one layer neural network and ρ is a common nonlinear activation function.\nE INVARIANT LAYER ARCHITECTURE\nThe architecture of invariant layer is described in Fig. 10. The input of the layer is the states at time step t. The Invariant Model module in Fig. 10 is described in Appendix D, where φ is a neural network and ρ is nonlinear activation function. The specific configuration of neural network in Invariant model can be found in J.\nNoticing that all the agents has the access to the global states, we define the state input features of neural network for ith agent as:\nXt,i = {xi, x1, x2..., xi−1, xi+1, ...xN} , (20)\nwith shape of [BS,N ]. In the other word, we always put own feature at first place. For each agent i, there exists such feature tensor, then the shape of input tensor will become [BS,N,N ] for invariant layer. In invariant layer, we first separate the input feature Xt into two parts: Xt,i and Xt,−i. Then the features of −ith agents Xt,−i will be sent to the invariant model. The shape of Xt,−i will be [B,N,N − 1] where N is the number of agents. First,we could use neural network to map the feature into Nf dimension space, where Nf is the feature dimensions. Then the shape of the tensor will become [BS,N,N − 1, Nf ], After summing up the features of all the element in the set, the dimension of the tensor would reduce to [BS,N, 1, Nf ], and we denote this feature tensor as F1. However, the memory complexity is O(N2 ×Nf ) which is not tolerable when the number of agent N increases. Alternatively, we can simply mapping the feature tensor [BS,N ] into desired feature dimension Nf , then the tensor would become [BS,N,Nf ], and we denote it to be F2. Now we create another tensor which is the average of features of element in set with size [BS, 1, Nf ] and we denote it to be F̄2. Then we denote F ′2 = (F̄2×N−F2)/(N−1) which has size of [BS,N,Nf ]. We can find that F ′2 = F1, and the memory complexity of computing F ′ 2 is just O(N). The derivation is true if the system is symmetric and the agents are not distinguishable. The trick can be extended to high state dimension for individual agent.\nF INTERBANK\nBy pluging the running cost to the HJB function, one can have,\nVi,t + inf ui∈Ui N∑ j=1 [a(X̄ −Xj) + u2j ]Vxj + 1 2 u2i − qui(X̄ −Xi) + 2 (X̄ −Xi)2 + 1\n2 tr(Vxx,iΣΣ\nT) = 0.\n(21)\nBy computing the infimum explicitly, the optimal control of player i is:ui(X, t) = q(X̄ −Xi) − Vx,i(X, t). The final form of HJB can be obtained as\nVi,t + 1\n2 tr(Vxx,iΣΣ T) + a(X̄ −Xi)Vx,i + ∑ j 6=i [a(X̄ −Xj) + uj ]Vx,j\n+ 2 (X̄ −Xi)2 −\n1 2 (q(X̄ −Xi)− Vx,i)2 = 0\n(22)\nApplying Feynman-Kac lemma to equation 22, the corresponding FBSDE system is\ndX(t) = (f(X(t), t) +G(X(t), t)u(t))dt+ Σ(t,X(t))dWt, X(0) = x0\ndVi = −[ 2 (X̄ −Xi)2 −\n1 2 (q(X̄ −Xi)− Vx,i)2 + ui]dt+ V TxiΣdW, V (T ) = g(X(T )).\n(23)" }, { "heading": "G BELIEF CAR RACING", "text": "The full stochastic model can be written as dx = (f(x) +G(x)u)dt+ Σ(x)dw, z = h(x) +m\nf(x) = v cos θv sin θ−cdragv 0 , G(x) = Σ(x) = 0 00 01 0 0 v/L , h(x) = x (24) Where dw is standard brownian motion. We consider the problem of two cars racing on circle track. The cost function of each car is designed as\nJt = exp (∣∣x2 a2 + y2 b2 − 1 ∣∣)︸ ︷︷ ︸\ntrack cost\n+ ReLU ( − v )︸ ︷︷ ︸\nvelocity cost\n+ exp ( − d)︸ ︷︷ ︸\ncollision cost\nWhere d is Euclidean distance between two cars. 
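As an illustration only, the running cost defined above could be evaluated as in the sketch below. The track semi-axes a and b are placeholder values, since they are not specified in the text, and the helper name is ours.

```python
import numpy as np

A_TRACK, B_TRACK = 2.0, 1.0    # hypothetical ellipse semi-axes of the track (not given in the paper)

def running_cost(ego, opponent):
    """Track + velocity + collision cost for one car; ego and opponent are states [x, y, v, theta]."""
    x, y, v, _ = np.asarray(ego, dtype=float)
    track_cost = np.exp(abs(x ** 2 / A_TRACK ** 2 + y ** 2 / B_TRACK ** 2 - 1.0))
    velocity_cost = max(0.0, -v)                              # ReLU(-v): penalizes driving backwards
    d = np.linalg.norm(np.asarray(ego[:2], dtype=float) - np.asarray(opponent[:2], dtype=float))
    collision_cost = np.exp(-d)                               # grows sharply as the two cars approach
    return track_cost + velocity_cost + collision_cost
```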
In this showcase, we use continuous time extended Kalman Filter to propagate belief space dynamics described in equation 19. The detailed algorithm for Belief space deep fictitious play FBSDE can be found in Appendix. We introduce the concept of game by using an additional competition cost:\nJcompetition = exp(− [ cos(θ) sin(θ) ]T [ x1 − x2 y1 − y2 ] )\nWhere xi, yi is the x, y position of ith car. When ith car is leading, the competition loss will be minor, and it will increase exponentially when the car is trailing. Thanks to decoupled BSDE structure, each car can measure this competition loss separately and optimize the value function individually." }, { "heading": "H ANALYTIC SOLUTION FOR INTER-BANK BORROWING/LENDING PROBLEM", "text": "The analytic solution for linear inter-bank problem was derived in Carmona et al. (2013). We provide them here for completeness. Assume the ansatz for HJB function is described as:\nVi(t,X) = η(t)\n2 (X̄ −Xi)2 = µ(t)i ∈ I (25)\nWhere η(t), µ(t) are two scalar functions. The optimal control under this ansatz is:\nα?i (t,X) =\n[ q + η(t)(1− 1\nN )\n] (X̄ −Xi) (26)\nBy pluginging the ansatz into HJB function derived in 22, one can have,\nη̇(t) = 2(a+ q)η(t) + (1− 1 N2 )η2(t)− ( − q2), η(T ) = c, µ̇(t) = −1 2 σ2(1− ρ2)(1− 1 N )η(t), µ(T ) = 0.\n(27)\nThere exists the analytic solution for the Riccati equation described above as,\nη(t) = −( − q2)(e(δ+−δ−)(T−t) − 1)− c(δ+e(δ+−δ−)(T−t) − δ−)\n(δ−e(δ+−δ−)(T−t) − δ+)− c(1− 1/N2)(e(δ+−δ−)(T−t))− 1 . (28)\nWhere δ± = −(a+ q)± √ R and R = (a+ q)2 + (1− 1/N2)( − q2)\nI IMPORTANCE SAMPLING\nFig. 14 demonstrates how fully connected layers with importance sampling would lead to a extreme deep fully connected neural network. Fig. 15 demonstrates how importance sampling helps increase convergence rate in FBSDE with LSTM backbone. The experiment is conducted with 50 agents and 50 stages. All the configuration is identical except the existence of importance sampling." }, { "heading": "J EXPERIMENT CONFIGURATIONS", "text": "This Appendix elaborates the experiment configurations for section 4. For all the simulation in section 4, the number of SGD iteration is fixed as NSGD = 100. We are using Adam as optimizer\nwith 1E-3 learning rate for all simulations. In section 4.1, For the prediction of initial value function, all of frameworks are using 2 layers feed forward network with 128 hidden dimension. For the baseline framework, we followed the suggested configuration motioned in Han et al. (2018). At each time steps, Vx,i is approximated by three layers of feed forward network with 64 hidden dimensions. We add batch norm Ioffe & Szegedy (2015) after each affine transformation and before each nonlinear activation function. For Deep FBSDE with LSTM backbone, we are using two layer LSTM parametrized by 128 hidden state. If the framework includes the invariant layer, the number of mapping features is chosen to be 256. The hyperparameters of the dynamics is listed as following:\na = 0.1, q = 0.1, c = 0.5, = 0.5, ρ = 0.2, σ = 1, T = 1. (29)\nIn the simulation, the time horizon is separated into 40 time-steps by Euler method. Learning rate is chosen to be 1E-3 which is the default learning rate for Adam optimizer. The initial state for each agents are sampled from the uniform distribution [δ0, δ1]. Where δ0 is the constant standard deviation of state X(t) during the process. 
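The configuration above can be exercised with a short forward simulation of the inter-bank dynamics in eq. (6). The sketch below is illustrative rather than the authors' code: it uses the listed coefficients, but replaces the Nash-equilibrium feedback of eq. (26) with a constant placeholder gain and draws a placeholder initial condition.

```python
import numpy as np

N_AGENTS, N_STEPS, T_HORIZON = 10, 40, 1.0
A_RATE, RHO, SIGMA = 0.1, 0.2, 1.0          # a, rho, sigma from the configuration above
DT = T_HORIZON / N_STEPS
GAIN = 0.5                                  # placeholder for q + eta(t) * (1 - 1/N) in eq. (26)
rng = np.random.default_rng(0)

def simulate(x0):
    """Euler-Maruyama simulation of eq. (6) under a mean-reverting feedback control."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(N_STEPS):
        x_bar = x.mean()
        u = GAIN * (x_bar - x)                                  # feedback towards the average reserve
        dw0 = rng.normal(0.0, np.sqrt(DT))                      # common noise shared by all banks
        dwi = rng.normal(0.0, np.sqrt(DT), size=N_AGENTS)       # idiosyncratic noise per bank
        x = x + (A_RATE * (x_bar - x) + u) * DT \
              + SIGMA * (RHO * dw0 + np.sqrt(1.0 - RHO ** 2) * dwi)
        path.append(x.copy())
    return np.stack(path)                                       # shape (N_STEPS + 1, N_AGENTS)

trajectory = simulate(rng.normal(0.0, 1.0, size=N_AGENTS))      # placeholder initial reserves
```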
In the evaluation, we are using 256 new sampled trajectory which are different from training trajectory to evaluate the performance in RSE error and total cost error. The number of stage is set to be 100 which is enough for all framework to converge.\nIn section 4.2, the hyperparameter is listed as following:\ncdrag = 0.01, L = 0.1, c = 0.5, T = 10.0 (30)\nThe observation noise is sampled from Gaussian noise m ∼ N (0, 0.01I). The time horizon is enrolled into 100 time-steps by Euler method. In this experiments, the initial value Vi is approximated a single trainable scale and Vx,i(t) is approximated by two layers of LSTM parametrized with 32 hidden dimensions. The number of stage is set to be 10.\nAlgorithm 2 Scalable Deep Fictitious Play FBSDE 1: Hyper-parameters:N : Number of players, T : Number of timesteps, M : Number of stages in\nfictitious play, Ngd: Number of gradient descent steps per stage, U0: the initial strategies for players in set I, B: Batch size, : training threshold, ∆t: time discretization 2: Parameters:V (x0;φ): Network weights for initial value prediction, θ: Weights and bias of fully connected layers and LSTM layers. 3: Initialize trainable papermeters:θ0, φ0 4: while LOSS is above certain threshold do 5: for m← 1 to M do 6: for all i ∈ I in parallel do 7: Collect opponent agent’s policy fm−1LSTM−i(·), f m−1 FC−i\n(·) 8: for l← 1 to Ngd do 9: for t← 1 to T − 1 do\n10: for j ← 1 to B in parallel do 11: Compute network prediction for ith player: V ixi,j,t = f m FCi (fmLSTMi(X j t ; θ l−1 i ))\n12: Compute ith optimal Control:uj,?i,t = −R −1 i (G T i V i xi,j,t +QTi x j i ) 13: Infer −ith players’ network prediction: V ix−i,j,t = f m−1 FC−i (fm−1LSTM−i(Xt; θ−i)) 14: Compute −ith optimal Control:uj,?−i,t = −R −1 −i (G T −iV j x−i,t +Q T −ix j −i) 15: Sample noise ∆W j ∼ N (0,∆t) 16: Propagate FSDE: Xjt+1 = fFSDE(X j t ,u j,? i,t ,u j,∗ −i,t,∆W j , t) 17: Propagate BSDE: V ij,t+1 = fBSDE(V i j,t,X j t ,u j,∗ i,t ,∆W\nj , t) 18: end for 19: end for 20: Compute loss: L = 1B ∑B j=1(V i,? j,T − V ij,T )2 21: Gradient Update: θl, φl 22: end for 23: end for 24: end for 25: end while" } ]
2020
MULTI-AGENT DEEP FBSDE REPRESENTATION FOR LARGE SCALE STOCHASTIC DIFFERENTIAL GAMES
SP:e4664a073afd05446cb1ddc217163692a9a12c1c
[ "This paper attempts to answer the four questions raised from the mutual information estimator. To this end, this paper investigates why the MINE succeeds or fails during the optimization on a synthetic dataset. Based on the observations and discussions, the paper then proposes a novel lower bound to regularize the neural networks and alleviate the problems of MINE." ]
With the variational lower bound of mutual information (MI), the estimation of MI can be understood as an optimization task via stochastic gradient descent. In this work, we start by showing how Mutual Information Neural Estimator (MINE) searches for the optimal function T that maximizes the Donsker-Varadhan representation. With our synthetic dataset, we directly observe the neural network outputs during the optimization to investigate why MINE succeeds or fails: We discover the drifting phenomenon, where the constant term of T is shifting through the optimization process, and analyze the instability caused by the interaction between the logsumexp and the insufficient batch size. Next, through theoretical and experimental evidence, we propose a novel lower bound that effectively regularizes the neural network to alleviate the problems of MINE. We also introduce an averaging strategy that produces an unbiased estimate by utilizing multiple batches to mitigate the batch size limitation. Finally, we show that L2 regularization achieves significant improvements in both discrete and continuous settings.
[]
[ { "authors": [ "David Barber", "Felix V Agakov" ], "title": "Information maximization in noisy channels: A variational approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "Devon Hjelm" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Vincent François-Lavet", "Yoshua Bengio", "Doina Precup", "Joelle Pineau" ], "title": "Combined reinforcement learning via abstract representations", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Weihao Gao", "Sreeram Kannan", "Sewoong Oh", "Pramod Viswanath" ], "title": "Estimating mutual information for discrete-continuous mixtures", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jiantao Jiao", "Weihao Gao", "Yanjun Han" ], "title": "The nearest neighbor information estimator is adaptively near minimax rate-optimal", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Xiao Lin", "Indranil Sur", "Samuel A Nastase", "Ajay Divakaran", "Uri Hasson", "Mohamed R Amer" ], "title": "Data-efficient mutual information neural estimator", "venue": null, "year": 1905 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "arXiv preprint arXiv:1811.04251,", "year": 2018 }, { "authors": [ "Sudipto Mukherjee", "Himanshu Asnani", "Sreeram Kannan" ], "title": "Ccmi: Classifier based conditional mutual information estimation", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "XuanLong Nguyen", "Martin J 
Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Daniil Polykovskiy", "Alexander Zhebrak", "Dmitry Vetrov", "Yan Ivanenkov", "Vladimir Aladinskiy", "Polina Mamoshina", "Marine Bozdaganyan", "Alexander Aliper", "Alex Zhavoronkov", "Artur Kadurin" ], "title": "Entangled conditional adversarial autoencoder for de novo drug discovery", "venue": "Molecular pharmaceutics,", "year": 2018 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron Van Den Oord", "Alex Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Mirco Ravanelli", "Yoshua Bengio" ], "title": "Learning speaker representations with mutual information", "venue": "arXiv preprint arXiv:1812.00271,", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Andrew M Saxe", "Yamini Bansal", "Joel Dapello", "Madhu Advani", "Artemy Kolchinsky", "Brendan D Tracey", "David D Cox" ], "title": "On the information bottleneck theory of deep learning", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Ravid Shwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": "arXiv preprint arXiv:1703.00810,", "year": 2017 }, { "authors": [ "Jiaming Song", "Stefano Ermon" ], "title": "Understanding the limitations of variational mutual information estimators", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Petar Veličković", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "arXiv preprint arXiv:1809.10341,", "year": 2018 }, { "authors": [ "Poole" ], "title": "2019)): Establish a single estimate through the average of estimated MI from each batch", "venue": null, "year": 2019 }, { "authors": [ "Oord et al", "Poole" ], "title": "2019) and ReMINE-L1 (our method with L1 regularization)", "venue": null, "year": 2019 }, { "authors": [ "Mukherjee" ], "title": "Effectiveness on the Conditional Mutual Information Estimation Task We compare the performance of various estimators on the conditional MI (CMI) estimation task. 
To set the baseline, we chose CCMI (Mukherjee et al., 2020), MINE (Belghazi et al., 2018), and KSG estimator (Kraskov et", "venue": null, "year": 2021 }, { "authors": [ "Mukherjee" ], "title": "hyper-parameter settings such as network structures and optimizer parameters. We only changed the objective function of MINE to test our method. As shown in Fig. 15, ReMINE can reach comparable performance without changing the form to classification loss. Also, ReMINE produces stable estimates compared to MINE", "venue": "CMI estimation results on Model", "year": 2000 } ]
[ { "heading": "1 INTRODUCTION", "text": "Identifying a relationship between two variables of interest is one of the great linchpins in mathematics, statistics, and machine learning (Goodfellow et al., 2014; Ren et al., 2015; He et al., 2016; Vaswani et al., 2017). Not surprisingly, this problem is closely tied to measuring the relationship between two variables. One of the fundamental approaches is information theory-based measurement, namely the estimation of mutual information (MI). Recently, Belghazi et al. (2018) proposed a neural network-based MI estimator, which is called Mutual Information Neural Estimator (MINE). Due to its differentiability and applicability, it motivated several researches such as various loss functions bridging the gap between latent variables and representations (Chen et al., 2016; Belghazi et al., 2018; Oord et al., 2018; Hjelm et al., 2019), and methodologies identifying the relationship between input, output and hidden variables (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017; Saxe et al., 2019). Although many have shown the computational tractability and its usefulness, many intriguing questions about the MI estimator itself remain unanswered.\n• How does the neural network inside MINE behave when estimating MI? • Why does MINE loss diverge in some cases? Where does the instability originate from? • Can we make a more stable estimate on small batch size settings? • Why does the value of each term in MINE loss are shifting even after the estimated MI\nconverges? Are there any side effects of this phenomenon?\nThis study attempts to answer these questions by designing a synthetic dataset to interpret network outputs. Through keen observation, we dissect the Donsker-Varadhan representation (DV) one by one and conclude that the instability and the drifting are caused by the interrelationship between the stochastic gradient descent based optimization and the theoretical properties of DV. Based on these insights, we extend DV to draw out a novel lower bound for MI estimation, which mitigates the aforementioned problems, and circumvents the batch size limitation by maintaining the history of network outputs. We furthermore look into the L2 regularizer form of our bound in detail and analyze how various hyper-parameters impact the estimation of MI and its dynamics during the optimization process. Finally, we demonstrate that our method, called ReMINE, performs favorably against other existing estimators in multiple settings." }, { "heading": "2 RELATED WORKS", "text": "Definition of Mutual Information The mutual information between two random variablesX and Y is defined as\nI(X;Y ) = DKL(PXY ||PX ⊗ PY ) = EPXY [log dPXY dPX⊗Y ] (1)\nwhere PXY and PX ⊗ PY are the joint and the marginal distribution, respectively. DKL is the Kullback-Leibler (KL) divergence. Without loss of generality, we consider PXY and PX ⊗ PY as being distributions on a compact domain Ω ⊂ Rd.\nVariational Mutual Information Estimation Recent works on MI estimation focus on training a neural network to represent a tight variational MI lower bound, where there are several types of representations. Although these methods are known to have statistical limitations (McAllester & Stratos, 2018), their versatility is widely employed nonetheless (Hjelm et al., 2019; Veličković et al., 2018; Polykovskiy et al., 2018; Ravanelli & Bengio, 2018; François-Lavet et al., 2019).\nOne of the most commonly used is the Donsker-Varadhan representation, which is first used in Belghazi et al. 
(2018) to estimate MI through neural networks. Lemma 1. (Donsker-Varadhan representation (DV))\nI(X;Y ) = sup T :Ω→R\nEPXY [T ]− log(EPX⊗PY [eT ]). (2)\nwhere both the expectations EPXY [T ] and EPX⊗PY [eT ] are finite.\nHowever, as the second term in Eq. (2) leads to biased gradient estimates with a limited number of samples, MINE uses exponential moving averages of mini-batches to alleviate this problem. To further improve the sampling efficiency of MINE, Lin et al. (2019) proposes DEMINE that partitions the samples into train and test sets.\nOther representations based on f-measures are also proposed by Nguyen et al. (2010); Nowozin et al. (2016), which produce unbiased estimates and hence eliminating the need for additional techniques. Lemma 2. (Nguyen, Wainwright, and Jordan representation (NWJ))\nI(X;Y ) = sup T :Ω→R\nEPXY [T ]− EPX⊗PY [eT−1], (3)\nwhere the bound is tight when T = log(dP/dQ) + 1.\nNevertheless, if MI is too big, estimators exhibit large bias or variation (McAllester & Stratos, 2018; Song & Ermon, 2020). To balance in between, Poole et al. (2019) design a new estimator Iα that interpolates Contrastive Predictive Coding (Oord et al., 2018) and NWJ.\nYet, these methods concentrate on various stabilization techniques rather than revealing the dynamics inside the black box. In this paper, we focus on the DV representation and provide intuitive understandings of the inner mechanisms of neural network-based estimators. Based on the analysis, we introduce a new regularization term for MINE, which can effectively remedy its weaknesses theoretically and practically." }, { "heading": "3 HOW DOES MINE ESTIMATE?", "text": "Before going any further, we first observe the statistics network output in MINE during the optimization process using our novel synthetic dataset, and identify and analyze the following phenomena:\n• Drifting phenomenon (Fig. 1a), where estimates of EPXY [T ] and log(EPX⊗PY [eT ]) drifts in parallel even after the MI estimate converges. • Exploding network outputs (Fig. 1d), where smaller batch sizes cause the network outputs\nto explode, but training with larger batch size reduces the variance of MI estimates (Fig. 2a). • Bimodal distribution of the outputs (Fig. 2b), where the network not only classifies input\nsamples but also clusters the network outputs as the MI estimate converges.\nBased on these observations, we analyze the inner workings of MINE, and understand how batch size affects MI estimation." }, { "heading": "3.1 EXPERIMENT SETTINGS", "text": "Dataset. We designed a one-hot discrete dataset with uniform distribution U(1, N) to estimate I(X;X) = logN with MINE, while easily discerning samples of joint distribution X,X from marginal distribution X ⊗ X . Additionally, we use one-hot representation to increase the input dimension, resulting in more network weights to train. In this paper, we used N = 16.\nNetwork settings. We designed a simple statistics network T with a concatenated vector of dimension N × 2 = 32 as input. We pass the input through two fully connected layers with ReLU activation by widths: 32−256−1. The last layer outputs a single scalar with no bias and activation. We used stochastic gradient descent (SGD) with learning rate 0.1 to optimize the statistics network." }, { "heading": "3.2 OBSERVATIONS", "text": "We can observe the drifting phenomenon in Fig. 1a, where the statistics of the network output are adrift even after the convergence of MINE loss. 
The analysis for this phenomenon will be covered in more detail with theoretical results in Section 4. This section will focus extensively on the relationship between batch size and logsumexp, and the classifying nature of MINE loss.\nBatch size limitation. MINE estimates in a batch-wise manner, i.e., MINE uses samples inside a single batch when estimating EPXY [T ] and log(EPX⊗PY [eT ]). Consider the empirical DV\nÎ(X;Y ) = sup Tθ:Ω→R\nE(n) P̂XY [Tθ]− logE(n)̂PX⊗PY [e Tθ ], (4)\nwhere E(n) P̂ is an empirical average associated with batch size n. Therefore, the variance of Î(X;Y ) increases as the batch size decreases. The observation in Fig. 2a is consistent with the batch size limitation problem (McAllester & Stratos, 2018; Song & Ermon, 2020), which shows that MINE must have a batch size proportional to the exponential of true MI to control the variance of the estimation.\nExploding network outputs. We can understand the output explosion problem in detail by comparing Fig. 1b and Fig. 1d. During optimization, network outputs of joint samples get increased by the first term of Eq. (4), where the inverse of the batch size is multiplied to the gradient of each network output. On the other hand, the output of marginal samples get decreased by the second term of Eq. (4) which concentrates the gradient to the maximum output. Note that the second term is dominated by the maximum network output due to logsumexp, which is a smooth approximation of max. As a single batch is sampled from the true underlying distribution, joint case samples may or may not exist. If it exists, then the joint sample output dominates the term, and its output gets de-\ncreased accordingly, while other non-joint sample outputs also get slightly decreased. In summary, the second term acts as an occasional restriction for the increase of joint sample network outputs.1\nThe second term imposes a problem when the batch size is not large enough. With reduced sample size, joint samples that dominate the second term are getting rare. For the case where joint sample does not exist, marginal sample network outputs decrease much faster compared to the opposite case, and joint sample network outputs are more rarely restricted; thus network outputs diverge in both directions (Fig. 1d), and the second term vibrates between two extreme values depending on whether the joint case occurred (Fig. 1c). This obviously leads to numerical instability and estimation failure.\nBimodal distribution of the outputs. We furthermore observed network outputs directly, as both averaging terms of DV can inhibit the observation of how the statistics network acts on each sample. From the neural network viewpoint, whether each sample is realized from the joint or the marginal distribution is not distinguishable for the joint cases in marginal samples. Therefore, the statistics network has no means but to return the same output value, as it can be seen in Fig. 1b, indicating that the network can only separate joint and non-joint cases. This approach provides a clue that the network is solving a classification task, isolating joint samples from marginal samples, although the statistics network is only provided with samples from the joint and marginal distribution.\nWe observed the distribution of network outputs in detail, on the case where only the marginal samples are fed to the statistics network in Fig. 2b. 
It stands to reason that the network outputs follow a particular distribution, as the network output estimates a log-likelihood ratio between joint and marginal distribution with an added constant (Lemma 3). Through this, we can view the estimated MI as a sample average; hence Fig. 2a resembles the Gaussian noise by Central Limit Theorem (CLT).\nLet us continue by concentrating on each network output. There is no distinction between the loglikelihood ratio of the samples in the same class for the one-hot discrete dataset: j for the joint case and j for the non-joint case. This explains the classifying nature of the statistics network, and there have to be exactly two clusters in Fig. 2b. Also, as j becomes −∞, exp(j) nears 0, and exp(j) is few magnitudes bigger than exp(j) (see Fig. 2b). As mentioned above, few joint cases dominate the second term, so the second term becomes inherently noisier than the first term. Note that the effectiveness of conventional methods, such as applying exponential moving average to the second term (Belghazi et al., 2018) or clipping the network output values to restrict the magnitude of network outputs (Song & Ermon, 2020) can also be understood with the analysis above.\nIn addition, we cannot interpret the network outputs directly as the log-likelihood ratio due to unregularized outputs, or the drifting problem. We will look into this fundamental limitation of MINE in more detail in the next section.\n1Loosely speaking, the first term slowly increases a lot of joint samples network outputs, in contrary to the second term which quickly decreases a few joint sample network outputs." }, { "heading": "4 THE PROPOSED METHOD: REMINE", "text": "We look into the drifting problem of Fig. 1a in detail to introduce a novel loss with a regularizer that constrains the magnitude of network outputs and enables the use of network outputs from multiple batches. All proofs are provided in Appendix.\nFirst, let us concentrate on the optimal function T ∗, which can be directly drawn from the DV representation. We start with the results of Belghazi et al. (2018), Lemma 3. (Optimal Function T ∗). Let P and Q be distributions on Ω ⊂ Rd. For some constant C∗ ∈ R, there exists an optimal function T ∗ = log dPdQ + C\n∗ such that DKL(P||Q) = EP[T ∗] − log(EQ[eT ∗ ]).\nNote that Mukherjee et al. (2020) directly utilize this fact to model the statistics network. Let us extend the result above, and show that C∗ can be any real number and T ∗ still be optimal. Lemma 4. (Family of Optimal Functions). For any constant C ∈ R, T = log dPdQ + C satisfies DKL(P||Q) = EP[T ]− log(EQ[eT ]).\nThis explains the drifting phenomenon we encountered in Fig. 1a. C∗ is drifting as there are no penalties on different C∗s. Drifting can be stopped by freezing the network weights (Lin et al., 2019), but is there a direct way to regularize C so that the neural network can concentrate on finding a single solution, rather than a family of solutions? Theorem 5. (ReMINE Loss Function) Let d be a distance function on R. For any constant C ′ ∈ R and function T : Ω→ R,\nDKL(P||Q) = sup T :Ω→R EP[T ]− log(EQ[eT ])− d(log(EQ[eT ]), C ′),\nNote that for the optimal T ∗, EPXY [T ∗] = I(X;Y ) + C ′ and log(EPX⊗Y [eT ∗ ]) = C ′. Based on Theorem 5, we propose a novel loss function by adding a new term d(log(EPX⊗PY [eT ]), C ′) that regularizes the drifting of C∗. 
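As a concrete illustration of this objective, the sketch below performs one gradient step on the regularized lower bound with the squared-distance choice d(x, C) = λ(x − C)^2 that is examined in Section 5. It is a minimal PyTorch-style sketch rather than the reference implementation; the statistics network, the optimizer, and the values of λ and C are assumptions supplied by the user.

```python
import math
import torch

def remine_step(t_net, optimizer, xy_joint, xy_marginal, C=0.0, lam=0.1):
    """One ascent step on E_P[T] - log E_Q[e^T] - lam * (log E_Q[e^T] - C)^2."""
    t_joint = t_net(xy_joint).flatten()                   # T_theta on joint samples
    t_marg = t_net(xy_marginal).flatten()                 # T_theta on marginal samples
    e_joint = t_joint.mean()
    log_e_marg = torch.logsumexp(t_marg, dim=0) - math.log(t_marg.numel())
    objective = e_joint - log_e_marg - lam * (log_e_marg - C) ** 2
    optimizer.zero_grad()
    (-objective).backward()                               # descend on the negative of the bound
    optimizer.step()
    return (e_joint - log_e_marg).item()                  # per-batch estimate, regularizer excluded

# Example wiring (assumed): the 40-256-256-1 joint architecture of Section 5.2.
# t_net = torch.nn.Sequential(torch.nn.Linear(40, 256), torch.nn.ReLU(),
#                             torch.nn.Linear(256, 256), torch.nn.ReLU(),
#                             torch.nn.Linear(256, 1))
# optimizer = torch.optim.Adam(t_net.parameters(), lr=5e-4)
```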
The details of the ReMINE algorithm is as follows.\nAlgorithm 1: ReMINE θ ← Initialize network parameters, K ←Moving average window size, i← 0 repeat\nDraw J samples from the joint distribution: (x(1)i , y (1) i ), · · · (x (J) i , y (J) i ) ∼ PXY Draw M samples from the marginal distribution: (x(1)i , y (1) i ), · · · (x (M) i , y (M) i ) ∼ PX ⊗ PY Evaluate the lower-bound: ÊPXY ← 1J ∑J j=1 Tθ(x (j) i , y (j) i ) , ÊPX⊗PY ← log( 1M ∑M m=1 e Tθ(x (m) i ,y (m) i ))\nν(θ)← ÊPXY − ÊPX⊗PY − d(ÊPX⊗PY , C) Update the statistics network parameters: θ ← θ +∇θν(θ) Estimate MI based on the last window W = [max(0, i−K + 1),min(K, i)] of size K: Î(X;Y ) = 1 J·|W | ∑ w∈W ∑J j=1 Tθ(x (j) w , y(j)w )− log( 1M·|W | ∑ w∈W ∑M m=1 e Tθ(x (m) w ,y (m) w ))\nNext iteration: i← i+ 1 until convergence;\nReMINE differs from other MINE-like methods that rely on single batch estimates, as ReMINE can utilize all the previous network outputs after the convergence of Î(X;Y ) to establish a final estimate. To demonstrate why these MINE-like methods cannot use the network outputs from multiple batches, we first describe two basic averaging strategies and then show that both methods produce a biased estimate when the statistics network T is drifting. Theorem 6. (Estimation Bias caused by Drifting) The two averaging strategies below produce a biased MI estimate when the drifting problem occurs.\n1. Macro-averaging (similar to that of Poole et al. (2019)): Establish a single estimate through the average of estimated MI from each batch.\n2. Micro-averaging (our method): Calculate DV representation using the average of the each individual network outputs." }, { "heading": "5 EXPERIMENTAL RESULTS OF L2 REGULARIZATION", "text": "Many choices exist for the distance function d when implementing ReMINE. Here, we choose d(x,C) = λ(x − C)2 and explain the rationale behind it. We also explore different choices of hyperparameters C and λ. We show that ReMINE blocks the drifting effect by restricting log(EPX⊗PY [eT ]) estimates to be C, and visualize the loss surface of ReMINE for different choices of λs. Finally, we show that ReMINE achieves better performance than state-of-the-art methods in the continuous domain, and shows comparable performance on self-consistency tests.2" }, { "heading": "5.1 EFFECTIVENESS OF L2 REGULARIZATION", "text": "For the sake of brevity, let batch size B, step size γ, f = E(n) P̂XY [Tθ], g = logE(n)̂PX⊗PY [e Tθ ], and C = 0. Then,∇Î(X;Y ) = ∇f −∇g− 2λg∇g = ∇f −∇g(1 + 2λg). As previously discussed, f increases each B joint sample outputs with (γ/B)∇Tθ. On the other hand, g is the approximation of the maximum marginal sample output, so the gradient becomes close to γ(1 + 2λg)∇Tθ for the maximum output. We can break down the dynamics of g into two parts, depending on whether joint samples exist in g. If not, maximum marginal sample output gets reduced with step size γ(1+2λg), which is, unlike MINE, adaptively adjusted by the size of g. Hence, ReMINE regularizes the maximum output to be centered around − 12λ , preventing the network outputs from diverging to −∞. If any joint sample exists, its network output will be big enough to dominate g. Step size is also γ(1 + 2λg), which regularizes the joint sample network outputs more strongly as it increases. This, too, helps to avoid the output explosion in the +∞ side, as shown in Fig. 3.\nImpact of C. Fig. 4a shows the impact of changing C on MI estimation, with the same settings as Fig. 1a. 
We observed that the newly added regularizer penalizes the log(EPX⊗PY [eT ]) term to converge towards C as expected, without losing the ability to estimate MI.\nImpact of λ. As we observed in Section 3.2, the network outputs of joint and non-joint cases converge to j and j, respectively. Using this, we visualize the effect of the proposed method by drawing the loss surface of MINE and ReMINE for the one-hot discrete dataset in Fig. 5.\nLMINE = EPXY [T ]− log(EPX⊗PY [eT ]) = j − log(pej + (1− p)ej) (5) LReMINE = EPXY [T ]− log(EPX⊗PY [eT ])− d(log(EPX⊗PY [eT ]), C) (6)\n= j − log(pej + (1− p)ej)− λ(log(pej + (1− p)ej)− C)2. (7)\nWe again observe the drifting phenomena, as the loss surface has a plateau of equally palatable solutions. Regularization term successfully warps the loss surface, so that it has a single solution. As λ increases, the loss surface becomes steeper, resulting in sporadic spikes for each gradient step.\n2We release our code in" }, { "heading": "5.2 REMINE IN THE CONTINUOUS DOMAIN", "text": "20-D Gaussian dataset. We sampled (x, y) from d-dimensional correlated Gaussian dataset where X ∼ N(0, Id) and Y ∼ N(ρX, (1 − ρ2)Id) given the correlation parameter 0 ≤ ρ < 1, which is taken from Belghazi et al. (2018) to test ReMINE on continuous random variables. The true MI for the dataset is I(X,Y ) = −d2 log(1− ρ 2). In these experiments, we set batch size to 64.\nNetwork settings. We consider a joint architecture, which concatenates the inputs (x, y), and then passes through three fully connected layers with ReLU activation (excluding the output layer) by widths 40 − 256 − 256 − 1, same as the network used in Poole et al. (2019). We used Adam optimizer with learning rate 5× 10−4, β1 = 0.9 and β2 = 0.999.\nComparison to state-of-the-arts. As mentioned in Fig. 4b, we can remove the log(EPX⊗PY [eT ]) term by choosing C = 0. As discussed in Section 3.2, the second term is inherently noisy. Hence, using all the terms in ReMINE only in optimization and removing the second term in estimation can effectively reduce noise. We call this trick ReMINE-J.\nTo verify the quality of lower bounds, we compare ReMINE and ReMINE-J to InfoNCE (Oord et al., 2018), JS (Hjelm et al., 2019; Poole et al., 2019), MINE (Belghazi et al., 2018), NWJ (Nguyen et al., 2010), SMILE (Song & Ermon, 2020), SMILE+JS (which estimates with SMILE, and optimizes with JS), TUBA (Barber & Agakov, 2004; Poole et al., 2019) and Iα (Poole et al., 2019). To make a fair comparison, ReMINE also uses the macro-averaging strategy, the same as the other methods. Our methods show comparable or better estimation performance with less variance than others, as shown in Fig. 6. Exact values for bias, variance, and mean square error to the true MI for each estimator are shown in Appendix." }, { "heading": "5.3 REMINE IN THE IMAGE DOMAIN", "text": "Experiment settings. Song & Ermon (2020) introduced the self-consistency tests on image datasets to verify whether the MI estimates follow the basic properties of MI. For our experiments, we used the same network and optimizer, types of tests, and the number of epochs of Song & Ermon (2020) with batch size 16. We did not use our micro-averaging strategy for a fair comparison.\nComparison on self-consistency tests. To compare the stability of DV-based estimators, we conducted self-consistency tests on ReMINE, MINE, SMILE, and SMILE+JS. For type 1, every estimator successfully returns values between the theoretical bound with an increasing trend. 
For type 2, only ReMINE and SMILE+JS estimates are close to the ideal value. For type 3, none of the estimators worked well. However, ReMINE shows smaller variance compared to MINE and has similar stability to SMILE+JS." }, { "heading": "6 CONCLUSION", "text": "In this paper, we studied how the neural network inside MINE handles the MI estimation problem. We delved into the drifting problem, where two terms of DV continue to fluctuate together even after the MI estimate converges, and the explosion problem, where the network outputs become unstable due to properties of the second term in DV when batch size is small. Based on the analysis, we penalized the objective function for obtaining a unique solution by using L2 regularization. Despite the simplicity, the proposed loss and the micro-averaging strategy mitigate drifting, exploding, and batch size limitation problems. Further, ReMINE enables us to directly interpret the network output values as the log-likelihood ratio of joint and marginal distribution probability and performs favorably against state-of-the-art methods. However, further investigation needs to be done on the impact of optimizers on the batch size limitation, and why DV-based estimators fail in some of the self consistency tests." }, { "heading": "7 APPENDIX: PROOFS", "text": "" }, { "heading": "7.1 PROOF OF FAMILY OF OPTIMAL FUNCTIONS", "text": "Theorem. For any constant C ∈ R, T = log dPdQ +C satisfiesDKL(P||Q) = EP[T ]− log(EQ[e T ]).\nProof. Suppose that T = log dPdQ +C. We can write the function T = (T ∗ −C∗) +C by Lemma 3 in the manuscript. Therefore,\nEP[T ] = EP[T ∗ − C∗ + C] = EP[T ∗]− C∗ + C,\nand\nlog(EQ[eT ]) = log(EQ[eT ∗−C∗+C ])\n= log(eC−C ∗ EQ[eT ∗ ]) = (C − C∗) + log(EQ[eT ∗ ]).\nSince EP[T ]− log(EQ[eT ]) = EP[T ∗]− log(EQ[eT ∗ ]), the function T also optimal." }, { "heading": "7.2 PROOF OF REMINE LOSS FUNCTION", "text": "Theorem. Let d be a distance function on R. For any constant C ′ ∈ R and function T : Ω→ R,\nDKL(P||Q) = sup T :Ω→R EP[T ]− log(EQ[eT ])− d(log(EQ[eT ]), C ′),\nProof. i) For any T ,\nEP[T ]− log(EQ[eT ])− d(log(EQ[eT ]), C ′) ≤ EP[T ]− log(EQ[eT ]).\nTherefore, supT :Ω→R EP[T ]− log(EQ[eT ])− d(log(EQ[eT ]), C ′) ≤ DKL(P||Q).\nii) By the theorem above, there exists T ∗ = log dPdQ + C ′ such that\nDKL(P||Q) = EP[T ∗]− log(EQ[eT ∗ ])\nand\nlog(EQ[eT ∗ ]) = log(EQ[eC ′ dP dQ ]) = log( ∫ eC ′ dP dQ dQ) = C ′.\nTherefore,\nsup T :Ω→R\nEP[T ]− log(EQ[eT ])− d(log(EQ[eT ]), C ′) ≥ EP[T ∗]− log(EQ[eT ∗ ])− d(log(EQ[eT ∗ ]), C ′)\n= DKL(P||Q)\nCombining i) and ii) finishes the proof." }, { "heading": "7.3 PROOF OF ESTIMATION BIAS CAUSED BY DRIFTING", "text": "Theorem. (Estimation Bias caused by Drifting) The two averaging strategies below produce a biased MI estimate when the drifting problem occurs.\n1. Macro-averaging (similar to that of Poole et al. (2019)): Establish a single estimate through the average of estimated MI from each batch.\n2. Micro-averaging (our method): Calculate DV representation using the average of the each individual network outputs.\nProof. Let the outputs of ith batch, jth sample inside the batch as T (J)ij , T (M) ij , joint and marginal case respectively, and the output without drifting as T ∗ij , and drifting constant for each batch Ci. Then, Tij = T ∗ij + Ci.\nWhen the number of batch is B and each batch size is N ,\n1. 
Macro averaging:\n1 B Σi[ 1 N ΣjT (J) ij − log( 1 N Σj expT (M) ij )] (8)\n= 1\nB Σi[\n1\nN Σj(T\n(J∗) ij + Ci)− log(\n1\nN Σj exp(T\n(M∗) ij + Ci))] (9)\n= 1\nB Σi[\n1\nN Σj(T\n(J∗) ij + Ci)− log(\n1\nN expCiΣj expT\n(M∗) ij )] (10)\n= 1\nB Σi[\n1\nN ΣjT\n(J∗) ij − log(exp(−Ci)\n1\nN expCiΣj expT\n(M∗) ij )] (11)\n= 1\nB Σi[\n1\nN ΣjT\n(J∗) ij − log(\n1\nN Σj expT\n(M∗) ij )] (12)\n= 1\nNB ΣijT\n(J∗) ij −\n1 B Σi[log( 1 N Σj expT (M∗) ij )] (13)\n6= 1 NB ΣijT (J∗) ij − log( 1 NB Σij expT (M∗) ij ) (14)\n2. Micro averaging:\n1\nNB ΣijT\n(J) ij − log(\n1\nNB Σij expT\n(M) ij ) (15)\n= 1\nNB Σij(T\n(J∗) ij + Ci)− log(\n1\nNB Σij exp (T\n(M∗) ij + Ci))) (16)\n= 1\nNB ΣijT\n(J∗) ij − log[(\n1\nNB Σij exp (T\n(M∗) ij + Ci)) 1 BΣiCi ] (17)\n6= 1 NB ΣijT (J∗) ij − log( 1 NB Σij expT (M∗) ij ) (18)" }, { "heading": "8 APPENDIX: ADDITIONAL EXPLANATIONS", "text": "Additional explanations for Fig. 4b As the N -dimensional one-hot discrete dataset is uniform, we can easily calculate the likelihood ratio of joint and non-joint case samples. For all the possible samples, PX ⊗ PY = 1/N2, as they are total of N2. Also, for joint case samples, PXY = 1/N , and PXY = 0 for non-joint case samples. Hence, the likelihood ratio for the joint cases is N , and non-joint cases is 0. These are consistent with the experimental results, where j converges to logN , and j keeps decreasing. Nonetheless, as exp(j) gets closer to zero, the second term of ReMINE loss has lesser influence; hence the decreasing speed of j gets slowed down to a halt as it reaches − 12λ = −5.\nWe can explain the same result from the perspective of j and j. As we observed in Section 3.2, the network output values of joint and non-joint cases converge to j and j, respectively. Since the dataset is uniform, the probability p of joint cases appearing from the marginal samples is 1N . Therefore, we can analyze the value of j and j after convergence as follows: as iteration i→∞,\nEPXY [T (i)] = j → I(X;Y ) + C = logN + C (19)\nlog(EPX⊗PY [eT (i)]) = log(pej + (1− p)ej) = log( 1\nN ej + N − 1 N ej)→ C (20)\nwhere T (i) is the statistics network at iteration i. We combine Eq. (19) and Eq. (20) to\n1\nN elogN+C + N − 1 N ej → eC , and (21)\nej → 0. (22)\nIn summary, j will converge to logN +C, and ej to 0, as shown in Fig. 4b. Note that j and j serves as a back-of-the-envelope calculation for us to estimate network outputs easily on discrete settings.\nWhat happens if the batch size is small? When the batch size is 1 and C = 0, the loss function of ReMINE changes its characteristics as follows.\n• Joint case occurs. As the samples are indistinguishable,\nL = j − log ej − d(j, 0) = −λj2, (23)\nwhich is maximized when j = 0. • Non-joint case occurs.\nL = j − log ej + d(log ej , 0) = j − j − λj2. (24)\nThe latter quadratic term of j is maximized when j = − 12λ .\nIf the statistics network succeeds to converge on both cases for our one-hot discrete dataset,\nÎ(X;Y ) = E(n) P̂XY [Tθ]− logE(n)̂PX⊗PY [e Tθ ] = 0− log(pe0 + (1− p)e− 12λ )→ − log p (25)\nwhen λ→ +0. As p = 1N , Î(X;Y )→ logN . Intuitively, on smaller batch sizes, joint cases cannot occur in marginal samples, as mentioned in Section 3.2. Hence, EPXY [T ] and log(EPX⊗PY [eT ]) behave differently compared to the larger batch size. The regularizer term penalizes both terms in different ways. Joint cases in marginal samples can contribute only with Eq. (23), so EPXY [T ]→ 0. Moreover, as λ gets smaller, log(EPX⊗PY [eT ]) gets regularized less so that it can converge to −Î(X;Y ). 
In contrast, since MINE has no regularization term, namely λ = 0, there is no way for the joint case in marginal samples to influence T , hence failing to estimate MI as shown in Fig. 8b.\nImpact of λwith batch size. We inspect the relationship between batch size and λ in detail. Fig. 8 shows that imposing regularization reduces noise on a large batch size domain. However, on a small batch size domain, log(EPX⊗PY [eT ]) cannot have nonzero value, hence failing to estimate MI value. The effect of the ReMINE loss in two different domains gets mixed in between.\nVisualizing network outputs on 1-D Gaussian. The dataset forbids us to label joint and non-joint samples explicitly, so we visualized the network outputs on 2-D plane. We used the same experiment settings as Section 3.1, only changing the input dimension to 2. We can see in Fig. 9 that the network outputs of the overlapping region remain near 0, which indicates that the likelihood is equal between joint and marginal distribution. Other regions are separated by the sign of their outputs. Positive network output means that joint distribution is more probable than marginal distribution to sample that data point, and vice versa." }, { "heading": "9 APPENDIX: ADDITIONAL EXPERIMENTAL DETAILS", "text": "Additional experiments outputs on 20-D Gaussian. For quantitative comparison with other approaches on the 20-D correlated Gaussian dataset, we show the bias, variance and mean squared error (MSE) of the neural network-based and nearest neighbor-based MI estimators in Fig. 10, Fig. 11 and Table 1. We omitted values which are more than 100 in Table 1. We additionally show results from KL (Jiao et al., 2018), Mixed KSG (Gao et al., 2017), CCMI (Mukherjee et al., 2020), TNCE (Oord et al., 2018; Poole et al., 2019) and ReMINE-L1 (our method with L1 regularization).\nBoth ReMINE and ReMINE-J shows comparable or better performance compared to other methods. Note that L1 regularizer also suffers from the explosion problem, as the gradient is not adaptively adjusted by the magnitude of network outputs, as discussed in Section 5.1.\nAdditional experiments on the self consistency test. We report the performance of other variational bound methods (JS, Iα, InfoNCE) in Fig. 12. Iα and JS often result in unstable MI estimates, as shown in the type 2 experiment. On the other hand, InfoNCE estimates MI quite reliably but also fails for type 3.\nTo observe on a different dataset setting, we also used MNIST (LeCun et al., 1998). As shown in Fig. 13, tests yielded similar results.\nTo observe on a different statistics network, we used modified ResNet18 (He et al., 2016) that outputs a single scalar. As shown in Fig. 14, SMILE has become more unstable, but there are no significant differences in other variational bounds. This experiment shows that the network size has a small impact on the validity of this test on CIFAR-10.\nEffectiveness on the Conditional Mutual Information Estimation Task We compare the performance of various estimators on the conditional MI (CMI) estimation task. To set the baseline, we chose CCMI (Mukherjee et al., 2020), MINE (Belghazi et al., 2018), and KSG estimator (Kraskov et al., 2004). The Experiment is performed under the Model 1 setting in Mukherjee et al. (2020). We refer to the supplementary of Mukherjee et al. (2020) for hyper-parameter settings such as network structures and optimizer parameters. We only changed the objective function of MINE to test our method. As shown in Fig. 
15, ReMINE reaches comparable performance without recasting the objective as a classification loss. ReMINE also produces more stable estimates than MINE." } ]
2020
null
SP:b1c7e0c9656a0ec0399b6602f89f46323ff3436b
[ "The paper proposes contextual dropout as a sample-dependent dropout module, which can be applied to different models at the expense of marginal memory and computational overhead. The authors chose to focus on visual question answering and image classification tasks. The results in the paper show that contextual dropout can improve accuracy on the ImageNet and VQA 2.0 datasets." ]
Dropout has been demonstrated as a simple and effective module to not only regularize the training process of deep neural networks, but also provide the uncertainty estimation for prediction. However, the quality of uncertainty estimation is highly dependent on the dropout probabilities. Most current models use the same dropout distributions across all data samples due to its simplicity. Despite the potential gains in the flexibility of modeling uncertainty, sample-dependent dropout, on the other hand, is less explored as it often encounters scalability issues or involves non-trivial model changes. In this paper, we propose contextual dropout with an efficient structural design as a simple and scalable sample-dependent dropout module, which can be applied to a wide range of models at the expense of only slightly increased memory and computational cost. We learn the dropout probabilities with a variational objective, compatible with both Bernoulli dropout and Gaussian dropout. We apply the contextual dropout module to various models with applications to image classification and visual question answering and demonstrate the scalability of the method with large-scale datasets, such as ImageNet and VQA 2.0. Our experimental results show that the proposed method outperforms baseline methods in terms of both accuracy and quality of uncertainty estimation.
[ { "affiliations": [], "name": "Xinjie Fan" }, { "affiliations": [], "name": "Shujian Zhang" }, { "affiliations": [], "name": "Korawat Tanwisuth" }, { "affiliations": [], "name": "Xiaoning Qian" }, { "affiliations": [], "name": "Mingyuan Zhou" } ]
[ { "authors": [ "Jimmy Ba", "Brendan Frey" ], "title": "Adaptive dropout for training deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau", "Doina Precup" ], "title": "Conditional computation in neural networks for faster models", "venue": "arXiv preprint arXiv:1511.06297,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D McAuliffe" ], "title": "Variational inference: A review for statisticians", "venue": "Journal of the American Statistical Association,", "year": 2017 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Shahin Boluki", "Randy Ardywibowo", "Siamak Zamani Dadaneh", "Mingyuan Zhou", "Xiaoning Qian" ], "title": "Learnable Bernoulli dropout for Bayesian deep learning", "venue": "In Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Yuntian Deng", "Yoon Kim", "Justin Chiu", "Demi Guo", "Alexander Rush" ], "title": "Latent alignment and variational attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhe Dong", "Andriy Mnih", "George Tucker" ], "title": "DisARM: An antithetic gradient estimator for binary latent variables", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Xinjie Fan", "Shujian Zhang", "Bo Chen", "Mingyuan Zhou" ], "title": "Bayesian attention modules", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Yarin Gal", "Jiri Hron", "Alex Kendall" ], "title": "Concrete dropout", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Asghar Ghasemi", "Saleh Zahediasl" ], "title": "Normality tests for statistical analysis: A guide for non-statisticians", "venue": "International journal of endocrinology and metabolism,", "year": 2012 }, { "authors": [ "Yash Goyal", "Tejas Khot", "Douglas Summers-Stay", "Dhruv Batra", "Devi Parikh" ], "title": "Making the v in vqa matter: Elevating the role of image understanding in visual question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 
2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Matthew D Hoffman", "David M Blei", "Chong Wang", "John Paisley" ], "title": "Stochastic variational inference", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Max Welling" ], "title": "Variational dropout and the local reparameterization trick", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Volodymyr Kuleshov", "Nathan Fenner", "Stefano Ermon" ], "title": "Accurate uncertainties for deep learning using calibrated regression", "venue": "arXiv preprint arXiv:1807.00263,", "year": 2018 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hugo Larochelle", "Dumitru Erhan", "Aaron Courville", "James Bergstra", "Yoshua Bengio" ], "title": "An empirical evaluation of deep architectures on problems with many factors of variation", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "MNIST handwritten digit database", "venue": "AT&T Labs [Online]. Available: http://yann. lecun. 
com/exdb/mnist,", "year": 2010 }, { "authors": [ "Chunyuan Li", "Changyou Chen", "David Carlson", "Lawrence Carin" ], "title": "Preconditioned stochastic gradient Langevin dynamics for deep neural networks", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Yang Li", "Shihao Ji" ], "title": "L0-ARM: Network sparsification via stochastic binary optimization", "venue": "In The European Conference on Machine Learning (ECML),", "year": 2019 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative normalizing flows for variational Bayesian neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jishnu Mukhoti", "Yarin Gal" ], "title": "Evaluating bayesian deep learning methods for semantic segmentation", "venue": "arXiv preprint arXiv:1811.12709,", "year": 2018 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using Bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks, volume 118", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Yurii E Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o (1/kˆ 2)", "venue": "In Dokl. akad. nauk Sssr,", "year": 1983 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster R-CNN: Towards real-time object detection with region proposal networks. 
In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Jiaxin Shi", "Shengyang Sun", "Jun Zhu" ], "title": "Kernel implicit variational inference", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Ravi Teja Mullapudi", "William R Mark", "Noam Shazeer", "Kayvon Fatahalian" ], "title": "Hydranets: Specialized dynamic architectures for efficient inference", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Damien Teney", "Peter Anderson", "Xiaodong He", "Anton Van Den Hengel" ], "title": "Tips and tricks for visual question answering: Learnings from the 2017 challenge", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jonathan Tompson", "Ross Goroshin", "Arjun Jain", "Yann LeCun", "Christoph Bregler" ], "title": "Efficient object localization using convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann Le Cun", "Rob Fergus" ], "title": "Regularization of neural networks using dropconnect", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Zhendong Wang", "Mingyuan Zhou" ], "title": "Thompson sampling via local uncertainty", "venue": "arXiv preprint arXiv:1910.13673,", "year": 2019 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "In Reinforcement Learning,", "year": 1992 }, { "authors": [ "Bing Xu", "Naiyan Wang", "Tianqi Chen", "Mu Li" ], "title": "Empirical evaluation of rectified activations in convolutional network", "venue": "arXiv preprint arXiv:1505.00853,", "year": 2015 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Mingzhang Yin", "Mingyuan Zhou" ], "title": 
"ARM: Augment-REINFORCE-merge gradient for discrete latent variable models", "venue": null, "year": 2018 }, { "authors": [ "Mingzhang Yin", "Nhat Ho", "Bowei Yan", "Xiaoning Qian", "Mingyuan Zhou" ], "title": "Probabilistic Best Subset Selection by Gradient-Based Optimization", "venue": "arXiv e-prints,", "year": 2020 }, { "authors": [ "Zhou Yu", "Jun Yu", "Yuhao Cui", "Dacheng Tao", "Qi Tian" ], "title": "Deep modular co-attention networks for visual question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Yu" ], "title": "2019) and the standard deviation parameter of 1/3 for Gaussian dropout", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (NNs) have become ubiquitous and achieved state-of-the-art results in a wide variety of research problems (LeCun et al., 2015). To prevent over-parameterized NNs from overfitting, we often need to appropriately regularize their training. One way to do so is to use Bayesian NNs that treat the NN weights as random variables and regularize them with appropriate prior distributions (MacKay, 1992; Neal, 2012). More importantly, we can obtain the model’s confidence on its predictions by evaluating the consistency between the predictions that are conditioned on different posterior samples of the NN weights. However, despite significant recent efforts in developing various types of approximate inference for Bayesian NNs (Graves, 2011; Welling & Teh, 2011; Li et al., 2016; Blundell et al., 2015; Louizos & Welling, 2017; Shi et al., 2018), the large number of NN weights makes it difficult to scale to real-world applications.\nDropout has been demonstrated as another effective regularization strategy, which can be viewed as imposing a distribution over the NN weights (Gal & Ghahramani, 2016). Relating dropout to Bayesian inference provides a much simpler and more efficient way than using vanilla Bayesian NNs to provide uncertainty estimation (Gal & Ghahramani, 2016), as there is no more need to explicitly instantiate multiple sets of NN weights. For example, Bernoulli dropout randomly shuts down neurons during training (Hinton et al., 2012; Srivastava et al., 2014). Gaussian dropout multiplies the neurons with independent, and identically distributed (iid) Gaussian random variables drawn from N (1, α), where the variance α is a tuning parameter (Srivastava et al., 2014). Variational dropout generalizes Gaussian dropout by reformulating it under a Bayesian setting and allowing α to be learned under a variational objective (Kingma et al., 2015; Molchanov et al., 2017).\n∗ Equal contribution. Corresponding to: mingyuan.zhou@mccombs.utexas.edu\nHowever, the quality of uncertainty estimation depends heavily on the dropout probabilities (Gal et al., 2017). To avoid grid-search over the dropout probabilities, Gal et al. (2017) and Boluki et al. (2020) propose to automatically learn the dropout probabilities, which not only leads to a faster experiment cycle but also enables the model to have different dropout probabilities for each layer, bringing greater flexibility into uncertainty modeling. But, these methods still impose the restrictive assumption that dropout probabilities are global parameters shared across all data samples. By contrast, we consider parameterizing dropout probabilities as a function of input covariates, treating them as data-dependent local variables. Applying covariate-dependent dropouts allows different data to have different distributions over the NN weights. This generalization has the potential to greatly enhance the expressiveness of a Bayesian NN. However, learning covariate-dependent dropout rates is challenging. Ba & Frey (2013) propose standout, where a binary belief network is laid over the original network, and develop a heuristic approximation to optimize free energy. But, as pointed out by Gal et al. (2017), it is not scalable due to its need to significantly increase the model size.\nIn this paper, we propose a simple and scalable contextual dropout module, whose dropout rates depend on the covariates x, as a new approximate Bayesian inference method for NNs. 
With a novel design that reuses the main network to define how the covariate-dependent dropout rates are produced, it boosts the performance while only slightly increases the memory and computational cost. Our method greatly enhances the flexibility of modeling, maintains the inherent advantages of dropout over conventional Bayesian NNs, and is generally simple to implement and scalable to the large-scale applications. We plug the contextual dropout module into various types of NN layers, including fully connected, convolutional, and attention layers. On a variety of supervised learning tasks, contextual dropout achieves good performance in terms of accuracy and quality of uncertainty estimation." }, { "heading": "2 CONTEXTUAL DROPOUT", "text": "We introduce an efficient solution for data-dependent dropout: (1) treat the dropout probabilities as sample-dependent local random variables, (2) propose an efficient parameterization of dropout probabilities by sharing parameters between the encoder and decoder, and (3) learn the dropout distribution with a variational objective." }, { "heading": "2.1 BACKGROUND ON DROPOUT MODULES", "text": "Consider a supervised learning problem with training data D := {xi, yi}Ni=1, where we model the conditional probability pθ(yi |xi) using a NN parameterized by θ. Applying dropout to a NN often means element-wisely reweighing each layer with a data-specific Bernoulli/Gaussian distributed random mask zi, which are iid drawn from a prior pη(z) parameterized by η (Hinton et al., 2012; Srivastava et al., 2014). This implies dropout training can be viewed as approximate Bayesian inference (Gal & Ghahramani, 2016). More specifically, one may view the learning objective of a supervised learning model with dropout as a log-marginal-likelihood: log ∫ ∏N i=1 p(yi |xi, z)p(z)dz. To maximize this often intractable log-marginal, it is common to resort to variational inference (Hoffman et al., 2013; Blei et al., 2017) that introduces a variational distribution q(z) on the random mask z and optimizes an evidence lower bound (ELBO):\nL(D) = Eq(z) [ log ∏N i=1 pθ(yi |xi,z)pη(z)\nq(z)\n] = (∑N i=1 Ezi∼q(z) [log pθ(yi |xi, zi)] ) − KL(q(z)||pη(z)), (1)\nwhere KL(q(z)||pη(z)) = Eq(z)[log q(z)− log p(z)] is a Kullback–Leibler (KL) divergence based regularization term. Whether the KL term is explicitly imposed is a key distinction between regular dropout (Hinton et al., 2012; Srivastava et al., 2014) and their Bayesian generalizations (Gal & Ghahramani, 2016; Gal et al., 2017; Kingma et al., 2015; Molchanov et al., 2017; Boluki et al., 2020)." }, { "heading": "2.2 COVARIATE-DEPENDENT WEIGHT UNCERTAINTY", "text": "In regular dropout, as shown in (1), while we make the dropout masks data specific during optimization, we keep their distributions the same. This implies that while the NN weights can vary from data to data, their distribution is kept data invariant. In this paper, we propose contextual dropout, in which the distributions of dropout masks zi depend on covariates xi for each sample (xi, yi). Specifically, we define the variational distribution as qφ(zi |xi), where φ denotes its NN parameters. In the framework of amortized variational Bayes (Kingma & Welling, 2013; Rezende\net al., 2014), we can view qφ as an inference network (encoder) trying to approximate the posterior p(zi | yi,xi) ∝ p(yi |xi, zi)p(zi). Note as we have no access to yi during testing, we parameterize our encoder in a way that it depends on xi but not yi. 
From the optimization point of view, what we propose corresponds to the ELBO of log ∏N i=1 ∫ p(yi |xi, zi)p(zi)dzi given qφ(zi |xi) as the encoder, which can be expressed as\nL(D) = ∑N i=1 L(xi, yi), L(xi, yi) = Ezi∼qφ(· |xi)[log pθ(yi |xi,zi)]− KL(qφ(zi |xi)||pη(zi)). (2)\nThis ELBO differs from that of regular dropout in (1) in that the dropout distributions for zi are now parameterized by xi and a single KL regularization term is replaced with the aggregation of N data-dependent KL terms. Unlike conventional Bayesian NNs, as zi is now a local random variable, the impact of the KL terms will not diminish as N increases, and from the viewpoint of uncertainty quantification, contextual dropout relies only on aleatoric uncertainty to model its uncertainty on yi given xi. Like conventional BNNs, we may add epistemic uncertainty by imposing a prior distribution on θ and/or φ, and infer their posterior given D. As contextual dropout with a point estimate on both θ and φ is already achieving state-of-the-art performance, we leave that extension for future research. In what follows, we omit the data index i for simplification and formally define its model structure.\nCross-layer dependence: For a NN with L layers, we denote z = {z1, . . . ,zL}, with zl representing the dropout masks at layer l. As we expect zl to be dependent on the dropout masks in previous layers {zj}j<l, we introduce an autoregressive distribution as qφ(z |x) = ∏L l=1 qφ(z\nl |xl−1), where xl−1, the output of layer l − 1, is a function of {z1, . . . ,zl−1,x}. Parameter sharing between encoder and decoder: We aim to build an encoder by modeling qφ(zl |xl−1), where x may come from complex and highly structured data such as images and natural languages. Thus, extracting useful features from x to learn the encoder distribution qφ itself becomes a problem as challenging as the original one, i.e., extracting discriminative features from x to predict y. As intermediate layers in the decoder network pθ are already learning useful features from the input, we choose to reuse them in the encoder, instead of extracting the features from scratch. If we denote layer l of the decoder network by glθ, then the output of layer l, given its input xl−1, would be Ul = glθ(x\nl−1). Considering this as a learned feature for x, as illustrated in Figure 1, we build the encoder on this output as\nαl = hlϕ(U l), draw zl conditioning on αl, and element-wisely multiply zl with Ul (with broadcast if needed) to produce the output of layer l as xl. In this way, we use {θ,ϕ} to parameterize the encoder, which reuses parameters θ of the decoder. To produce the dropout rates of the encoder, we only need extra parameters ϕ, the added memory and computational cost of which are often insignificant in comparison to these of the decoder." }, { "heading": "2.3 EFFICIENT PARAMETERIZATION OF CONTEXTUAL DROPOUT MODULE", "text": "Denote the output of layer l by a multidimensional array (tensor) Ul = glθ(x l−1) ∈ RC l 1×...×C l\nDl , where Dl denotes the number of the dimensions of Ul and Cld denotes the number of elements along dimension d ∈ {1, . . . , Dl}. For efficiency, the output shape of hlϕ is not matched to the shape of Ul. Instead, we make it smaller and broadcast the contextual dropout masks zl across the dimensions of Ul (Tompson et al., 2015). Specifically, we parameterize dropout logits αl of the variational distribution to have Cld elements, where d ∈ {1, ...., Dl} is a specified dimension of Ul. 
We sample zl from the encoder and broadcast them across all but dimension d of Ul. We sample zl ∼ Ber(σ(αl)) under contextual Bernoulli dropout, and follow Srivastava et al. (2014) to use zl ∼ N(1, σ(αl)/(1− σ(αl))) for contextual Gaussian dropout. To obtain αl ∈ RCld , we first take the average pooling of Ul across all but dimension d, with the output denoted as Favepool,d(Ul), and then apply two fully-connected layers Φl1 and Φ l 2 connected by FNL, a (Leaky) ReLU based nonlinear activation function, as\nαl = hlϕ(U l) = Φl2(FNL(Φ l 1(Favepool,d(U l)))), (3)\nwhere Φl1 is a linear transformation mapping from RC l d to RCld/γ , while Φl2 is from RC l d/γ back to RCld , with γ being a reduction ratio controlling the complexity of hlϕ. Below we describe how to apply contextual dropout to three representative types of NN layers.\nContextual dropout module for fully-connected layers2: If layer l is a fully-connected layer and Ul ∈ RC l 1×···×C l Dl , we set αl ∈ RC l\nDl , where Dl is the dimension that the linear transformation is applied to. Note, if Ul ∈ RCl1 , then αl ∈ RCl1 , and Favepool,1 is an identity map, so αl = Φl2(FNL(Φl1(Ul))). Contextual dropout module for convolutional layers: Assume layer l is a convolutional layer with Cl3 as convolutional channels and U\nl ∈ RCl1×Cl2×Cl3 . Similar to Spatial Dropout (Tompson et al., 2015), we set αl ∈ RCl3 and broadcast its corresponding zl spatially as illustrated in Figure 2. Such parameterization is similar to the squeeze-and-excitation unit for convolutional layers, which has been shown to be effective in image classification tasks (Hu et al., 2018). However, in squeeze-andexcitation, σ(αl) is used as channel-wise soft attention weights instead of dropout probabilities, therefore it serves as a deterministic mapping in the model instead of a stochastic unit used in the inference network.\nContextual dropout module for attention layers: Dropout has been widely used in attention layers (Xu et al., 2015b; Vaswani et al., 2017; Yu et al., 2019). For example, it can be applied to multi-head attention weights after the softmax operation (see illustrations in Figure 2). The weights are of dimension [H,NK , NQ], where H is the number of heads, NK the number of keys, and NQ the number of queries. In this case, we find that setting αl ∈ RH gives good performance. Intuitively, this coincides with the choice of channel dimension for convolutional layers, as heads in attention could be analogized as channels in convolution." }, { "heading": "2.4 VARIATIONAL INFERENCE FOR CONTEXTUAL DROPOUT", "text": "In contextual dropout, we choose L(D) = ∑\n(x,y)∈D L(x, y) shown in (2) as the optimization objective. Note in our design, the encoder qφ reuses the decoder parameters θ to define its own parameters. Therefore, we copy the values of θ into φ and stop the gradient of θ when optimizing qφ. This is theoretically sound (Ba & Frey, 2013). Intuitively, the gradients to θ from pθ are less noisy than that from qφ as the training of pθ(y |x, z) is supervised while that of qφ(z) is unsupervised. As what we have expected, allowing gradients from qφ to backpropagate to θ is found to adversely affect the training of pθ in our experiments. We use a simple prior pη, making the prior distributions for dropout masks the same within each layer. 
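Before turning to the gradients, here is a minimal PyTorch sketch of the contextual dropout module of Eq. (3) for a convolutional layer. It is an illustrative sketch under stated assumptions (the class name, reduction-ratio default, and numerical clamping are ours, not the authors' released code), and the Bernoulli branch omits the ARM machinery of Appendix A that is needed to backpropagate through the discrete mask.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualDropout2d(nn.Module):
    """Sketch of the contextual dropout encoder of Eq. (3) for a convolutional layer.

    For a decoder output U of shape (B, C, H, W) it computes per-sample, per-channel
    logits alpha = Phi_2(LeakyReLU(Phi_1(avgpool(U)))), samples a mask z from the
    stated Bernoulli or Gaussian family, and broadcasts z over the spatial dimensions.
    """
    def __init__(self, channels, reduction=16, gaussian=False):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc1 = nn.Linear(channels, hidden)   # Phi_1: C -> C / gamma
        self.fc2 = nn.Linear(hidden, channels)   # Phi_2: C / gamma -> C
        self.gaussian = gaussian

    def logits(self, u):
        pooled = u.mean(dim=(2, 3))                          # average pool over H, W
        return self.fc2(F.leaky_relu(self.fc1(pooled)))      # alpha, shape (B, C)

    def forward(self, u):
        b, c, _, _ = u.shape
        p = torch.sigmoid(self.logits(u))
        if not self.training:
            # test time: multiply by the expected mask E[z] (Section 2.5)
            z_bar = torch.ones_like(p) if self.gaussian else p
            return u * z_bar.view(b, c, 1, 1)
        if self.gaussian:
            # z ~ N(1, sigma(alpha) / (1 - sigma(alpha))), reparameterized as in Eq. (5)
            std = torch.sqrt(p / (1.0 - p).clamp_min(1e-6))
            z = 1.0 + std * torch.randn_like(std)
        else:
            # z ~ Ber(sigma(alpha)); gradients w.r.t. alpha require the ARM estimator
            # of Appendix A, which is not shown here
            z = torch.bernoulli(p)
        return u * z.view(b, c, 1, 1)
```

The extra parameters are only the two small fully-connected layers, consistent with the O((C^l_d)^2/γ) overhead noted in Section 2.5.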
The gradients with respect to η and θ can be expressed as\n∇ηL(x, y) = Ez∼qφ(· |x)[∇η log pη(z)], ∇θL(x, y) = Ez∼qφ(· |x)[∇θ log pθ(y |x, z)], (4)\nwhich are both estimated via Monte Carlo integration, using a single z ∼ qφ(z |x) for each x. Now, we consider the gradient of L with respect toϕ, the components of φ = {θ,ϕ} not copied from the decoder. For Gaussian contextual dropout, we estimate the gradients via the reparameterization\n2Note that full-connected layers can be applied to multi-dimensional tensor as long as we specify the dimension along which the summation operation is conducted (Abadi et al., 2015).\ntrick (Kingma & Welling, 2013). For zl ∼ N(1, σ(αl)/(1 − σ(αl))), we rewrite it as zl = 1 + √ σ(αl)/(1− σ(αl)) l, where l ∼ N (0, I). Similarly, sampling a sequence of z = {zl}Ll=1 from qφ(z |x) can be rewritten as fφ( ,x), where fφ is a deterministic differentiable mapping and are iid standard Gaussian. The gradient ∇ϕL(x, y) can now be expressed as (see pseudo code of Algorithm 3 in Appendix)\n∇ϕL(x, y) = E ∼N (0,1)[∇ϕ(log pθ(y |x, fφ( ,x))− log qφ(fφ( ,x) |x) log pη(fφ( ,x)) )]. (5)\nFor Bernoulli contextual dropout, backpropagating the gradient efficiently is not straightforward, as the Bernoulli distribution is not reparameterizable, restricting the use of the reparameterization trick. In this case, a commonly used gradient estimator is the REINFORCE estimator (Williams, 1992) (see details in Appendix A). This estimator, however, is known to have high Monte Carlo estimation variance. To this end, we estimate ∇ϕL with the augment-REINFORCE-merge (ARM) estimator (Yin & Zhou, 2018), which provides unbiased and low-variance gradients for the parameters of Bernoulli distributions. We defer the details of this estimator to Appendix A. We note there exists an improved ARM estimator (Yin et al., 2020; Dong et al., 2020), applying which could further improve the performance." }, { "heading": "2.5 TESTING AND COMPLEXITY ANALYSIS", "text": "Testing stage: To obtain a point estimate, we follow the common practice in dropout (Srivastava et al., 2014) to multiply the neurons by the expected values of random dropout masks, which means that we predict y with pθ(y |x, z̄), where z̄ = Eqφ(z |x)[z] under the proposed contextual dropout. When uncertainty estimation is needed, we draw K random dropout masks to approximate the posterior predictive distribution of y given x using p̂(y |x) = 1K ∑K k=1 pθ(y |x, z(k)), where z(1), . . . ,z(K) iid∼ qφ(z |x).\nComplexity analysis: The added computation and memory of contextual dropout are insignificant due to the parameter sharing between the encoder and decoder. Extra memory and computational cost mainly comes from the part of hlϕ, where both the parameter size and number of operations are of order O((Cld)\n2/γ), where γ is from 8 to 16. This is insignificant, compared to the memory and computational cost of the main network, which are of order larger than O((Cld)\n2). We verify the point by providing memory and runtime comparisons between contextual dropout and other dropouts on ResNet in Table 3 (see more model size comparisons in Table 5 in Appendix)." }, { "heading": "2.6 RELATED WORK", "text": "Data-dependent variational distribution: Deng et al. (2018) model attentions as latent-alignment variables and optimize a tighter lower bound (compared to hard attention) using a learned inference network. 
To balance exploration and exploitation for contextual bandits problems, Wang & Zhou (2019) introduce local variable uncertainty under the Thompson sampling framework. However, their inference networks of are both independent of the decoder, which may considerably increase memory and computational cost for the considered applications. Fan et al. (2020) propose Bayesian attention modules with efficient parameter sharing between the encoder and decoder networks. Its scope is limited to attention units as Deng et al. (2018), while we demonstrate the general applicability of contextual dropout to fully connected, convolutional, and attention layers in supervised learning models. Conditional computation (Bengio et al., 2015; 2013; Shazeer et al., 2017; Teja Mullapudi et al., 2018) tries to increase model capacity without a proportional increase in computation, where an independent gating network decides turning which part of a network active and which inactive for each example. In contextual dropout, the encoder works much like a gating network choosing the distribution of sub-networks for each sample. But the potential gain in model capacity is even larger, e.g., there are potentially ∼ O((2d)L) combinations of nodes for L fully-connected layers, where d is the order of the number of nodes for one layer. Generalization of dropout: DropConnect (Wan et al., 2013) randomly drops the weights rather than the activations so as to generalize dropout. The dropout distributions for the weights, however, are still the same across different samples. Contextual dropout utilizes sample-dependent dropout probabilities, allowing different samples to have different dropout probabilities." }, { "heading": "3 EXPERIMENTS", "text": "Our method can be straightforwardly deployed wherever regular dropout can be utilized. To test its general applicability and scalability, we apply the proposed method to three representative types of NN layers: fully connected, convolutional, and attention layers with applications on MNIST (LeCun et al., 2010), CIFAR (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and VQA-v2 (Goyal et al., 2017). To investigate the model’s robustness to noise, we also construct noisy versions of datasets by adding Gaussian noises to image inputs (Larochelle et al., 2007).\nFor evaluation, we consider both the accuracy and uncertainty on predicting y given x. Many metrics have been proposed to evaluate the quality of uncertainty estimation. On one hand, researchers are generating calibrated probability estimates to measure model confidence (Guo et al., 2017; Naeini et al., 2015; Kuleshov et al., 2018). While expected calibration error and maximum calibration error have been proposed to quantitatively measure calibration, such metrics do not reflect how robust the probabilities are with noise injected into the network input, and cannot capture epistemic or model uncertainty (Gal & Ghahramani, 2016). On the other hand, the entropy of the predictive distribution as well as the mutual information, between the predictive distribution and posterior over network weights, are used as metrics to capture both epistemic and aleatoric uncertainty (Mukhoti & Gal, 2018). 
However, it is often unclear how large the entropy or mutual information is large enough to be classified as uncertain, so such metric only provides a relative uncertainty measure.\nHypothesis testing based uncertainty estimation: Unlike previous information theoretic metrics, we use a statistical test based method to estimate uncertainty, which works for both single-label and multi-label classification models. One advantage of using hypothesis testing over information theoretic metrics is that the p-value of the test can be more interpretable, making it easier to be deployed in practice to obtain a binary uncertainty decision. To quantify how confident our model is about this prediction, we evaluate whether the difference between the empirical distributions of the two most possible classes from multiple posterior samples is statistically significant. Please see Appendix D for a detailed explanation of the test procedure.\nUncertainty evaluation via PAvPU: With the p-value of the testing result and a given p-value threshold, we can determine whether the model is certain or uncertain about one prediction. To evaluate the uncertainty estimates, we uses Patch Accuracy vs Patch Uncertainty (PAvPU) (Mukhoti & Gal, 2018), which is defined as PAvPU = (nac + niu)/(nac + nau + nic + niu), where nac, nau, nic, niu are the numbers of accurate and certain, accurate and uncertain, inaccurate and certain, inaccurate and uncertain samples, respectively. This PAvPU evaluation metric would be higher if the model tends to generate the accurate prediction with high certainty and inaccurate prediction with high uncertainty." }, { "heading": "3.1 CONTEXTUAL DROPOUT ON FULLY CONNECTED LAYERS", "text": "We consider an MLP with two hidden layers of size 300 and 100, respectively, with ReLU activations. Dropout is applied to the input layer and the outputs of first two full-connected layers. We use MNIST as the benchmark. We compare contextual dropout with MC dropout (Gal & Ghahramani, 2016), concrete dropout (Gal et al., 2017), Gaussian dropout (Srivastava et al., 2014), and Bayes by Backprop (Blundell et al., 2015). Please see the detailed experimental setting in Appendix C.1.\nResults and analysis: In Table 1, we show accuracy, PAvPU (p-value threshold equal to 0.05) and, test predictive loglikelihood with error bars (5 random runs) for models with different dropouts under the challenging noisy data3 (added Gaussian noise with mean 0, variance 1). Note that the uncertainty\n3Results on original data is deferred to Table 6 in Appendix .\nresults for p-value threshold 0.05 is in general consistent with the results for other p-value thresholds (see more in Table 6 in Appendix). We observe that contextual dropout outperforms other methods in all metrics. Moreover, compared to Bayes by Backprop, contextual dropout is more memory and computationally efficient. As shown in Table 5 in Appendix, contextual dropout only introduces 16% additional parameters. However, Bayes by Backprop doubles the memory and increases the computations significantly as we need multiple draws of NN weights for uncertainty. Due to this reason, we do not include it for the following large model evaluations. We note that using the output of the gating network to directly scale activations (contextual gating) underperforms contextual dropout, which shows that the sampling process is important for preventing overfitting and improving robustness to noise. 
Adding a regular dropout layer on the gating activations (contextual gating + dropout) improves a little, but still underperforms contextual dropout, demonstrating that how we use the gating activations matters. In Figure 3, we observe that Bernoulli contextual dropout learns different dropout probabilities for different samples adapting the sample-level uncertainty which further verifies our motivation and supports the empirical improvements. For sample-dependent dropout, the dropout probabilities would not vanish to zero even though the prior for regularization is also learned, because the optimal dropout probabilities for each sample is not necessarily zero. Enabling different samples to have different network connections could greatly enhance the model’s capacity. The prior distribution also plays a different role here. Instead of preventing the dropout probabilities from going to zero, the prior tries to impose some similarities between the dropout probabilities of different samples.\nCombine contextual dropout with Deep Ensemble: Deep ensemble proposed by Lakshminarayanan et al. (2017) is a simple way to obtain uncertainty by ensembling models trained independently from different random initializations. In Figure 4, we show the performance of combining different dropouts with deep ensemble on noisy MNIST data. As the number of NNs increases, both accuracy and PAvPU increase for all dropouts. However, Bernoulli contextual dropout outperforms other dropouts by a large margin in both metrics, showing contextual dropout is compatible with deep ensemble and their combination can lead to significant improvements. Out of distribution (OOD) evaluation: we evaluate different dropouts in an OOD setting, where we train our model with clean data but test it on noisy data. Contextual dropout achieves accuracy of 78.08, consistently higher than MC dropout (75.22) or concrete dropout (74.93). Meanwhile, the proposed method is also better at uncertainty estimation with PAvPU of 78.49, higher than MC (74.61) or Concrete (75.49)." }, { "heading": "3.2 CONTEXTUAL DROPOUT ON CONVOLUTIONAL LAYERS", "text": "We apply dropout to the convolutional layers in WRN (Zagoruyko & Komodakis, 2016). In Figure 6 in Appendix, we show the architecture of WRN, where dropout is applied to the first convolutional layer in each network block; in total, dropout is applied to 12 convolutional layers. We evaluate on CIFAR-10 and CIFAR-100 . The detailed setting is provided in Appendix C.1.\nResults and analysis: We show the results for CIFAR-100 in Table 2 (see CIFAR-10 results in Tables 8-9 in Appendix). Accuracies, PAvPUs, and test predictive loglikelihoods are incorporated for both the original and noisy data. We consistently observe that contextual dropout outperforms other models in accuracy, uncertainty estimation, and loglikelihood.\nUncertainty visualization: We conducted extensive qualitative analyses for uncertainty evaluation. In Figures 9-11 in Appendix F.2, we visualize 15 CIFAR images (with true label) and compare the corresponding probability outputs of different dropouts in boxplots. 
We observe (1) contextual dropout predicts the correct answer if it is certain, (2) contextual dropout is certain and predicts the correct answers on many images for which MC or concrete dropout is uncertain, (3) MC or concrete dropout is uncertain about some easy examples or certain on some wrong predictions (see details in Appendix F.2), (4) on an image that all three methods have high uncertainty, contextual dropout places a higher probability on the correct answer than the other two. These observations verify that contextual dropout provides better calibrated uncertainty.\nLarge-scale experiments with ImageNet: Contextual dropout is also applied to the convolutional layers in ResNet-18, where we plug contextual dropout into a pretrained model, and fine-tune the pretrained model on ImageNet. In Table 3, we show it is even possible to finetune a pretrained model with contextual dropout module, and without much additional memory or run time cost, it achieves better performance than both the original model and the one with regular Gaussian dropout. Training model with contextual dropout from scratch can further improve the performance. See detailed experimental setting in Appendix C.1." }, { "heading": "3.3 CONTEXTUAL DROPOUT ON ATTENTION LAYERS", "text": "We further apply contextual dropout to the attention layers of VQA models, whose goal is to provide an answer to a question relevant to the content of a given image. We conduct experiments on the commonly used benchmark, VQA-v2 (Goyal et al., 2017), containing human-annotated questionanswer (QA) pairs. There are three types of questions: Yes/No, Number, and Other. In Figure 5, we show one example for each question type. There are 10 answers provided by 10 different human annotators for each question (see explanation of evaluation metrics in Appendix C.2). As shown in the examples, VQA is generally so challenging that there are often several different human annotations for a given image. Therefore, good uncertainty estimation becomes even more necessary.\nModel and training specifications: We use MCAN (Yu et al., 2019), a state-of-the-art Transformerlike model for VQA. Self-attention layers for question features and visual features, as well as the question-guided attention layers of visual features, are stacked one over another to build a deep model. Dropout is applied in every attention layer (after the softmax and before residual layer (Vaswani et al., 2017)) and fully-connected layer to prevent overfitting (Yu et al., 2019), resulting in 62 dropout layers\nin total. Experiments are conducted using the code of Yu et al. (2019) as basis. Detailed experiment setting is in Appendix C.2.\nResults and analysis: We compare different dropouts on both the original VQA dataset and a noisy version, where Gaussian noise with standard deviation 5 is added to the visual features. In Tables 4, we show the overall accuracy and uncertainty estimation. The results show that on the original data, contextual dropout achieves better accuracy and uncertainty estimation than the others. 
Moreover, on noisy data, where the prediction becomes more challenging and requires more model flexibility and robustness, contextual dropouts outperform their regular dropout counterparts by a large margin in terms of accuracy with consistent improvement across all three question types.\nVisualization: In Figures 12-15 in Appendix F.3, we visualize some image-question pairs, along with the human annotations and compare the predictions and uncertainty estimations of different dropouts. We show three of them in Figure 5. As shown in the plots, overall contextual dropout is more conservative on its wrong predictions and more certain on its correct predictions than other methods (see more detailed explanations in Appendix F.3)." }, { "heading": "4 CONCLUSION", "text": "We introduce contextual dropout as a simple and scalable data-dependent dropout module that achieves strong performance in both accuracy and uncertainty estimation on a variety of tasks including large scale applications. With an efficient parameterization of the coviariate-dependent variational distribution, contextual dropout boosts the flexibility of Bayesian neural networks with only slightly increased memory and computational cost. We demonstrate the general applicability of contextual dropout on fully connected, convolutional, and attention layers, and also show that contextual dropout masks are compatible with both Bernoulli and Gaussian distribution. On both image classification and visual question answering tasks, contextual dropout consistently outperforms corresponding baselines. Notably, on ImageNet, we find it is possible to improve the performance of a pretrained model by adding the contextual dropout module during a finetuning stage. Based on these results, we believe contextual dropout can serve as an efficient alternative to data-independent dropouts in the versatile tool box of dropout modules." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors acknowledge the support of Grants IIS-1812699, IIS-1812641, ECCS-1952193, CCF1553281, and CCF-1934904 from the U.S. National Science Foundation, and the Texas Advanced Computing Center for providing HPC resources that have contributed to the research results reported within this paper. M. Zhou acknowledges the support of a gift fund from ByteDance Inc." }, { "heading": "A DETAILS OF ARM GRADIENT ESTIMATOR FOR BERNOULLI CONTEXTUAL DROPOUT", "text": "In this section, we will explain the implementation details of ARM for Bernoulli contextual dropout. To compute the gradients with respect to the parameters of the variational distribution, a commonly used gradient estimator is the REINFORCE estimator (Williams, 1992) as\n∇ϕL(x, y) = Ez∼qφ(· |x)[r(x, z, y)∇ϕ log qφ(z |x)], r(x, z, y) := log pθ(y |x,z)pη(z) qφ(z |x) .\nThis gradient estimator is, however, known to have high variance (Yin & Zhou, 2018). To mitigate this issue, we use ARM to compute the gradient with Bernoulli random variable.\nARM gradient estimator: In general, denoting σ(α) = 1/(1 + e−α) as the sigmoid function, ARM expresses the gradient of E(α) = Ez∼∏Kk=1 Ber(zk;σ(αk))[r(z)] as ∇αE(α) = Eπ∼∏Kk=1 Uniform(πk;0,1)[gARM(π)], gARM(π) := [r(ztrue)− r(zsudo)](1/2− π), (6)\nwhere ztrue := 1[π<σ(α)] and zsudo := 1[π>σ(−α)] are referred to as the true and pseudo actions, respectively, and 1[·] ∈ {0, 1}K is an indicator function. Sequential ARM: Note that the above equation is not directly applicable to our model due to the cross-layer dependence. 
However, the dropout masks within each layer are independent of each other conditioned on those of the previous layers, so we can break our expectation into a sequence and apply ARM sequentially. We rewrite L = Ez∼qφ(· |x)[r(x, z, y)]. When computing ∇ϕL, we can ignore the ϕ in r as the expectation of ∇ϕ log qφ(z |x) is zero. Using the chain rule, we have ∇ϕL = ∑Ll=1 ∇αlL ∇ϕαl. With the decomposition L = Ez1:l−1∼qφ(· |x)Ezl∼Ber(σ(αl))[r(x, z1:l, y)], where r(x, z1:l, y) := Ezl+1:L∼qφ(· |x,z1:l)[r(x, z, y)], we know\n∇αlL = Ez1:l−1∼qφ(· |x)Eπl∼∏k Uniform(πlk;0,1)[gARM(πl)], gARM(πl) = [r(x, z1:l−1, zltrue, y) − r(x, z1:l−1, zlsudo, y)](1/2 − πl),\nwhere zltrue := 1[πl<σ(αl)] and zlsudo := 1[πl>σ(−αl)]. We estimate the gradients via Monte Carlo integration. We provide the pseudo code in Algorithm 1.\nImplementation details: The computational complexity of sequential ARM is O(L) times that of the decoder computation. Although it is embarrassingly parallelizable, in practice, with limited computational resources available, it may be challenging to use sequential ARM when L is fairly large. In such cases, the original non-sequential ARM can be viewed as an approximation that strikes a good balance between efficiency and accuracy (see the pseudo code in Algorithm 2 in Appendix). In our case, for image classification models, L is small enough (3 for MLP, 12 for WRN) for us to use sequential ARM. For VQA, L is as large as 62 and hence we choose the non-sequential ARM.\nTo control the learning rate of the encoder, we use a scaled sigmoid function: σt(αl) = 1/(1 + exp(−tαl)), where a larger t corresponds to a larger learning rate for the encoder. This function is also used in Li & Ji (2019) to facilitate the transition of probability between 0 and 1 for the purpose of pruning NN weights."
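To make the per-layer computation concrete, here is a minimal PyTorch sketch of the ARM estimate in Equation (6) for a single dropout layer; the sequential and non-sequential variants in Algorithms 1–2 repeat this computation across layers. The function name and the reward_fn argument, which stands in for r(x, z, y), are our own illustrative choices, not part of any released implementation.

    import torch

    def arm_grad_single_layer(alpha, reward_fn):
        # alpha: logits of the dropout probabilities, shape (batch, K)
        # reward_fn: maps a {0,1} mask of shape (batch, K) to a per-sample reward, shape (batch,)
        pi = torch.rand_like(alpha)                       # pi ~ Uniform(0, 1)
        z_true = (pi < torch.sigmoid(alpha)).float()      # true action   1[pi < sigma(alpha)]
        z_pseudo = (pi > torch.sigmoid(-alpha)).float()   # pseudo action 1[pi > sigma(-alpha)]
        with torch.no_grad():
            r_true = reward_fn(z_true)
            r_pseudo = reward_fn(z_pseudo)
        # Eq. (6): g_ARM = [r(z_true) - r(z_pseudo)] * (1/2 - pi), an estimate of dE/dalpha
        grad_alpha = (r_true - r_pseudo).unsqueeze(-1) * (0.5 - pi)
        return grad_alpha, z_true

    # The encoder parameters phi are then updated by backpropagating grad_alpha through
    # alpha = h_phi(g_theta(x)), e.g. alpha.backward(gradient=grad_alpha) for gradient ascent.

When the scaled sigmoid σt is used, σ(α) above is replaced by σt(α) = 1/(1 + exp(−tα)) and the estimate is multiplied by the factor t, as in Algorithm 1.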
}, { "heading": "B ALGORITHMS", "text": "Below, we present training algorithms for both Bernoulli and Gaussian contextual dropout.\nAlgorithm 1: Bernoulli contextual dropout with sequential ARM Input: data D, r, {glθ}Ll=1, {hlϕ}Ll=1, step size s Output: updated θ, ϕ, η repeat Gϕ = 0; Sample x, y from data D; x0 = x for l = 1 to L do U l = glθ(x l−1), αl = hlϕ(U l)\nSample πl from Uniform(0,1); zltrue := 1[πl<σt(αl)]; zlsudo := 1[πl>σt(−αl)]; if zltrue = zlsudo then rlsudo =None; else xlsudo = U\nl zl,sudo for k = l + 1 to L do Uksudo = g k θ(x k−1 sudo ), α k sudo = h k ϕ(U k sudo)\nSample πksudo from Uniform(0,1); zksudo := 1[πksudo<σt(αksudo)]; xksudo = U k sudo zk,sudo;\nend for rlsudo = r(x L sudo, y)\nend if xl = U l zltrue\nend for rtrue = r(x L true, y) for l = 1 to L do if rlsudo is not None then Gϕ = Gϕ + t(rtrue − rlsudo)(1/2− πl)∇ϕαl ;\nend if end for ϕ = ϕ+ sGϕ, with step-size s; θ = θ + s\n∂ log pθ(y |x,z1:L,true) ∂θ ;\nη = η + s ∂ log pη(z1:L,true)\n∂η ; until convergence\nAlgorithm 2: Bernoulli contextual dropout with independent ARM Input: data D, r, {glθ}Ll=1, {hlϕ}Ll=1, step size s Output: updated θ, ϕ, η repeat Gϕ = 0; Sample x, y from data D; x0 = x for l = 1 to L do U l = glθ(x l−1), αl = hlϕ(U l)\nSample πl from Uniform(0,1); zltrue := 1[πl<σt(αl)]; xl = U l zltrue\nend for rtrue = r(x L true, y) x0sudo = x for l = 1 to L do U lsudo = g l θ(x l−1 sudo), α l sudo = h l ϕ(U l sudo)\nzlsudo := 1[πlsudo>σt(−αlsudo)]; xlsudo = U l sudo zlsudo\nend for rsudo = r(x L sudo, y); for l = 1 to L do Gϕ = Gϕ + t(rtrue − rsudo)(1/2− πl)∇ϕαl ; end for ϕ = ϕ+ sGϕ, with step-size s; θ = θ + s\n∂ log pθ(y |x,z1:L,true) ∂θ ;\nη = η + s ∂ log pη(z1:L,true)\n∂η ; until convergence\nAlgorithm 3: Gaussian contextual dropout with reparamaterization trick Input: data D, r, {glθ}Ll=1, {hlϕ}Ll=1, step size s Output: updated θ, ϕ, η repeat\nSample x, y from data D; x0 = x for l = 1 to L do U l = glθ(x l−1), αl = hlϕ(U l)\nSample l from N (0, 1); τ l = √ 1−σt(αl) σt(αl) ; zl := 1 + τ l l; xl = U l zl\nend for ϕ = ϕ+ s∇ϕ(log pθ(y |x, z1:L)− log qφ(z1:L|x)log pη(z1:L) ), with step-size s; θ = θ + s∂ log pθ(y |x,z1:L)∂θ ; η = η + s\n∂ log pη(z1:L) ∂η ;\nuntil convergence" }, { "heading": "C DETAILS OF EXPERIMENTS", "text": "All experiments are conducted using a single Nvidia Tesla V100 GPU.\nChoice of hyper-parameters in Contextual Dropout: Contextual dropout introduces two additional hyperparameters compared to regular dropout. One is the channel factor γ for the encoder network. In our experiments, the results are not sensitive to the choice of the value of the channel factor γ. Any number from 8 to 16 would give similar results, which is also observed in (Hu et al., 2018). The other is the sigmoid scaling factor t that controls the learning rate of the encoder. We find that the performance is not that sensitive to its value and it is often beneficial to make it smaller than the learning rate of the decoder. In all experiments considered in the paper, which cover various noise levels and model sizes, we have simply fixed it at t = 0.01.\nC.1 IMAGE CLASSIFICATION\nMLP: We consider an MLP with two hidden layers of size 300 and 100, respectively, and use ReLU activations. Dropout is applied to all three full-connected layers. We use MNIST as the benchmark. All models are trained for 200 epochs with batch size 128 and the Adam optimizer (Kingma & Ba, 2014) (β1 = 0.9, β2 = 0.999). The learning rate is 0.001. 
We compare contextual dropout with MC dropout (Gal & Ghahramani, 2016) and concrete dropout (Gal et al., 2017). For MC dropout, we use the hand-tuned dropout rate at 0.2. For concrete dropout, we initialize the dropout rate at 0.2 for Bernoulli dropout and the standard deviation parameter at 0.5 for Gaussian dropout. and set the Concrete temperature at 0.1 (Gal et al., 2017). We initialize the weights in contextual dropout with He-initialization preserving the magnitude of the variance of the weights in the forward pass (He et al., 2015). We initialize the biases in the way that the dropout rate is 0.2 when the weights for contextual dropout are zeros. We also initialize our prior dropout rate at 0.2. For hyperparameter tuning, we hold out 10, 000 samples randomly selected from the training set for validation. We use the chosen hyperparameters to train on the full training set (60, 000 samples) and evaluate on the testing set (10, 000 samples). We use Leaky ReLU (Xu et al., 2015a) with 0.1 as the non-linear\noperator in contextual dropout. The reduction ratio γ is set as 10, and sigmoid scaling factor t as 0.01. For Bayes by Backprop, we use − log σ1 = 0,− log σ2 = 6, π = 0.2 (following the notation in the original paper). For evaluation, we set M = 20.\nWRN: We consider WRN (Zagoruyko & Komodakis, 2016), including 25 convolutional layers. In Figure 6, we show the architecture of WRN, where dropout is applied to the first convolutional layer in each network block; in total, dropout is applied to 12 convolutional layers. We use CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) as benchmarks. All experiments are trained for 200 epochs with the Nesterov Momentum optimizer (Nesterov, 1983), whose base learning rate is set as 0.1, with decay factor 1/5 at epochs 60 and 120. All other hyperparameters are the same as MLP except for Gaussian dropout, where we use standard deviation equal to 0.8 for the CIFAR100 with no noise and 1 for all other cases.\nResNet: We used ResNet-18 as the baseline model. We use momentum SGD, with learning rate 0.1, and momentum weight 0.9. Weight decay is utilized with weight 1e−4. For models trained from scratch, we train the models with 90 epochs. For finetuning models, we start with pretrained baseline ResNet models and finetune for 1 epoch.\nC.2 VQA\nDataset: The dataset is split into the training (80k images and 444k QA pairs), validation (40k images and 214k QA pairs), and testing (80k images and 448k QA pairs) sets. We perform evaluation on the validation set as the true labels for the test set are not publicly available (Deng et al., 2018).\nEvaluation metric: the evaluation for VQA is different from image classification. The accuracy for a single answer could be a number between 0 and 1 (Goyal et al., 2017): Acc(ans) = min{(#human that said ans)/3, 1}. We generalize the uncertainty evaluation accordingly:\nnac = ∑ i AcciCeri, niu = ∑ i(1− Acci)(1− Ceri) , nau = ∑ i Acci(1− Ceri), nic = ∑ i(1− Acci)(Ceri)\nwhere for the ith prediction Acci is the accuracy and Ceri ∈ {0, 1} is the certainty indicator. Experimental setting: We follow the setting by Yu et al. (2019), where bottom-up features extracted from images by Faster R-CNN (Ren et al., 2015) are used as visual features, pretrained wordembeddings (Pennington et al., 2014) and LSTM (Hochreiter & Schmidhuber, 1997) are used to extract question features. We adopt the encoder-decoder structure in MCAN with six co-attention layers. We use the same model hyperparameters and training settings in Yu et al. 
(2019) as follows: the dimensionality of input image features, input question features, and fused multi-modal features are set to be 2048, 512, and 1024, respectively. The latent dimensionality in the multi-head attention is 512, the number of heads is set to 8, and the latent dimensionality for each head is 64. The size of the answer vocabulary is set to N = 3129 using the strategy in Teney et al. (2018). To train the MCAN model, we use the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9 and β2 = 0.98. The base learning rate is set to min(2.5te−5, 1e−4), where t is the current epoch number starting from 1. After 10 epochs, the learning rate is decayed by 1/5 every 2 epochs. All the models are trained up to 13 epochs with the same batch size of 64.\nWe only conduct training on the training set (no data augmentation with visual genome dataset), and evaluation on the validation set. For MC dropout, we use the dropout rate of 0.1 for Bernoulli dropout as in Yu et al. (2019) and the standard deviation parameter of 1/3 for Gaussian dropout. For concrete dropout, we initialize the dropout rate at 0.1 and set the Concrete temperature at 0.1 (Gal et al., 2017). For hyperparameter tuning, we randomly hold out 20% of the training set for validation. After tuning, we train on the whole training set and evaluate on the validation set. We initialize the weights with He-initialization preserving the magnitude of the variance of the weights in the forward pass (He\net al., 2015). We initialize the biases in the way that the dropout rate is 0.1 when the weights for contextual dropout are zeros. We also initialize our prior dropout rate at 0.1. We use ReLU as the non-linear operator in contextual dropout. We use γ = 8 for layers with Cld > 8, otherwise γ = 1. We set α ∈ RdV for residual layers." }, { "heading": "D STATISTICAL TEST FOR UNCERTAINTY ESTIMATION", "text": "Consider M posterior samples of predictive probabilities {pm}Mm=1, where pm is a vector with the same dimension as the number of classes. For single-label classification models, pm is produced by a softmax layer and sums to one, while for multi-label classification models, pm is produced by a sigmoid layer and each element is between 0 and 1. The former output is used in most image classification models, while the latter is often used in VQA where multiple answers could be true for a single input. In both cases, to quantify how confident our model is about this prediction, we evaluate whether the difference between the probabilities of the first and second highest classes is statistically significant with a statistical test. We conduct the normality test on the output probabilities for both image classification and VQA models, and find most of the output probabilities are approximately normal (we randomly pick some Q-Q plots (Ghasemi & Zahediasl, 2012) and show them in Figures 7 and 8). This motivates us to use two-sample t-test4. In the following, we briefly summarize the two-sample t-test we use.\nTwo sample hypothesis testing is an inferential statistical test that determines whether there is a statistically significant difference between the means in two groups. The null hypothesis for the t-test is that the population means from the two groups are equal: µ1 = µ2, and the alternative hypothesis is µ1 6= µ2. Depending on whether each sample in one group can be paired with another sample in the other group, we have either paired t-test or independent t-test. In our experiments, we utilize both types of two sample t-test. 
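As a minimal illustration, the sketch below computes a p-value for a single prediction from the M posterior samples of predictive probabilities using SciPy; the choice between the paired and independent test is explained next, and ranking the classes by their mean probabilities is our own simplification (the variable and function names are ours).

    import numpy as np
    from scipy import stats

    def prediction_p_value(probs, single_label=True):
        # probs: array of shape (M, num_classes), one row per posterior sample
        probs = np.asarray(probs)
        mean_probs = probs.mean(axis=0)
        top1, top2 = np.argsort(mean_probs)[-2:][::-1]   # two most probable classes
        if single_label:
            # softmax outputs are dependent across classes -> paired two-sample t-test
            _, p_value = stats.ttest_rel(probs[:, top1], probs[:, top2])
        else:
            # sigmoid outputs are independent given the logits -> independent two-sample t-test
            _, p_value = stats.ttest_ind(probs[:, top1], probs[:, top2])
        return p_value, top1

    # A small p-value means the gap between the two top classes is statistically significant,
    # i.e. the model is certain about top1; e.g. certain = (p_value < 0.05).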
For a single-label model, the probabilities are dependent between two classes due to the softmax layer; therefore, we use the paired two-sample t-test. For a multi-label model, the probabilities are independent given the logits of the output layer, so we use the independent two-sample t-test.\nFor the paired two-sample t-test, we calculate the difference between the paired observations and calculate the t-statistic as\nT = Ȳ / (s/√N),\nwhere Ȳ is the mean difference between the paired observations, s is the standard deviation of the differences, and N is the number of observations. Under the null hypothesis, this statistic follows a t-distribution with N − 1 degrees of freedom if the difference is normally distributed. Then, we use this t-statistic and t-distribution to calculate the corresponding p-value.\nFor the independent two-sample t-test, we calculate the t-statistic as\nT = (Ȳ1 − Ȳ2) / √(s2/N1 + s2/N2), s2 = (∑(y1 − Ȳ1)2 + ∑(y2 − Ȳ2)2) / (N1 + N2 − 2),\nwhere N1 and N2 are the sample sizes, and Ȳ1 and Ȳ2 are the sample means. Under the null hypothesis, this statistic follows a t-distribution with N1 + N2 − 2 degrees of freedom if both y1 and y2 are normally distributed. We calculate the p-value accordingly.\nTo justify the assumptions of the two-sample t-test, we run a normality test on the output probabilities for both image classification and VQA models. We find that most of the output probabilities are approximately normal. We randomly pick some Q-Q plots (Ghasemi & Zahediasl, 2012) and show them in Figures 7 and 8.\nE TABLES AND FIGURES FOR p-VALUE 0.01, 0.05 AND 0.1" }, { "heading": "F QUALITATIVE ANALYSIS", "text": "In this section, we include the Q-Q plots of the output probabilities as the normality test for the assumptions of the two-sample t-test. In Figure 7, we test the normality of the differences between the highest and second highest probabilities on the WRN model with contextual dropout trained on the original CIFAR-10 dataset. In Figure 8, we test the normality of the highest and second highest probabilities (separately) on the VQA model with contextual dropout trained on the original VQA-v2 dataset. We use 20 data points for the plots.\nF.1 NORMALITY TEST OF OUTPUT PROBABILITIES\n4Note that we also tried a nonparametric test, the Wilcoxon rank-sum test, and obtained similar results.\nF.2 BOXPLOT FOR CIFAR-10\nIn this section, we visualize the 5 most uncertain images for each dropout (only including Bernoulli, Concrete, and Contextual Bernoulli dropout for simplicity), leading to 15 images in total. The true images with the labels are on the left side and boxplots of the probability distributions of different dropouts are on the right side. All models are trained on the original CIFAR-10 dataset. Among these 15 images, we observe that contextual dropout predicts the right answer if it is certain, and it is certain and predicts the right answer on many images that MC dropout or concrete dropout is uncertain about (e.g., many images in Figures 9-10). However, MC dropout or concrete dropout is uncertain about some easy examples (images in Figures 9-10) or certain on some wrong predictions (images in Figure 11).
Moreover, on an image that all three methods have high uncertainty, concrete dropout often places a higher probability on the correct answer than the other two methods (images in Figure 11).\nF.3 VISUALIZATION FOR VISUAL QUESTION ANSWERING\nIn Figures 12-15, we visualize some image-question pairs, along with the human annotations (for simplicity, we only show the different answers in the annotation set) and compare the predictions and uncertainty estimations of different dropouts (only include Bernoulli dropout, Concrete dropout, and contextual Bernoulli dropout) on the noisy data. We include 12 randomly selected image-question pairs, and 6 most uncertain image-question pairs for each dropout as challenging samples (30 in total). For each sample, we manually rank different methods by the general rule that accurate and certain is the most preferred, followed by accurate and uncertain, inaccurate and uncertain, and then inaccurate and certain. For each image-question pair, we rank three different dropouts based on their answers and p-values, and highlight the best performing one, the second best, and the worst with green, yellow, and red, respectively (tied ranks are allowed). As shown in the plots, overall contextual dropout is more conservative on its wrong predictions and more certain on its correct predictions than other methods for both randomly selected images and challenging images." } ]
2021
CONTEXTUAL DROPOUT: AN EFFICIENT SAMPLE-DEPENDENT DROPOUT MODULE
SP:ee9764a48b109b9860c0a6f657a6cdd819237e7e
[ "The authors propose a end-to-end deep learning model called Net-DNF to handle tabular data. The architecture of Net-DNF has four layers: the first layer is a dense layer (learnable weights) with tanh activation eq(1). The second layer (DNNF) is formed by binary conjunctions over literals eq(2). The third layer is an embedding formed by n DNNF blocks eq(3). the last layer is a linear transformation of the embedding with a sigmoid activation eq(4). The authors also propose a feature selection method based on a trainable binarized selection with a modified L1 and L2 regularization. In the experimental analysis, Net-DNF outperforms fully connected networks. " ]
A challenging open question in deep learning is how to handle tabular data. Unlike domains such as image and natural language processing, where deep architectures prevail, there is still no widely accepted neural architecture that dominates tabular data. As a step toward bridging this gap, we present Net-DNF a novel generic architecture whose inductive bias elicits models whose structure corresponds to logical Boolean formulas in disjunctive normal form (DNF) over affine soft-threshold decision terms. Net-DNFs also promote localized decisions that are taken over small subsets of the features. We present extensive experiments showing that Net-DNFs significantly and consistently outperform fully connected networks over tabular data. With relatively few hyperparameters, Net-DNFs open the door to practical end-to-end handling of tabular data using neural networks. We present ablation studies, which justify the design choices of Net-DNF including the inductive bias elements, namely, Boolean formulation, locality, and feature selection.
[ { "affiliations": [], "name": "Liran Katzir" }, { "affiliations": [], "name": "Gal Elidan" } ]
[ { "authors": [ "Martin Anthony" ], "title": "Connections between neural networks and Boolean functions", "venue": "In Boolean Methods and Models,", "year": 2005 }, { "authors": [ "Sercan Ömer Arik", "Tomas Pfister" ], "title": "Tabnet: Attentive interpretable tabular learning", "venue": "CoRR, abs/1908.07442,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Nicholas Leonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Jianbo Chen", "Le Song", "Martin J Wainwright", "Michael I Jordan" ], "title": "Learning to explain: An information-theoretic perspective on model interpretation", "venue": "arXiv preprint arXiv:1802.07814,", "year": 2018 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Philip Derbeko", "Ran El-Yaniv", "Ron Meir" ], "title": "Variance optimized bagging", "venue": "In European Conference on Machine Learning,", "year": 2002 }, { "authors": [ "Ji Feng", "Yang Yu", "Zhi-Hua Zhou" ], "title": "Multi-layered gradient boosting decision trees", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jerome H Friedman" ], "title": "Greedy function approximation: a gradient boosting machine", "venue": "Annals of Statistics,", "year": 2001 }, { "authors": [ "Fangcheng Fu", "Jiawei Jiang", "Yingxia Shao", "Bin Cui" ], "title": "An experimental evaluation of large scale GBDT systems", "venue": "Proc. VLDB Endow.,", "year": 2019 }, { "authors": [ "Geoffrey Hinton" ], "title": "Neural networks for machine learning", "venue": "Coursera, video lectures,", "year": 2012 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized neural networks: Training neural networks with low precision weights and activations", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Robert A Jacobs" ], "title": "Bias/variance analyses of mixtures-of-experts architectures", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Robert A Jacobs", "Michael I Jordan", "Steven J Nowlan", "Geoffrey E Hinton" ], "title": "Adaptive mixtures of local experts", "venue": "Neural Computation,", "year": 1991 }, { "authors": [ "Guolin Ke", "Qi Meng", "Thomas Finley", "Taifeng Wang", "Wei Chen", "Weidong Ma", "Qiwei Ye", "TieYan Liu" ], "title": "Lightgbm: A highly efficient gradient boosting decision tree", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Guolin Ke", "Jia Zhang", "Zhenhui Xu", "Jiang Bian", "Tie-Yan Liu" ], "title": "Tabnn: A universal neural network solution for tabular data", "venue": null, "year": 2018 }, { "authors": [ "Yifeng Li", "Chih-Yu Chen", "Wyeth W Wasserman" ], "title": "Deep feature selection: theory and application to identify enhancers and promoters", "venue": "Journal of Computational Biology,", "year": 2016 }, { "authors": [ "Ron Meir", "Ran El-Yaniv", "Shai Ben-David" ], "title": "Localized boosting", "venue": "In COLT, pp. 190–199. 
Citeseer,", "year": 2000 }, { "authors": [ "Sergei Popov", "Stanislav Morozov", "Artem Babenko" ], "title": "Neural oblivious decision ensembles for deep learning on tabular data", "venue": "arXiv preprint arXiv:1909.06312,", "year": 2019 }, { "authors": [ "Liudmila Prokhorenkova", "Gleb Gusev", "Aleksandr Vorobev", "Anna Veronika Dorogush", "Andrey Gulin" ], "title": "Catboost: unbiased boosting with categorical features", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "J Ross Quinlan" ], "title": "Discovering rules by induction from large collections of examples", "venue": "Expert Systems in the Micro electronics Age,", "year": 1979 }, { "authors": [ "Mojtaba Seyedhosseini", "Tolga Tasdizen" ], "title": "Disjunctive normal random forests", "venue": "Pattern Recognition,", "year": 2015 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding machine learning: From theory to algorithms", "venue": "Cambridge university press,", "year": 2014 }, { "authors": [ "Ira Shavitt", "Eran Segal" ], "title": "Regularization learning networks: Deep learning for tabular datasets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hans Ulrich Simon" ], "title": "On the number of examples and stages needed for learning decision trees", "venue": "In Proceedings of the Third Annual Workshop on Computational Learning Theory,", "year": 1990 }, { "authors": [ "Stephen Tyree", "Kilian Q. Weinberger", "Kunal Agrawal", "Jennifer Paykin" ], "title": "Parallel boosted regression trees for web search ranking", "venue": "Proceedings of the 20th International Conference on World Wide Web,", "year": 2011 }, { "authors": [ "Joaquin Vanschoren", "Jan N Van Rijn", "Bernd Bischl", "Luis Torgo" ], "title": "Openml: networked science in machine learning", "venue": "ACM SIGKDD Explorations Newsletter,", "year": 2014 }, { "authors": [ "Theodore Vasiloudis", "Hyunsu Cho", "Henrik Boström" ], "title": "Block-distributed gradient boosted trees", "venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2019 }, { "authors": [ "Yongxin Yang", "Irene Garcia Morillo", "Timothy M Hospedales" ], "title": "Deep neural decision trees", "venue": "arXiv preprint arXiv:1806.06988,", "year": 2018 }, { "authors": [ "Jerry Ye", "Jyh-Herng Chow", "Jiang Chen", "Zhaohui Zheng" ], "title": "Stochastic gradient boosted distributed decision trees", "venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management,", "year": 2009 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "INVASE: instance-wise variable selection using neural networks", "venue": "In 7th International Conference on Learning Representations, ICLR (Poster)", "year": 2019 }, { "authors": [ "Hui Zou", "Trevor Hastie" ], "title": "Regularization and variable selection via the elastic net", "venue": "Journal of the royal statistical society: series B (statistical methodology),", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "A key point in successfully applying deep neural models is the construction of architecture families that contain inductive bias relevant to the application domain. Architectures such as CNNs and RNNs have become the preeminent favorites for modeling images and sequential data, respectively. For example, the inductive bias of CNNs favors locality, as well as translation and scale invariances. With these properties, CNNs work extremely well on image data, and are capable of generating problem-dependent representations that almost completely overcome the need for expert knowledge. Similarly, the inductive bias promoted by RNNs and LSTMs (and more recent models such as transformers) favors both locality and temporal stationarity.\nWhen considering tabular data, however, neural networks are not the hypothesis class of choice. Most often, the winning class in learning problems involving tabular data is decision forests. In Kaggle competitions, for example, gradient boosting of decision trees (GBDTs) (Chen & Guestrin, 2016; Friedman, 2001; Prokhorenkova et al., 2018; Ke et al., 2017) are generally the superior model. While it is quite practical to use GBDTs for medium size datasets, it is extremely hard to scale these methods to very large datasets. Scaling up the gradient boosting models was addressed by several papers (Ye et al., 2009; Tyree et al., 2011; Fu et al., 2019; Vasiloudis et al., 2019). The most significant computational disadvantage of GBDTs is the need to store (almost) the entire dataset in memory1. Moreover, handling multi-modal data, which involves both tabular and spatial data (e.g., medical records and images), is problematic. Thus, since GBDTs and neural networks cannot be organically optimized, such multi-modal tasks are left with sub-optimal solutions. The creation of a purely neural model for tabular data, which can be trained with SGD end-to-end, is therefore a prime open objective.\nA few works have aimed at constructing neural models for tabular data (see Section 5). Currently, however, there is still no widely accepted end-to-end neural architecture that can handle tabular data and consistently replace fully-connected architectures, or better yet, replace GBDTs. Here we present Net-DNFs, a family of neural network architectures whose primary inductive bias is an ensemble comprising a disjunctive normal form (DNF) formulas over linear separators. This family also promotes (input) feature selection and spatial localization of ensemble members. These inductive\n1This disadvantage is shared among popular GBDT implementations: XGBoost, LightGBM, and CatBoost.\nbiases have been included by design to promote conceptually similar elements that are inherent in GBDTs and random forests. Appealingly, the Net-DNF architecture can be trained end-to-end using standard gradient-based optimization. Importantly, it consistently and significantly outperforms FCNs on tabular data, and can sometime even outperform GBDTs.\nThe choice of appropriate inductive bias for specialized hypothesis classes for tabular data is challenging since, clearly, there are many different kinds of such data. Nevertheless, the “universality” of forest methods in handling a wide variety of tabular data suggests that it might be beneficial to emulate, using neural networks, the important elements that are part of the tree ensemble representation and algorithms. 
Concretely, every decision tree is equivalent to some DNF formula over axis-aligned linear separators (see details in Section 3). This makes DNFs an essential element in any such construction. Secondly, all contemporary forest ensemble methods rely heavily on feature selection. This feature selection is manifested both during the induction of each individual tree, where features are sequentially and greedily selected using information gain or other related heuristics, and by uniform sampling features for each ensemble member. Finally, forest methods include an important localization element – GBDTs with their sequential construction within a boosting approach, where each tree re-weights the instance domain differently – and random forests with their reliance on bootstrap sampling. Net-DNFs are designed to include precisely these three elements.\nAfter introducing Net-DNF, we include a Vapnik-Chervonenkins (VC) comparative analysis of DNFs and trees showing that DNFs potentially have advantage over trees when the input dimension is large and vice versa. We then present an extensive empirical study. We begin with an ablation study over three real-life tabular data prediction tasks that convincingly demonstrates the importance of all three elements included in the Net-DNF design. Second, we analyze our novel feature selection component over controlled synthetic experiments, which indicate that this component is of independent interest. Finally, we compare Net-DNFs to FCNs and GBDTs over several large classification tasks, including two past Kaggle competitions. Our results indicate that Net-DNFs consistently outperform FCNs, and can sometime even outperform GBDTs." }, { "heading": "2 DISJUNCTIVE NORMAL FORM NETWORKS (NET-DNFS)", "text": "In this section we introduce the Net-DNF architecture, which consists of three elements. The main component is a block of layers emulating a DNF formula. This block will be referred to as a Disjunctive Normal Neural Form (DNNF). The second and third components, respectively, are a feature selection module, and a localization one. In the remainder of this section we describe each component in detail. Throughout our description we denote by x ∈ Rd a column of input feature vectors, by xi, its ith entry, and by σ(·) the sigmoid function." }, { "heading": "2.1 A DISJUNCTIVE NORMAL NEURAL FORM (DNNF) BLOCK", "text": "A disjunctive normal neural form (DNNF) block is assembled using a two-hidden-layer network. The first layer creates affine “literals” (features) and is trainable. The second layer implements a number of soft conjunctions over the literals, and the third output layer is a neural OR gate. Importantly, only the first layer is trainable, while the two other are binary and fixed.\nWe begin by describing the neural AND and OR gates. For an input vector x, we define soft, differentiable versions of such gates as\nOR(x) , tanh ( d∑ i=1 xi + d− 1.5 ) , AND(x) , tanh ( d∑ i=1 xi − d+ 1.5 ) .\nThese definitions are straightforwardly motivated by the precise neural implementation of the corresponding binary gates. Notice that by replacing tanh by a binary activation and changing the bias constant from 1.5 to 1, we obtain an exact implementation of the corresponding logical gates for binary input vectors (Anthony, 2005; Shalev-Shwartz & Ben-David, 2014); see a proof of this statement in Appendix A. Notably, each unit does not have any trainable parameters. We now define the AND gate in a vector form to project the logical operation over a subset of variables. 
The projection is controlled by an indicator column vector (a mask) u ∈ {0, 1}d. With respect to such a projection vector u, we define the corresponding projected gate as ANDu(x) , tanh ( uTx− ||u||1 + 1.5 ) .\nEquipped with these definitions, a DNNF(x) : Rd → R with k conjunctions over m literals is, L(x) , tanh ( xTW + b ) ∈ Rm (1)\nDNNF(x) , OR([ANDc1(L(x)),ANDc2(L(x)), . . . ,ANDck(L(x))]) . (2)\nEquation (1) defines L(x) that generates m “neural literals”, each of which is the result of a tanhactivation of a (trainable) affine transformation. The (trainable) matrix W ∈ Rd×m, as well as the row vector bias term b ∈ Rm, determine the affine transformations for each literal such that each of its columns corresponds to one literal. Equation (2) defines a DNNF. In this equation, the vectors ci ∈ {0, 1}m, 1 ≤ i ≤ k, are binary indicators such that cij = 1 iff the jth literal belongs to the ith conjunction. In our design, each literal belongs to a single conjunction. These indicator vectors are defined and fixed according to the number and length of the conjunctions (See Appendix D.2)." }, { "heading": "2.2 NET-DNFS", "text": "The embedding layer of a Net-DNF with n DNNF blocks is a simple concatenation\nE(x) , [DNNF1(x),DNNF2(x), . . . ,DNNFn(x)]. (3)\nDepending on the application, the final Net-DNF is a composition of an output layer over E(x). For example, for binary classification (logistic output layer), Net-DNF(x) : Rd → (0, 1) is,\nNet-DNF(x) , σ ( n∑ i=1 wiDNNFi(x) + bi ) . (4)\nTo summarize, a Net-DNF is always a four-layer network (including the output layer), and only the first and last layers are learned. Each DNNF block has two parameters: the number of conjunctions k and the length m of these conjunctions, allowing for a variety of Net-DNF architectures. In all our experiments we considered a single Net-DNF architecture that has a fixed diversity of DNNF blocks which includes a number of different DNNF groups with different k, each of which has a number of conjunction sizes m (see details in Appendix D.2). The number n of DNNFs was treated as a hyperparameter, and selected based on a validation set as described on Appendix D.1." }, { "heading": "2.3 FEATURE SELECTION", "text": "One key strategy in decision tree training is greedy feature selection, which is performed hierarchically at any split, and allows decision trees to exclude irrelevant features. Additionally, decision tree ensemble algorithms apply random sampling to select a subset of the features, which is used to promote diversity, and prevent different trees focusing on the same set of dominant features in their greedy selection. In line with these strategies, we include in our Net-DNFs conceptually similar feature selection elements: (1) a subset of features uniformly and randomly sampled for each DNNF; (2) a trainable mechanism for feature selection, applied on the resulting random subset. These two elements are combined and implemented in the affine literal generation layer described in Equation (1), and applied independently for each DNNF. We now describe these techniques in detail.\nRecalling that d is the input dimension, the random selection is made by generating a stochastic binary mask, ms ∈ {0, 1}d (each block has its own mask), such that the probability of any entry being 1 is p (see Appendix D.2 for details on setting this parameter). For a given mask ms, this selection can be applied over affine literals using a simple product diag(ms)W , where W is the matrix of Equation (1). 
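A minimal NumPy sketch of the building blocks in Equations (1)–(4), together with the random mask ms just described, may help fix ideas. The shapes and helper names are our own illustrative choices, not the authors' implementation; the trainable mask introduced next multiplies W in the same way.

    import numpy as np

    def soft_or(v):
        # OR(v) = tanh(sum(v) + d - 1.5), where d = len(v)
        return np.tanh(np.sum(v) + v.shape[-1] - 1.5)

    def soft_and_projected(literals, c):
        # AND_c(L) = tanh(c^T L - ||c||_1 + 1.5) for a binary indicator c
        return np.tanh(literals @ c - np.sum(c) + 1.5)

    def dnnf(x, W, b, conjunctions, m_s):
        # x: (d,), W: (d, m), b: (m,), conjunctions: list of binary vectors c_i in {0,1}^m,
        # m_s: binary feature-selection mask in {0,1}^d, applied as diag(m_s) W.
        literals = np.tanh((x * m_s) @ W + b)                            # Eq. (1)
        conj = np.array([soft_and_projected(literals, c) for c in conjunctions])
        return soft_or(conj)                                             # Eq. (2)

    def net_dnf(x, blocks, w, b_out):
        # blocks: list of (W, b, conjunctions, m_s) tuples, one per DNNF block
        e = np.array([dnnf(x, *blk) for blk in blocks])                  # Eq. (3)
        return 1.0 / (1.0 + np.exp(-(w @ e + b_out)))                    # Eq. (4), binary output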
We then construct a trainable mask mt ∈ Rd, which will be applied on the features that are kept by ms. We introduce a novel trainable feature selection component that combines binary quantization of the mask together with modified elastic-net regularization. To train a binarized vector we resort to the straight-through estimator (Hinton, 2012; Hubara et al., 2017), which can be used effectively to train non-differentiable step functions such as a threshold or sign. The trick is to compute the step function exactly in the forward pass, and utilize a differentiable proxy in the backward pass. We use a version of the straight-through estimator for the sign function (Bengio et al., 2013),\nΦ(x) , { sign(x), forward pass; tanh(x), backward pass.\nUsing the estimator Φ(x), we define a differentiable binary threshold function T(x) = (1/2)Φ(|x| − ε) + 1/2, where ε ∈ R defines an epsilon neighborhood around zero for which the output of T(x) is zero, and one outside of this neighborhood (in all our experiments, we set ε = 1 and initialize the entries of mt above this threshold). We then apply this selection by diag(T(mt))W. Given a fixed stochastic selection ms, to train the binarized selection mt we employ regularization. Specifically, we consider a modified version of the elastic net regularization, R(mt,ms), which is tailored to our task. The modifications are reflected in two parts. First, the balancing between the L1 and L2 regularization is controlled by a trainable parameter α ∈ R. Second, the expressions of the L1 and L2 regularization are replaced by R1(mt,ms), R2(mt,ms), respectively (defined below). Moreover, since we want to take into account only features that were selected by the random component, the regularization is applied on the vector mts = mt ⊙ ms, where ⊙ is element-wise multiplication. The functional form of the modified elastic net regularization is as follows,\nR2(mt,ms) , |(||mts||2^2 / ||ms||1) − βε^2|, R1(mt,ms) , |(||mts||1 / ||ms||1) − βε|,\nR(mt,ms) , ((1 − σ(α))/2) R2(mt,ms) + σ(α) R1(mt,ms).\nThe above formulation of R2(·) and R1(·) is motivated as follows. First, we normalize both norms by dividing with the effective input dimension, ||ms||1, which is done to be invariant to the (effective) input size. Second, we define R2 and R1 as absolute errors, which encourages each entry to be, on average, approximately equal to the threshold ε. The reason is that the vector mt passes through a binary threshold, and thus the exact values of its entries are irrelevant. What is relevant is whether these values are within an epsilon neighborhood of zero or not. Thus, when the values are roughly equal to the threshold, it is more likely to converge to a balanced point where the regularization term is low and the relevant features are selected. The threshold term is controlled by β (a hyperparameter), which controls the cardinality of mt, where smaller values of β lead to a sparser mt. To summarize, feature selection is manifested by both architecture and loss. The architecture relies on the masks mt, ms, while the loss function uses R(mt,ms).\nFinally, the functional form of a DNNF block with the feature selection component is obtained by plugging the masks into Equation (2), L(x) , tanh ( xT diag(T(mt)) diag(ms)W + b ) ∈ Rm. Additionally, the mean of R(mt,ms) over all DNNFs is added to the loss function as a regularizer." }, { "heading": "2.4 SPATIAL LOCALIZATION", "text": "The last element we incorporate in the Net-DNF construction is spatial localization.
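Before describing the localization mechanism, here is a short PyTorch sketch of the trainable mask and the modified elastic-net penalty of Section 2.3 above; the helper names and the use of detach for the straight-through behavior are our own illustrative choices, not the authors' implementation.

    import torch

    def binary_threshold(m_t, eps=1.0):
        # T(x) = 0.5 * Phi(|x| - eps) + 0.5, with Phi = sign in the forward pass and
        # tanh in the backward pass (straight-through estimator via detach).
        s = m_t.abs() - eps
        soft = torch.tanh(s)
        hard = torch.sign(s)
        return 0.5 * (soft + (hard - soft).detach()) + 0.5

    def modified_elastic_net(m_t, m_s, alpha, beta, eps=1.0):
        # R(m_t, m_s) with a trainable balance sigma(alpha) between the L1 and L2 terms.
        m_ts = m_t * m_s
        d_eff = m_s.sum()                                   # effective input dimension ||m_s||_1
        r2 = ((m_ts ** 2).sum() / d_eff - beta * eps ** 2).abs()
        r1 = (m_ts.abs().sum() / d_eff - beta * eps).abs()
        a = torch.sigmoid(alpha)
        return (1 - a) / 2 * r2 + a * r1

    # The literal layer then multiplies W row-wise by binary_threshold(m_t) * m_s, and the
    # mean of R over all DNNFs is added to the training loss.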
This element encourages each DNNF unit in a Net-DNF ensemble to specialize in some focused proximity of the input domain. Localization is a well-known technique in classical machine learning, with various implementations and applications (Jacobs et al., 1991; Meir et al., 2000). On the one hand, localization allows construction of low-bias experts. On the other hand, it helps promote diversity, and reduction of the correlation between experts, which can improve the performance of an ensemble (Jacobs, 1997; Derbeko et al., 2002). We incorporate spatial localization by associating a Gaussian kernel loc(x|µ,Σ)i with a trainable mean vector µi and a trainable diagonal covariance matrix Σi for the ith DNNF. Given a Net-DNF with n DNNF blocks, the functional form of its embedding layer (Equation 3), with the spatial localization, is\nloc(x|µ,Σ) , [e−||Σ1(x−µ1)||2 , e−||Σ2(x−µ2)||2 , . . . , e−||Σn(x−µn)||2 ] ∈ Rn\nsm-loc(x|µ,Σ) , Softmax {loc(x|µ,Σ) · σ(τ)} ∈ (0, 1)n\nE(x) , [sm-loc(x|µ,Σ)1 ·DNNF1(x), . . . , sm-loc(x|µ,Σ)n ·DNNFn(x)],\nwhere τ ∈ R is a trainable parameter such that σ(τ) serves as the trainable temperature in the softmax. The inclusion of an adaptive temperature in this localization mechanism facilitates a data-dependent degree of exclusivity: at high temperatures, only a few DNNFs will handle an input instance whereas at low temperatures, more DNNFs will effectively participate in the ensemble. Observe that our localization mechanism is fully trainable and does not add any hyperparameters." }, { "heading": "3 DNFS AND TREES – A VC ANALYSIS", "text": "The basic unit in our construction is a (soft) DNF formula instead of a tree. Here we provide a theoretical perspective on this design choice. Specifically, we analyze the VC-dimension of Boolean DNF formulas and compare it to that of decision trees. With this analysis we gain some insight into the generalization ability of formulas and trees, and argue numerically that the generalization of a DNF can be superior to a tree when the input dimension is not small (and vice versa).\nThroughout this discussion, we consider binary classification problems whose instances are Boolean vectors in {0, 1}n. The first simple observation is that every decision tree has an equivalent DNF formula. Simply, each tree path from the root to a positively labeled leaf can be expressed by a conjunction of the conditions over the features appearing along the path to the leaf, and the whole tree can be represented by a disjunction of the resulting conjunctions. However, DNFs and decision trees are not equivalent, and we demonstrate that in the lense of VC-dimension. Simon (1990) presented an exact expression for the VC-dimension of decision trees as a function of the tree rank. Definition 1 (Rank). Consider a binary tree T . If T consists of a single node, its rank is defined as 0. If T consists of a root, a left subtree T0 of rank r0, and a right subtree T1 of rank r1, then\nrank(T ) = { 1 + r0 if r0 = r1 max{r0, r1} else\nClearly, for any decision tree T over n variables, 1 ≤ rank(T ) ≤ n. Also, it is not hard to see that a binary tree T has a rank greater than r iff the complete binary tree of depth r + 1 can be embedded into T . Theorem 1 (Simon (1990)). Let DT rn denote the class of decision trees of rank at most r on n Boolean variables. Then it holds that V CDim(DT rn) = ∑r i=0 ( n i ) .\nThe following theorem, whose proof appears in Appendix B, upper bounds the VC-dimension of a Boolean DNF formula. Theorem 2 (DNF VC-dimension bound). 
Let DNF kn be the class of DNF formulas with k conjunctions on n Boolean variables. Then it holds that V CDim(DNF kn ) ≤ 2(n+ 1)k log(3k).\nIt is evident that in the case of DNF formulas the upper bound on the VC-dimension grows linearly with the input dimension, whereas in the case of decision trees, if the rank is greater than 1, the VC-dimension grows polynomially (with degree at least 2) with the input dimension. In the worst case, this growth is exponential. A direct comparison of these dimensions is not trivial because there is a complex dependency between the rank r of a decision tree, and the number k of the conjunctions of an equivalent DNF formula. Even if we compare large-k DNF formulas to small-rank trees, it is clear that the VC-dimension of the trees can be significantly larger. For example, in Figure 1, we plot the upper bounds on the VC-dimension of large formulas (solid curves), and the exact VC-dimensions of small-rank trees (dashed curves). With the exception of rank-2 trees, the VC-dimension of decision trees dominates the dimension of DNFs, when the input dimension exceeds 100. Trees, however, may have an advantage over DNF formulas for low-dimensional inputs. Since the VC-dimension is a qualitative proxy of the sample complexity of a hypothesis class, the above analysis provides theoretical motivation for expressing trees using DNF formulas when the input dimension is not small. Having said that, the disclaimer is that in the present discussion we have only considered binary problems. Moreover, the final hypothesis classes of both Net-DNFs and GBDTs are more complex in structure." }, { "heading": "4 EMPIRICAL STUDY", "text": "In this section, we present an empirical study that substantiates the design of Net-DNFs and convincingly shows its significant advantage over FCN architectures. The datasets used in this study\nare from Kaggle competitions and OpenML (Vanschoren et al., 2014). A summary of these datasets appears in Appendix C. All results presented in this work were obtained using a massive grid search for optimizing each model’s hyperparameters. A detailed description of the grid search process with additional details can be found in Appendices D.1, D.2. We present the scores for each dataset according to the score function defined in the Kaggle competition we used, log-loss and area under ROC curve (AUC ROC) for multiclass datasets and binary datasets, respectively. All results are the mean of the test scores over five different partitions, and the standard error of the mean is reported.2\nIn addition, we also conducted a preliminary study of TabNet (Arik & Pfister, 2019) (see Section 5) over our datasets using its PyTorch implementation3, but failed to produce competitive results.4\nThe merit of the different Net-DNF components. We start with two different ablation studies, where we evaluate the contributions of the three Net-DNF components. In the first study, we start with a vanilla three-hidden-layer FCN and gradually add each component separately. In the second study, we start each experiment with the complete Net-DNF and leave one component out each time. In each study, we present the results on three real-world datasets, where all results are test log-loss scores (lower is better), out-of-memory (OOM) entries mean that the network was too large to execute on our machine (see Appendix D.2). More technical details can be found in Appendix D.4.\nConsider Table 1. In Exp 1 we start with a vanilla three-hidden-layer FCN with a tanh activation. 
To make a fair comparison, we defined the widths of the layers according to the widths in the Net-DNF with the corresponding formulas. In Exp 2, we added the DNF structure to the networks from Exp 1 (see Section 2.1). In Exp 3 we added the feature selection component (Section 2.3). It is evident that performance is monotonically improving, where the best results are clearly obtained on the complete Net-DNF (Exp 4). A subtle but important observation is that in all of the first three experiments, for all datasets, the trend is that the lower the number of formulas, the better the score. This trend is reversed in Exp 4, where the localization component (Section 2.4) is added, highlighting the importance of using all components of the Net-DNF representation in concert.\nNow consider Table 2. In Exp 5 we took the complete Net-DNF (Exp 4) and removed the feature selection component. When considering the Gesture Phase dataset, an interesting phenomenon is observed. In Exp 3 (128 formulas), we can see that the contribution of the feature selection component is negligible, but in Exp 5 (2048 formulas) we see the significant contribution of this component. We believe that the reason for this difference lies in the relationship of the feature selection component with the localization component, where this connection intensifies the contribution of the feature selection component. In Exp 6 we took the complete Net-DNF (Exp 4) and removed the localization component (identical to Exp 3). We did the same in Exp 7 where we removed the DNF structure. In general, it can be seen that removing each component results in a decrease in performance.\nAn analysis of the feature selection component. Having studied the contribution of the three components to Net-DNF, we now focus on the learnable part of the feature selection component (Section 2.3) alone, and examine its effectiveness using a series of synthetic tasks with a varying percentage of irrelevant features. Recall that when considering a single DNNF block, the feature\n2Our code is available at https://github.com/amramabutbul/DisjunctiveNormalFormNet. 3https://github.com/dreamquark-ai/tabnet 4For example, for the Gas Concentration dataset (see below), TabNet results were slightly inferior to the\nresults we obtained for XGBoost (4.89 log-loss for TabNet vs. 2.22 log-loss for XGBoost.\nselection is a learnable binary mask that multiplies the input element-wise. Here we examine the effect of this mask on a vanilla FCN network (see technical details in Appendix D.5). The synthetic tasks we use were introduced by Yoon et al. (2019); Chen et al. (2018), where they were used as synthetic experiments to test feature selection. There are six different dataset settings; exact specifications appear in Appendix D.5. For each dataset, we generated seven different instances that differ in their input size. While increasing the input dimension d, the same logit is used for prediction, so the new features are irrelevant, and as d gets larger, the percentage of relevant features becomes smaller.\nWe compare the performance of a vanilla FCN on three different cases: (1) oracle (ideal) feature selection (2) our (learned) feature selection mask, and (3) no feature selection. (See details in Appendix D.5). Consider the graphs in Figure 2, which demonstrate several interesting insights. In all tasks the performance of the vanilla FCN is sensitive to irrelevant features, probably due to the representation power of the FCN, which is prone to overfitting. 
On the other hand, by adding the feature selection component, we obtain near oracle performance on the first three tasks, and a significant improvement on the three others. Moreover, these results support our observation from the ablation studies: that the application of localization together with feature selection increases the latter’s contribution. We can see that in Syn1-3 where there is a single interaction, the results are better than in Syn4-6 where the input space is divided into two ‘local’ sub-spaces with different interactions. These experiments emphasize the importance of the learnable feature selection in itself.\nComparative Evaluation. Finally, we compare the performance of Net-DNF vs. the baselines. Consider Table 3 where we examine the performance of Net-DNFs on six real-life tabular datasets (We add three larger datasets to those we used in the ablation studies). We compare our performance to XGboost Chen & Guestrin (2016), the widely used implementation of GBDTs, and to FCNs. For each model, we optimized its critical hyperparameters. This optimization process required many computational resources: thousands of configurations have been tested for FCNs, hundreds of configurations for XGBoost, and only a few dozen for Net-DNF. A detailed description of the grid search we used for each model can be found in Appendix D.3. In Table 3, we see that Net-DNF consistently and significantly outperforms FCN over all the six datasets. While obtaining better than or indistinguishable results from XGBoost over two datasets, on the other datasets, Net-DNF is slightly inferior but in the same ball park as XGBoost." }, { "heading": "5 RELATED WORK", "text": "There have been a few attempts to construct neural networks with improved performance on tabular data. A recurring idea in some of these works is the explicit use of conventional decision tree induction algorithms, such as ID3 (Quinlan, 1979), or conventional forest methods, such as GBDT (Friedman, 2001) that are trained over the data at hand, and then parameters of the resulting decision trees are explicitly or implicitly “imported” into a neural network using teacher-student distillation (Ke et al., 2018), explicit embedding of tree paths in a specialized network architecture with some kind of DNF structure (Seyedhosseini & Tasdizen, 2015), and explicit utilization of forests as the main building block of layers (Feng et al., 2018). This reliance on conventional decision tree or forest methods as an integral part of the proposed solution prevents end-to-end neural optimization, as we propose here. This deficiency is not only a theoretical nuisance but also makes it hard to use such models on very large datasets and in combination with other neural modules.\nA few other recent techniques aimed to cope with tabular data using pure neural optimization as we propose here. Yang et al. (2018) considered a method to approximate a single node of a decision tree using a soft binning function that transforms continuous features into one-hot features. While this method obtained results comparable to a single decision tree and an FCN (with two hidden layers), it is limited to settings where the number of features is small. Popov et al. (2019) proposed a network that combines elements of oblivious decision forests with dense residual networks. While this method achieved better results than GBDTs on several datasets, also FCNs achieved better than or indistinguishable results from GBDTs on most of these cases as well. 
Arik & Pfister (2019) presented TabNet, a neural architecture for tabular data that implements feature selection via sequential attention that offers instance-wise feature selection. It is reported that TabNet achieved results that are comparative or superior to GBDTs. Both TabNet and Net-DNF rely on sparsity inducing and feature selection, which are implemented in different ways. While TabNet uses an attention mechanism to achieve feature selection, Net-DNF uses DNF formulas and elastic net regularization. Focusing on microbiome data, a recent study Shavitt & Segal (2018) presented an elegant regularization technique, which produces extremely sparse networks that are suitable for microbiome tabular datasets. Finally, soft masks for feature selection have been considered before and the advantage of using elastic net regularization in a variable selection task was presented by Zou & Hastie (2005); Li et al. (2016)." }, { "heading": "6 CONCLUSIONS", "text": "We introduced Net-DNF, a novel neural architecture whose inductive bias revolves around a disjunctive normal neural form, localization and feature selection. The importance of each of these elements has been demonstrated over real tabular data. The results of the empirical study convincingly indicate that Net-DNFs consistently outperform FCNs over tabular data. While Net-DNFs do not consistently beat XGBoost, our results indicate that their performance score is not far behind GBDTs. Thus, Net-DNF offers a meaningful step toward effective usability of processing tabular data with neural networks\nWe have left a number of potential incremental improvements and bigger challenges to future work. First, in our work we only considered classification problems. We expect Net-DNFs to also be effective in regression problems, and it would also be interesting to consider applications in reinforcement learning over finite discrete spaces. It would be very interesting to consider deeper Net-DNF architectures. For example, instead of a single DNNF block, one can construct a stack of such blocks to allow for more involved feature generation. Another interesting direction would be to consider training Net-DNFs using a gradient boosting procedure similar to that used in XGBoost. Finally, a most interesting challenge that remains open is what would constitute the ultimate inductive bias for tabular prediction tasks, which can elicit the best architectural designs for these data. Our successful application of DNNFs indicates that soft DNF formulas are quite effective, and are strictly significantly superior to fully connected networks, but we anticipate that further effective biases will be identified, at least for some families of tabular tasks." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was partially supported by the Israel Science Foundation, grant No. 710/18." }, { "heading": "A OR AND AND GATES", "text": "The (soft) neural OR and AND gates were defined as\nOR(x) , tanh ( d∑ i=1 xi + d− 1.5 ) , AND(x) , tanh ( d∑ i=1 xi − d+ 1.5 ) .\nBy replacing the tanh activation with a sign activation, and setting the bias term to 1 (instead of 1.5), we obtain exact binary gates,\nOR(x) , sign ( d∑ i=1 xi + d− 1 ) , AND(x) , sign ( d∑ i=1 xi − d+ 1 ) .\nConsider a binary vector x ∈ {±1}d. We prove that\nAND(x) ≡ d∧ i=1 xi,\nwhere, in the definition of the logical “and”, −1 is equivalent to 0. If for any 1 ≤ i ≤ d, xi = 1, then ∧di=1xi = 1. 
Conversely, we have,\nAND(x) = d∑ i=1 xi − d+ 1 = d− d+ 1 = 1,\nand the application of the sign activation yields 1. In the case of the soft neural AND gate, we get tanh(1) ≈ 0.76; therefore, we set the bias term to 1.5 to get an output closer to 1 (tanh(1.5) ≈ 0.9). Otherwise, there exists at least one index 1 ≤ j ≤ d, such that xj = −1, and ∧di=1xi = −1. In this case,\nAND(x) = d∑ i=1 xi − d+ 1 = xj + ∑ i6=j xi − d+ 1 ≤ −1 + (d− 1)− d+ 1 = −1,\nand by applying the sign activation we obtain −1. This proves that the AND(x) neuron is equivalent to a logical “AND” gate in the binary case. A very similar proof shows that\nOR(x) ≡ d∨ i=1 xi." }, { "heading": "B PROOF OF THEOREM 2", "text": "We bound the VC-dimension of a DNF formula in two steps. First, we derive an upper bound on the VC-dimension of a single conjunction, and then extend it to a disjunction of k conjunctions. We use the following simple lemma. Lemma 1. For every two hypothesis classes, H ′ ⊆ H , it holds that V CDim(H ′) ≤ V CDim(H).\nProof. Let d = V CDim(H ′). By definition, there exist d points that can be shattered by H ′. Therefore, there exist 2d hypotheses {h′i}2 d i=1 in H ′, which shatter these points. By assumption, {h′i}2 d i=1 ⊆ H , so V CDim(H) ≥ d.\nFor any conjunction on n Boolean variables (regardless of the number of literals), it is possible to construct an equivalent decision tree of rank 1. The construction is straightforward. If ∧` i=1 xi is the conjunction, the decision tree consists of a single main branch of ` internal decision nodes connected sequentially. Each left child in this tree corresponds to decision “1”, and each right child corresponds to decision “0”. The root is indexed 1 and contains the literal x1. For 1 ≤ i < `, internal node i contains the decision literal xi and its left child is node i+ 1 (whose decision literal is xi+1). See the example in Figure 3.\nIt follows that the hypothesis class of conjunctions is contained in the class of rank-1 decision trees. Therefore, by Lemma 1 and Theorem 1, the VC-dimension of conjunctions is bounded above by n+ 1.\nWe now derive the upper bound on the VC-dimension of a disjunction of k conjunctions. Let C be the class of conjunctions, and let Dk(C) be the class of a disjunction of k conjunctions. Clearly, Dk(C) is a k-fold union of the class C, namely,\nDk(C) = { k⋃ i=0 ci |ci ∈ C } .\nBy Lemma 3.2.3 in (Blummer et al. 1989), if d = V CDim(C), then for all k ≥ 1, V CDim(Dk(C)) ≤ 2dk log(3k). Therefore, for the class DNF kn , of DNF formulas with k conjunctions on n Boolean variables, we have\nV CDim(DNF kn ) ≤ 2(n+ 1)k log(3k)." }, { "heading": "C TABULAR DATASET DESCRIPTION", "text": "We use datasets (See Table 4) that differ in several aspects such as in the number of features (from 16 up to 200), the number of classes (from 2 up to 9), and the number of samples (from 10k up to 200k). To keep things simple, we selected datasets with no missing values, and that do not require preprocessing. All models were trained on the raw data without any feature or data engineering and without any kind of data balancing or weighting. Only feature wise standardization was applied." }, { "heading": "D EXPERIMENTAL PROTOCOL", "text": "" }, { "heading": "D.1 DATA PARTITION AND GRID SEARCH PROCEDURE", "text": "All experiments in our work, using both synthetic and real datasets, were done through a grid search process. Each dataset was first randomly divided into five folds in a way that preserved the original distribution. 
Then, based on these five folds, we created five partitions of the dataset as follows. Each fold is used as the test set in one of the partitions, while the other folds are used as the training and validation sets. This way, each partition was 20% test, 10% validation, and 70% training. This division was done once 5, and the same partitions were used for all models. Based on these partitions, the following grid search process was repeated three times with three different seeds6 (with the exact same five partitions as described before).\nAlgorithm 1: Grid Search Procedure Input: model, configurations_list results_list = [ ] for i=1 to n_partitions do\nval_scores_list = [ ] test_scores_list = [ ] train, val, test = read_data(partition_index=i) for c in configurations_list do\ntrained_model = model.train(train_data=train, val_data=val, configuration=c) trained_model.load_weights_from_best_epoch() val_score = trained_model.predict(data=val) test_score = trained_model.predict(data=test) val_scores_list.append(val_score) test_scores_list.append(test_score)\nend best_val_index = get_index_of_best_val_score(val_scores_list) test_res = test_scores_list[best_val_index] results_list.append(test_res)\nend mean = mean(results_list) sem = standard_error_of_the_mean(results_list) Return: mean, sem\nThe final mean and sem7 that we presents in all experiments are the average across the three seeds. Additionally, as can be seen from Algorithm 1, the model that was trained on the training set (70%) is the one that is used to evaluate performance on the test set (20%). This was done to keep things simple. The loading wights command is relevant for the neural network models. While for the XGBoost, the framework handles the optimal number of estimators on prediction time (accordingly to early stopping on training time)." }, { "heading": "D.2 TRAINING PROTOCOL", "text": "The Net-DNF and the FCN were implemented using Tesnorflow. To make a fair comparison, for both models, we used the same batch size8 of 2048, and the same learning rate scheduler (reduce on plateau) that monitors the training loss. We set a maximum of 1000 epochs and used the same early stopping protocol (30 epochs) that monitors the validation score. Moreover, for both of them, we used the same loss function (softmax-cross-entropy for multi-class datasets and sigmoid-cross-entropy for binary datasets) and the same optimizer (Adam with default parameters).\n5We used seed number 1. 6We used seed numbers 1, 2, 3. 7For details, see: docs.scipy.org/doc/scipy/reference/generated/scipy.stats.sem.html 8For Net-DNF , when using 3072 formulas, we set the batch size to 1024 on the Santander Transaction and Gas datasets and when using 2048 formulas, we set the batch size to 1024 on the Santander Transaction dataset. This was done due to memory issues.\nFor Net-DNF we used an initial learning rate of 0.05. For FCN, we added the initial learning rate to the grid search with values of {0.05, 0.005, 0.0005}. For XGBoost we set the maximal number of estimators to be 2500, and used an early stopping of 50 estimators that monitors the validation score.\nAll models were trained on GPUs - Titan Xp 12GB RAM.\nAdditionally, in the case of Net-DNF, we took a symmetry-breaking approach between the different DNNFs. 
This is reflected by the DNNF group being divided equally into four subgroups where, for each subgroup, the number of conjunctions is equal to one of the following values [6, 9, 12, 15], and the group of conjunctions of each DNNF was divided equally into three subgroups where, for each subgroup, the conjunction length is equal to one of the following values [2, 4, 6]. The same approach was used for the parameter p of the random mask. The DNNF group was divided equally into five subgroups where, for each subgroup, p is equal to one of the following values [0.1, 0.3, 0.5, 0.7, 0.9]. In all experiments we used the same values." }, { "heading": "D.3 GRID PARAMETERS – TABULAR DATASETS", "text": "D.3.1 NET-DNF" }, { "heading": "Net-DNF (42 configs)", "text": "hyperparameter values\nnumber of formulas {64, 128, 256, 512, 1024, 2048, 3072} feature selection beta {1.6, 1.3, 1., 0.7, 0.4, 0.1}\nD.3.2 XGBOOST" }, { "heading": "XGBoost (864 configs)", "text": "hyperparameter values\nnumber of estimators {2500} learning rate {0.001, 0.005, 0.01, 0.05, 0.1, 0.5} max depth {2, 3, 4, 5, 7, 9, 11, 13, 15}\ncolsample by tree {0.25, 0.5, 0.75, 1.} sub sample {0.25, 0.5, 0.75, 1.}\nTo summarize, we performed a crude but broad selection (among 42 hyper-parameter configurations) for our Net-DNF. Results were quite strong, so we avoided further fine tuning. To ensure extra fairness w.r.t. the baselines, we provided them with significantly more hyper-parameter tuning resources (864 configurations for XGBoost, and 3300 configurations for FCNs)." }, { "heading": "D.3.3 FULLY CONNECTED NETWORKS", "text": "The FCN networks are constructed using Dense-RELU-Dropout blocks with L2 regularization. The network’s blocks are defined in the following way. Given depth and width parameters, we examine two different configurations: (1) the same width is used for the entire network (e.g., if the width is 512 and the depth is four, then the network blocks are [512, 512, 512, 512]), and (2) the width parameter defines the width of the first block, and the subsequent blocks are reduced by a factor of 2 (e.g., if the width is 512 and the depth is four, then the network blocks are [512, 256, 128, 64]). On top of the last block we add a simple linear layer that reduce the dimension into the output dimension. The dropout and L2 values are the same for all blocks.\nFCN (3300 configs) hyperparameter values\ndepth {1, 2, 3, 4, 5, 6} width {128, 256, 512, 1024, 2048}\nL2 lambda {10−2, 10−4, 10−6, 10−8, 0.} dropout {0., 0.25, 0.5, 0.75}\ninitial learning rate {0.05, 0.005, 0.0005}" }, { "heading": "D.4 ABLATION STUDY", "text": "All ablation studies experiments were conducted using the grid search process as described in D.1. In all experiments, we used the same training details as described on D.2 for Net-DNF. Where the only difference between the different experiments is the addition or removal of the components.\nThe single hyperparameter that was fine-tuned using the grid search is the ‘feature selection beta’ on the range {1.6, 1.3, 1., 0.7, 0.4, 0.1}, in experiments in which the feature selection component is involved. In the other cases, only one configuration was tested in the grid search process for a specific number of formulas." }, { "heading": "D.5 FEATURE SELECTION ANALYSIS", "text": "The input features x ∈ Rd of all six datasets were generated from a d-dimensional Gaussian distribution with no correlation across the features, x ∼ N(0, I). 
The label y is sampled as a Bernoulli random variable with $P(y = 1|x) = \frac{1}{1+\mathrm{logit}(x)}$, where logit(x) is varied to create the different synthetic datasets ($x_i$ refers to the $i$th entry):\n1. Syn1: $\mathrm{logit}(x) = \exp(x_1 x_2)$ 2. Syn2: $\mathrm{logit}(x) = \exp(\sum_{i=3}^{6} x_i^2 - 4)$ 3. Syn3: $\mathrm{logit}(x) = -10\sin(2x_7) + 2|x_8| + x_9 + \exp(-x_{10}) - 2.4$ 4. Syn4: if $x_{11} < 0$, logit follows Syn1, else logit follows Syn2 5. Syn5: if $x_{11} < 0$, logit follows Syn1, else logit follows Syn3 6. Syn6: if $x_{11} < 0$, logit follows Syn2, else logit follows Syn3 (a short NumPy sketch of this generation procedure is given at the end of this section).\nWe compare the performance of a basic FCN in three different cases: (1) oracle (ideal) feature selection – where the input feature vector is multiplied element-wise with an oracle mask whose $i$th entry equals 1 iff the $i$th feature is relevant (e.g., on Syn1, features 1 and 2 are relevant, and on Syn4, features 1-6 and 11 are relevant), (2) our (learned) feature selection mask – where the input feature vector is multiplied element-wise with the mask mt, i.e., the entries of the mask ms (see Section 2.3) are all fixed to 1, and (3) no feature selection.\nFrom each dataset, we generated seven different instances that differ in their input size, d ∈ [11, 50, 100, 150, 200, 250, 300]. As the input dimension d increases, the same logit function is used, so the added features are irrelevant. Each instance contains 10k samples that were partitioned as described in Section D.1. We treated each instance as an independent dataset, and the grid search process described in Section D.1 was run for each one.\nThe FCN that we used has two dense hidden layers [64, 32] with a RELU activation. To keep things simple, we did not use dropout or any other kind of regularization. The same training protocol was used for all three models. We used the same learning rate scheduler, early stopping protocol, loss function, and optimizer as in Section D.2 (see footnote 9). We use a batch size of 256 and an initial learning rate of 0.001. The only hyperparameter that was fine-tuned is the ‘feature selection beta’ in the case of ‘FCN with feature selection’, on the range {1.3, 1., 0.7, 0.4}. For the two other models, only a single configuration was tested in the grid search process.\n9We noticed that in this scenario, a large learning rate or large batch size leads to a decline in the performance of the ’FCN with feature selection’, while the simple FCN and the ’FCN with oracle mask’ remain approximately the same." } ]
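As a companion to the synthetic-data description in Appendix D.5 above, here is a minimal sketch, assuming NumPy, of how the Syn1–Syn3 datasets could be generated. Only the Syn1–Syn3 logits are shown (Syn4–Syn6 switch between them based on the sign of x11); the function name make_synthetic and the clipping of the label probability are our own additions, not part of the original protocol.

import numpy as np

def make_synthetic(syn, n=10000, d=11, seed=0):
    # x ~ N(0, I): d-dimensional Gaussian features with no correlation
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    if syn == 1:                                   # Syn1: exp(x1 * x2)
        logit = np.exp(x[:, 0] * x[:, 1])
    elif syn == 2:                                 # Syn2: exp(sum_{i=3..6} x_i^2 - 4)
        logit = np.exp((x[:, 2:6] ** 2).sum(axis=1) - 4.0)
    else:                                          # Syn3: -10 sin(2 x7) + 2|x8| + x9 + exp(-x10) - 2.4
        logit = (-10.0 * np.sin(2.0 * x[:, 6]) + 2.0 * np.abs(x[:, 7])
                 + x[:, 8] + np.exp(-x[:, 9]) - 2.4)
    # P(y = 1 | x) = 1 / (1 + logit(x)); clipping is a safeguard we add for Syn3,
    # whose logit can be negative and push the probability outside [0, 1]
    p = np.clip(1.0 / (1.0 + logit), 0.0, 1.0)
    y = rng.binomial(1, p)                         # Bernoulli labels
    return x, y

Increasing d beyond 11 simply appends more irrelevant Gaussian features while the same logit is reused, matching the setup described above.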
2021
NET-DNF: EFFECTIVE DEEP MODELING OF TABULAR DATA
SP:9962a592fe8663bbcfe752b83aa9b666fe3a9456
[ "The paper suggests an improvement over double-Q learning by applying the control variates technique to the target Q, in the form of $(q1 - \\beta (q2 - E(q2))$ (eqn (8)). To minimize the variance, it suggests minimizing the correlation between $q1$ and $q2$. In addition, it applies the TD3 trick. The resulting algorithm, D2Q, outperforms DDPG and competes with TD3." ]
Q-learning with value function approximation may perform poorly because of overestimation bias and imprecise estimates. Specifically, overestimation bias arises from the maximum operator applied to noisy estimates, and the error is exacerbated when bootstrapping on the estimate of a subsequent state. Inspired by recent advances in deep reinforcement learning and Double Q-learning, we introduce decorrelated double Q-learning (D2Q). Specifically, we introduce a Q-value function that utilizes control variates together with a decorrelation regularizer that reduces the correlation between the value function approximators, which leads to less biased estimation and lower variance. Experimental results on a suite of MuJoCo continuous control tasks demonstrate that decorrelated double Q-learning can effectively improve performance.
[]
[ { "authors": [ "Oron Anschel", "Nir Baram", "Nahum Shimkin" ], "title": "Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "In ICML, volume 80 of JMLR Workshop and Conference Proceedings,", "year": 2018 }, { "authors": [ "Evan Greensmith", "Peter L. Bartlett", "Jonathan Baxter" ], "title": "Variance reduction techniques for gradient estimates in reinforcement learning", "venue": "In Journal of Machine Learning Research,", "year": 2001 }, { "authors": [ "S. Gu", "T. Lillicrap", "R.E. Turner", "Z. Ghahramani", "B. Schölkopf", "S. Levine" ], "title": "Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In ICML,", "year": 2018 }, { "authors": [ "T. Jaakkola", "M.I. Jordan", "S.P. Singh" ], "title": "On the convergence of stochastic iterative dynamic programming algorithms", "venue": "Neural Computation,", "year": 1994 }, { "authors": [ "Vijay R. Konda", "John N. Tsitsiklis" ], "title": "Actor-critic algorithms. In Advances in Neural Information Processing Systems, pp. 1008–1014", "venue": null, "year": 1999 }, { "authors": [ "Qingfeng Lan", "Yangchen Pan", "Alona Fyshe", "Martha White" ], "title": "Maxmin q-learning: Controlling the estimation bias of q-learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "CoRR, abs/1509.02971,", "year": 2015 }, { "authors": [ "Hao Liu", "Yihao Feng", "Yi Mao", "Dengyong Zhou", "Jian Peng", "Qiang Liu" ], "title": "Action-dependent control variates for policy optimization via stein identity", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "In NIPS Deep Learning Workshop", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Adrià Puigdomènech Badia", "Mehdi Mirza", "Alex Graves", "Tim Harley", "Timothy P. Lillicrap", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "David Silver", "Guy Lever" ], "title": "Deterministic policy gradient algorithms", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Alexander L. 
Strehl", "Lihong Li", "Eric Wiewiora", "John Langford", "Michael L. Littman" ], "title": "Pac modelfree reinforcement learning", "venue": "Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement learning - an introduction. Adaptive computation and machine learning", "venue": null, "year": 1998 }, { "authors": [ "Sebastian Thrun", "Anton Schwartz" ], "title": "Issues in using function approximation for reinforcement learning", "venue": "Proceedings of the 1993 Connectionist Models Summer School,", "year": 1993 }, { "authors": [ "Hado van Hasselt" ], "title": "Double q-learning", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Ian H. Witten" ], "title": "An adaptive optimal controller for discrete-time markov environments", "venue": "Information and Control,", "year": 1977 } ]
[ { "heading": "1 INTRODUCTION", "text": "Q-learning Watkins & Dayan (1992) as a model free reinforcement learning approach has gained popularity, especially under the advance of deep neural networks Mnih et al. (2013). In general, it combines the neural network approximators with the actor-critic architectures Witten (1977); Konda & Tsitsiklis (1999), which has an actor network to control how the agent behaves and a critic to evaluate how good the action taken is.\nThe Deep Q-Network (DQN) algorithm Mnih et al. (2013) firstly applied the deep neural network to approximate the action-value function in Q-learning and shown remarkably good and stable results by introducing a target network and Experience Replay buffer to stabilize the training. Lillicrap et al. proposes DDPG Lillicrap et al. (2015), which extends Q-learning to handle continuous action space with target networks. Except the training stability, another issue Q-learning suffered is overestimation bias, which was first investigated in Thrun & Schwartz (1993). Because of the noise in function approximation, the maximum operator in Q-learning can lead to overestimation of state-action values. And, the overestimation property is also observed in deterministic continuous policy control Silver & Lever (2014). In particular, with the imprecise function approximation, the maximization of a noisy value will induce overestimation to the action value function. This inaccuracy could be even worse (e.g. error accumulation) under temporal difference learning Sutton & Barto (1998), in which bootstrapping method is used to update the value function using the estimate of a subsequent state.\nGiven overestimation bias caused by maximum operator of noise estimate, many methods have been proposed to address this issue. Double Q-learning van Hasselt (2010) mitigates the overestimation effect by introducing two independently critics to estimate the maximum value of a set of stochastic values. Averaged-DQN Anschel et al. (2017) takes the average of previously learned Q-values estimates, which results in a more stable training procedure, as well as reduces approximation error variance in the target values. Recently, Twin Delayed Deep Deterministic Policy Gradients (TD3) Fujimoto et al. (2018) extends the Double Q-learning, by using the minimum of two critics to limit the overestimated bias in actor-critic network. A soft Q-learning algorithm Haarnoja et al. (2018), called soft actor-critic, leverages the similar strategy as TD3, while including the maximum entropy to balance exploration and exploitation. Maxmin Q-learning Lan et al. (2020) proposes the use of an ensembling scheme to handle overestimation bias in Q-Learning.\nThis work suggests an alternative solution to the overestimation phenomena, called decorrelated double Q-learning, based on reducing the noise estimate in Q-values. On the one hand, we want to make the two value function approximators as independent as possible to mitigate overestima-\ntion bias. On the other hand, we should reduce the variance caused by imprecise estimate. Our decorrelated double Q-learning proposes an objective function to minimize the correlation of two critics, and meanwhile reduces the target approximation error variance with control variate methods. Finally, we provide experimental results on MuJoCo games and show significant improvement compared to competitive baselines.\nThe paper is organized as follows. 
In Section 2, we introduce reinforcement learning problems, notations and two existed Q-learning variants to address overestimation bias. Then we present our D2Q algorithm in Section 3 and also prove that in the limit, this algorithm converges to the optimal solution. In Section 4 we show the experimental results on MuJoCo continuous control tasks, and compare it to the current state of the art. Some related work and discussion is presented in Section 5 and finally Section 6 concludes the paper." }, { "heading": "2 BACKGROUND", "text": "In this section, we introduce the reinforcement learning problems and Q-learning, as well as notions that will be used in the following sections." }, { "heading": "2.1 PROBLEM SETTING AND NOTATIONS", "text": "We consider the model-free reinforcement learning problem (i.e. optimal policy existed) with sequential interactions between an agent and its environment Sutton & Barto (1998) in order to maximize a cumulative return. At every time step t, the agent selects an action at in the state st according its policy and receives a scalar reward rt(st, at), and then transit to the next state st+1. The problem is modeled as Markov decision process (MDP) with tuple: (S,A, p(s0), p(st+1|st, at), r(st, at), γ). Here, S and A indicate the state and action space respectively, p(s0) is the initial state distribution. p(st+1|st, at) is the state transition probability to st+1 given the current state st and action at, r(st, at) is reward from the environment after the agent taking action at in state st and γ is discount factor, which is necessary to decay the future rewards ensuring finite returns. We model the agent’s behavior with πθ(a|s), which is a parametric distribution from a neural network. Suppose we have the finite length trajectory while the agent interacting with the environment. The return under the policy π for a trajectory τ = (st, at) T t=0\nJ(θ) = Eτ∼πθ(τ)[r(τ)] = Eτ∼πθ(τ)[R T 0 ]\n= Eτ∼πθ(τ)[ T∑ t=0 γtr(st, at)] (1)\nwhere πθ(τ) denotes the distribution of trajectories, p(τ) = π(s0, a0, s1, ..., sT , aT )\n= p(s0) T∏ t=0 πθ(at|st)p(st+1|st, at) (2)\nThe goal of reinforcement learning is to learn a policy π which can maximize the expected returns\nθ = arg max θ\nJ(θ) = arg maxEτ∼πθ(τ)[R T 0 ] (3)\nThe action-value function describes what the expected return of the agent is in state s and action a under the policy π. The advantage of action value function is to make actions explicit, so we can select actions even in the model-free environment. After taking an action at in state st and thereafter following policy π, the action value function is formatted as:\nQπ(st, at) = Esi∼pπ,ai∼π[Rt|st, at] = Esi∼pπ,ai∼π[ T∑ i=t γ(i−t)r(si, ai)|st, at] (4)\nTo get the optimal value function, we can use the maximum over actions, denoted as Q∗(st, at) = maxπ Q\nπ(st, at), and the corresponding optimal policy π can be easily derived by π∗(s) ∈ arg maxat Q ∗(st, at)." }, { "heading": "2.2 Q-LEARNING", "text": "Q-learning, as an off-policy RL algorithm, has been extensively studied since it was proposed Watkins & Dayan (1992). Suppose we use neural network parametrized by θQ to approximate Q-value in the continuous environment. To update Q-value function, we minimize the follow loss:\nL(θQ) = Esi∼pπ,ai∼π[(Q(st, at; θQ)− yt)2] (5)\nwhere yt = r(st, at) + γmaxat+1 Q(st+1, at+1; θ Q) is from Bellman equation, and its action at+1 is taken from frozen policy network (actor) to stabilizing the learning. 
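To make the update in Eq. 5 concrete, the following is a minimal sketch, assuming PyTorch, of the critic loss with the bootstrapped target y_t. The callables critic, target_critic, and target_actor, the batch layout, and the use of separate frozen target networks (the DQN/DDPG convention mentioned above) are assumptions of this sketch rather than the paper's reference implementation.

import torch
import torch.nn.functional as F

def critic_loss(critic, target_critic, target_actor, batch, gamma=0.99):
    s, a, r, s_next, done = batch                  # tensors sampled from a replay buffer
    with torch.no_grad():                          # y_t is built from frozen target networks
        a_next = target_actor(s_next)              # action a_{t+1} from the frozen policy (actor)
        y = r + gamma * (1.0 - done) * target_critic(s_next, a_next)
    q = critic(s, a)                               # Q(s_t, a_t; theta^Q)
    return F.mse_loss(q, y)                        # squared loss of Eq. 5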
In actor-critic methods, the policy $\pi : S \mapsto A$, known as the actor with parameters $\theta^\pi$, can be updated through the chain rule of the deterministic policy gradient algorithm Silver & Lever (2014)\n$\nabla J(\theta^\pi) = E_{s\sim p^\pi}[\nabla_a Q(s, a; \theta^Q)|_{a=\pi(s;\theta^\pi)} \nabla_{\theta^\pi} \pi(s; \theta^\pi)]$ (6)\nwhere Q(s, a) is the expected return when taking action a in state s and following π thereafter.\nOne issue that has attracted great attention is overestimation bias, which, if left unchecked, may compound into a more significant bias over subsequent updates. Moreover, an inaccurate value estimate may lead to poor policy updates. To address it, Double Q-learning van Hasselt (2010) uses two independent critics $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$, where policy selection uses a different critic network than value estimation:\n$q_1(s_t, a_t) = r(s_t, a_t) + \gamma q_2(s_{t+1}, \arg\max_{a_{t+1}} q_1(s_{t+1}, a_{t+1}; \theta^{q_1}); \theta^{q_2})$\n$q_2(s_t, a_t) = r(s_t, a_t) + \gamma q_1(s_{t+1}, \arg\max_{a_{t+1}} q_2(s_{t+1}, a_{t+1}; \theta^{q_2}); \theta^{q_1})$\nRecently, TD3 Fujimoto et al. (2018) uses two similar q-value functions, but takes the minimum of them:\n$y_t = r(s_t, a_t) + \gamma \min\big(q_1(s_{t+1}, \pi(s_{t+1})), q_2(s_{t+1}, \pi(s_{t+1}))\big)$ (7)\nThen the same squared loss in Eq. 5 can be used to learn the model parameters." }, { "heading": "3 DECORRELATED DOUBLE Q-LEARNING", "text": "In this section, we present Decorrelated Double Q-learning (D2Q) for continuous action control, which attempts to address overestimation bias. Similar to Double Q-learning, we use two q-value functions to approximate $Q(s_t, a_t)$. Our main contribution is to borrow the idea of control variates to decorrelate these two value functions, which can further reduce the overestimation risk." }, { "heading": "3.1 Q-VALUE FUNCTION", "text": "Suppose we have two approximators $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$; D2Q uses the weighted difference of the two q-value functions to approximate the action-value function at $(s_t, a_t)$. Thus, we define the Q-value as follows:\n$Q(s_t, a_t) = q_1(s_t, a_t) - \beta\big(q_2(s_t, a_t) - E(q_2(s_t, a_t))\big)$ (8)\nwhere $q_2(s_t, a_t) - E(q_2(s_t, a_t))$ models the noise in state $s_t$ and action $a_t$, and $\beta$ is the correlation coefficient of $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$. The expectation $E(q_2(s_t, a_t))$ is the average over all possible runs. Thus, the weighted difference between $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ attempts to reduce the variance and remove the noise effects in Q-learning.\nTo update $q_1$ and $q_2$, we minimize the following loss:\n$L(\theta^Q) = E_{s_i\sim p^\pi, a_i\sim\pi}[(q_1(s_t, a_t; \theta^{q_1}) - y_t)^2] + E_{s_i\sim p^\pi, a_i\sim\pi}[(q_2(s_t, a_t; \theta^{q_2}) - y_t)^2] + \lambda E_{s_i\sim p^\pi, a_i\sim\pi}[\mathrm{corr}(q_1(s_t, a_t; \theta^{q_1}), q_2(s_t, a_t; \theta^{q_2}))]^2$ (9)\nwhere $\theta^Q = \{\theta^{q_1}, \theta^{q_2}\}$, and $y_t$ can be defined as\n$y_t = r(s_t, a_t) + \gamma Q(s_{t+1}, a_{t+1})$ (10)\nwhere $Q(s_{t+1}, a_{t+1})$ is the action-value function defined in Eq. 8, which decorrelates $q_1(s_{t+1}, a_{t+1})$ and $q_2(s_{t+1}, a_{t+1})$, both taken from the frozen target networks. In addition, we want these two q-value functions to be as independent as possible. Thus, we introduce $\mathrm{corr}(q_1(s_t, a_t; \theta^{q_1}), q_2(s_t, a_t; \theta^{q_2}))$, which measures the similarity between the two q-value approximators. In our experiments, the target in Eq. 10 gives good results on HalfCheetah, but it does not perform well on the other MuJoCo tasks.\nTo stabilize the target value, we take the minimum of $Q(s_{t+1}, a_{t+1})$ and $q_2(s_{t+1}, a_{t+1})$ in Eq. 10, as in TD3 Fujimoto et al. (2018). This gives the target update of the D2Q algorithm:\n$y_t = r(s_t, a_t) + \gamma \min(Q(s_{t+1}, a_{t+1}), q_2(s_{t+1}, a_{t+1}))$ (11)\nThe action $a_{t+1}$ comes from the policy, $a_{t+1} = \pi(s_{t+1}; \theta^\pi)$, which is trained with a policy gradient similar to Eq. 6. 
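As a rough illustration of Eqs. 8 and 11, the sketch below, assuming PyTorch, computes the D2Q target from the two target critics. The batch-mean approximation of E(q_2), the target-policy smoothing constants, and all function names are assumptions of this sketch, not the paper's reference implementation.

import torch

def d2q_target(q1_target, q2_target, actor_target, r, s_next, done, beta,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    with torch.no_grad():
        mu = actor_target(s_next)                                  # pi(s_{t+1}) from the frozen actor
        noise = (torch.randn_like(mu) * noise_std).clamp(-noise_clip, noise_clip)
        a_next = (mu + noise).clamp(-act_limit, act_limit)         # smoothed target action
        q1 = q1_target(s_next, a_next)
        q2 = q2_target(s_next, a_next)
        # Eq. 8: Q = q1 - beta * (q2 - E[q2]); E[q2] is approximated by the batch mean here
        Q = q1 - beta * (q2 - q2.mean())
        # Eq. 11: clipped target, taking the minimum of Q and q2 as in TD3
        y = r + gamma * (1.0 - done) * torch.min(Q, q2)
    return y

In practice beta itself is set from the measured similarity of the two critics (Section 3.2), so it would be recomputed at each update.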
Our D2Q leverages the parametric actor-critic algorithm, which maintains two q-value approixmators and a single actor. Thus, the loss in Eq. 9 tries to minimize the three terms below, as\ncorr(q1(st, at; θ q1), q2(st, at; θ q2))→ 0 q1(st, at; θ\nq1)→ yt q2(st, at; θ q2)→ yt\nAt each time step, we update the pair of critics towards the minimum target value in Eq. 11, while reducing the correlation between them. The purposes that we introduce control variate q2(st, at) are following: (1) Since we use q2(st, at)− E(q2(st, at)) to model noise, if there is no noise, such that q2(st, at) − E(q2(st, at)) = 0, then we have yt = r(st, at) + min(Qπ(st, at), q2(st, at)) = r(st, at) + min(q1(st, at), q2(st, at)) via Eq. 11, which is exactly the same as TD3. (2) In fact, because of the noise in value estimate, we have q2(st, at) − E(q2(st, at)) 6= 0. The purpose we introduce q2(st, at) is to mitigate overestimate bias in Q-learning. The control variate introduced by q2(st, at) will reduce the variance of Q(st, at) to stabilize the learning of value function.\nConvergence analysis: we claim that our D2Q algorithm is to converge the optimal in the finite MDP settings. There is existed theorem in Jaakkola et al. (1994), given the random process {∆t} taking value in Rn and defined as\n∆t+1(st, at) = (1− αt(st, at))∆t(st, at) + αt(st, at)Ft(st, at) (12)\nThen ∆t converges to zero with probability 1 under the following assumptions: 1. 0 < αt < 1, ∑ t αt(x) =∞ and ∑ t α 2 t (x) <∞\n2. ||E[Ft(x)|Ft]||W ≤ γ||∆t||W + ct with 0 < γ < 1 and ct p→ 0 = 1\n3. var[Ft(x)|Ft] ≤ C(1 + ||∆t||2W ) for C > 0\nwhere Ft is a sequence of increasing σ-field such that αt(st, at) and ∆t are Ft measurable for t = 1, 2, ....\nBased on the theorem above, we provide sketch of proof which borrows heavily from the proof of convergence of Double Q-learning and TD3 as below: Firstly, the learning rate αt satisfies the condition 1. Secondly, variance of r(st, at) is limit, so condition 3 holds. Finally, we will prove that condition 2 holds below.\n∆t+1(st, at) = (1− αt(st, at))(Q(st, at)−Q∗(st, at)) + αt(st, at) ( rt + γmin(Q(st, at), q2(st, at))−Q∗(st, at) ) = (1− αt(st, at))∆t(st, at) + αt(st, at)Ft(st, at) (13)\nwhere Ft(st, at) is defined as:\nFt(st, at) = rt + γmin(Q(st, at), q2(st, at))−Q∗(st, at) = rt + γmin(Q(st, at), q2(st, at))−Q∗(st, at) + γQ(st, at)− γQ(st, at) = rt + γQ(st, at)−Q∗(st, at) + γmin(Q(st, at), q2(st, at))− γQ(st, at) = FQt (st, at) + ct (14)\nSince we have E[FQt (st, at)|Ft] ≤ γ||∆t|| under Q-learning, so the condition 2 holds. Then we need to prove ct = min(Q(st, at), q2(st, at))−Q(st, at) converges to 0 with probability 1.\nmin(Q(st, at), q2(st, at))−Q(st, at) = min(Q(st, at), q2(st, at))− q2(st, at) + q2(st, at)−Q(st, at) = min(Q(st, at)− q2(st, at), 0)− (Q(st, at)− q2(st, at)) = min(q1(st, at)− q2(st, at)− β(q2(st, at)− E(q2(st, at))), 0)\n+ q1(st, at)− q2(st, at)− β(q2(st, at)− E(q2(st, at))) (15)\nSuppose there exists very small δ1 and δ2, such that |q1(st, at)− q2(st, at)| ≤ δ1 and |q2(st, at)− E(q2(st, at))| ≤ δ2, then we have\nmin(Q(st, at), q2(st, at))−Q(st, at) ≤2(|q1(st, at)− q2(st, at)|+ β|q2(st, at)− E(q2(st, at))|) =2(δ1 + βδ2) < 4δ (16)\nwhere δ = max(δ1, δ2). Note that ∃δ1, |q1(st, at) − q2(st, at)| ≤ δ1 holds because ∆t(q1, q2) = |q1(st, at) − q2(st, at)| converges to zero. According Eq. 
9, both $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ are updated as follows:\n$q_{t+1}(s_t, a_t) = q_t(s_t, a_t) + \alpha_t(s_t, a_t)(y_t - q_t(s_t, a_t))$ (17)\nThen $\Delta_{t+1}(q_1, q_2) = \Delta_t(q_1, q_2) - \alpha_t(s_t, a_t)\Delta_t(q_1, q_2) = (1 - \alpha_t(s_t, a_t))\Delta_t(q_1, q_2)$ converges to 0, since the learning rate satisfies $0 < \alpha_t(s_t, a_t) < 1$." }, { "heading": "3.2 CORRELATION COEFFICIENT", "text": "The purpose of introducing $\mathrm{corr}(q_1(s_t, a_t), q_2(s_t, a_t))$ in Eq. 9 is to reduce the correlation between the two value approximators $q_1$ and $q_2$. In other words, we want $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ to be as independent as possible. In this paper, we define $\mathrm{corr}(q_1, q_2)$ as:\n$\mathrm{corr}(q_1(s_t, a_t), q_2(s_t, a_t)) = \mathrm{cosine}(f_{q_1}(s_t, a_t), f_{q_2}(s_t, a_t))$\nwhere $\mathrm{cosine}(a, b)$ is the cosine similarity between two vectors $a$ and $b$, and $f_q(s_t, a_t)$ is the vector representation of the last hidden layer of the value approximator $q(s_t, a_t)$. In other words, we constrain the hidden representations learned by $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ in the loss function, in an attempt to make them independent.\nAccording to control variates, the optimal $\beta$ in Eq. 8 is:\n$\beta = \frac{\mathrm{cov}(q_1(s_t, a_t), q_2(s_t, a_t))}{\mathrm{var}(q_1(s_t, a_t))}$\nwhere cov denotes covariance and var denotes variance. Since it is difficult to estimate $\beta$ in a continuous action space, we use an approximation. In addition, to reduce the number of hyperparameters, we set $\beta = \mathrm{corr}(q_1(s_t, a_t), q_2(s_t, a_t))$ in Eq. 8 to approximate the correlation coefficient of $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$, since the covariance is hard to obtain in the continuous action space." }, { "heading": "3.3 ALGORITHM", "text": "We summarize our approach in Algorithm 1. Similar to Double Q-learning, we use target networks with a slow update rate to maintain stability under temporal difference learning. Our contributions are twofold: (1) we introduce a loss that minimizes the correlation between the two critics, which makes $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ as independent as possible and thereby effectively reduces the overestimation risk; (2) we add control variates to reduce variance in the learning procedure. A short code sketch of the resulting critic loss is given after Algorithm 1 below." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section, we evaluate our method on the suite of MuJoCo continuous control tasks. We downloaded the OpenAI Gym environment, and used the MuJoCo v2 version of all tasks to test our method.\nAlgorithm 1 Decorrelated Double Q-learning\nInitialize a pair of critic networks $q_1(s, a; \theta^{q_1})$, $q_2(s, a; \theta^{q_2})$ and actor $\pi(s; \theta^\pi)$ with weights $\theta^Q = \{\theta^{q_1}, \theta^{q_2}\}$ and $\theta^\pi$\nInitialize the corresponding target networks for both critics and the actor, $\theta^{Q'}$ and $\theta^{\pi'}$\nInitialize the total number of episodes N, the batch size, and the replay buffer R\nInitialize the coefficient $\lambda$ in Eq. 9\nInitialize the update rate $\tau$ for the target networks\nfor episode = 1 to N do\nReceive initial observation state $s_0$ from the environment\nfor t = 0 to T do\nSelect action $a_t = \pi(s_t; \theta^\pi) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma)$\nExecute action $a_t$, receive reward $r_t$ and done flag, and observe the new state $s_{t+1}$\nPush the tuple $(s_t, a_t, r_t, done, s_{t+1})$ into R\n// sample from the replay buffer\nSample a batch $D = (s_t, a_t, r_t, done, s_{t+1})$ from R\n$a_{t+1} = \pi(s_{t+1}; \theta^\pi) + \epsilon$ with clipping, $\epsilon \sim \mathcal{N}(0, \tilde{\sigma})$\nCompute $Q(s_t, a_t)$ with the target critic networks according to Eq. 8\nCompute the target value $y_t$ via Eq. 11\nUpdate the critics $q_1$ and $q_2$ by minimizing $L(\theta^Q)$ in Eq. 9\nUpdate the actor $\pi(s; \theta^\pi)$ by maximizing the $Q(s_t, a_t)$ value in Eq. 8\nend for\nUpdate the target critics $\theta^{Q'} = (1 - \tau)\theta^{Q'} + \tau\theta^Q$\nUpdate the target actor $\theta^{\pi'} = (1 - \tau)\theta^{\pi'} + \tau\theta^\pi$\nend for\nReturn parameters $\theta = \{\theta^Q, \theta^\pi\}$.
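The decorrelation term above hinges on comparing the two critics' last-hidden-layer features. Below is a minimal sketch, assuming PyTorch, of the critic loss in Eq. 9 under one reading of that equation (squaring the batch-averaged similarity); the hypothetical features method exposing each critic's penultimate activations is our own device, not part of the paper's code.

import torch
import torch.nn.functional as F

def d2q_critic_loss(critic1, critic2, s, a, y, lam=2.0):
    q1, f1 = critic1(s, a), critic1.features(s, a)       # value and last-hidden-layer features
    q2, f2 = critic2(s, a), critic2.features(s, a)
    # corr(q1, q2) approximated by the cosine similarity of the penultimate features
    corr = F.cosine_similarity(f1, f2, dim=-1).mean()
    # Eq. 9: two TD terms plus the squared correlation penalty weighted by lambda
    return F.mse_loss(q1, y) + F.mse_loss(q2, y) + lam * corr ** 2

The same cosine similarity can also be reused as the beta of Eq. 8, matching the approximation described in Section 3.2.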
We compared our approach against the state of the art off-policy continuous control algorithms, including DDPG, SAC and TD3. Since SAC requires the well-tuned hyperparameters to get the maximum reward across different tasks, we used the existed results from its training logs published by its authors. To obtain consistent results, we use the author’s implementation for TD3 and DDPG. In practice, while we minimize the loss in Eq. 9, we constrain β ∈ (0, 1). In addition, we add Gaussian noise to action selected by the target policy in Eq. 11. Specifically, the target policy adds noise as at+1 = π(st+1; θπ) + , where = clip(N (0, σ),−c, c) with c = 0.5. Without other specification, we use the same parameters below for all environments. The deep architecture for both actor and critic uses the same networks as TD3 Fujimoto et al. (2018), with hidden layers [400, 300, 300]. Note that the actor adds the noise N (0, 0.1) to its action space to enhance exploration and the critic networks have two Q-functions q1(s, a) and q2(s, a). The minibatch size is 100, and both network parameters are updated with Adam using the learning rate 10−3. In addition, we also use target networks including the pair of critics and a single actor to improve the performance as in DDPG and TD3. The target policy is smoothed by adding Gaussian noise N (0, 0.2) as in TD3, and both target networks are updated with τ = 0.005. We set the balance weight λ = 2 for all tasks except Walker2d which we set λ = 10. In addition, the off-policy algorithm uses the replay buffer R with size 106 for all experiments.\nWe run each task for 1 million time steps and evaluate it every 5000 time steps with no exploration noise. We repeat each task 5 times with random seeds and get its mean and standard deviation respectively. And we report our evaluation results by averaging the returns with window size 10. The evaluation curves are shown in Figures 1, 2 and 3. Our D2Q consistently achieves much better performance than TD3 on most continuous control tasks, including InvertedDoublePendulum, Walker2d, Ant, Halfcheetah and Hopper environments. Other methods such as TD3 perform well on one task Reacher, but perform poorly on other tasks compared to our algorithm.\nWe also evaluated our approach on high dimensional continuous action space task. The Humanoidv2 has 376 dimensional state space and 17 dimensional action space. In the task, we set the learning rate on Humanoid to be 3 × 10−4, and compared to DDPG and TD3. The result in Figure 1(b) demonstrates that our performance on this task is on a par with TD3.\nThe quantitative results over 5 trials are presented in Table 1. Compared to SAC Haarnoja et al. (2018), our approach shows better performance with lower variance given the same size of training samples. It demonstrates that our approach can yield competitive results, compared to TD3 and DDPG. Specifically, our D2Q method outperforms all other algorithms with much low variance on Ant, HalfCheetah, InvertedDoublePendulum and Walker2d. In the Hopper task, our method achieve maximum reward competitive with the best methods such as TD3, with comparable variance." }, { "heading": "5 RELATED WORK", "text": "Q-learning can suffer overestimation bias because it uses the maximum to estimate the maximum expected value. To address the overestimation issue Thrun & Schwartz (1993) in Q-learning, many approaches have been proposed to avoid the maximization operator of a noisy value estimate. Delayed Q-learning Strehl et al. 
(2006) tries to find -optimal policy, which determines how frequent to update state-action function. However, it can suffer from overestimation bias, although it guarantees to converge in polynomial time. Double Q-learning van Hasselt (2010) introduces two indepen-\ndently trained critics to mitigate the overestimation effect. Averaged-DQN Anschel et al. (2017) takes the average of previously learned Q-values estimates, which results in a more stable training procedure, as well as reduces approximation error variance in the target values. A clipped Double Qlearning called TD3 Fujimoto et al. (2018) extends the deterministic policy gradient Silver & Lever (2014); Lillicrap et al. (2015) to address overestimation bias. In particular, TD3 uses the minimum of two independent critics to approximate the value function suffering from overestimation. Soft actor critic Haarnoja et al. (2018) takes a similar approach as TD3, but with better exploration with maximum entropy method. Maxmin Q-learning Lan et al. (2020) extends Double Q-learning and TD3 to multiple critics to handle overestimation bias and variance.\nAnother side effect of consistent overestimation Thrun & Schwartz (1993) in Q-learning is that the accumulated error of temporal difference Sutton & Barto (1998) can cause high variance. To reduce the variance, there are two popular approaches: baseline and actor-critic methods Witten (1977); Konda & Tsitsiklis (1999). In policy gradient, we can minus baseline in Q-value function to reduce variance without bias. Further, the advantage actor-critic (A2C) Mnih et al. (2016) introduces the average value to each state, and leverages the difference between value function and the average to update the policy parameters. Schulman et al proposed the generalized advantage value estimation Schulman et al. (2016), which considered the whole episode with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ) to substantially reduce the variance of policy gradient estimates at the cost of some bias.\nFrom another point of view, baseline and actor-critic methods can be categories into control variate methods Greensmith et al. (2001). Greensmith et al. analyze the two additive control variate methods theoretically including baseline and actor-critic method to reduce the variance of performance gradient estimates in reinforcement learning problems. Interpolated policy gradient (IPG) Gu et al. (2017) based on control variate methods merges on- and off-policy updates to reduce variance for deep reinforcement learning. Motivated by the Stein’s identity, Liu et al. introduce more flexible and general action-dependent baseline functions Liu et al. (2018) by extending the previous control variate methods used in REINFORCE and advantage actor-critic. In this paper, we present a novel variant of Double Q-learning to constrain possible overestimation. We limit the correlation between the pair of q-value functions, and also introduce the control variates to reduce variance and improve performance." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose the Decorrelated Double Q-learning approach for off-policy value-based reinforcement learning. We use a pair of critics for value estimate, but we introduce a regularization term into the loss function to decorrelate these two approixmators. While minimizing the loss function, it constrains the two q-value functions to be as independent as possible. 
In addition, since overestimation derives from the maximum operator applied to positive noise, we leverage control variates to reduce variance and stabilize the learning procedure. The experimental results on a suite of challenging continuous control tasks demonstrate that our approach yields performance on par with or better than competitive baselines. Although we leverage control variates in our q-value function, we approximate the correlation coefficient with a simple strategy based on the similarity of the two q-functions. In future work, we will consider a better estimate of the correlation coefficient in the control variate method." }, { "heading": "A APPENDIX", "text": "In this appendix, we add experiments on how our model performs as λ varies. We set λ = [1, 2, 5, 10], run 1 million steps for each setting, and evaluate performance every 5000 steps, while keeping all other parameters the same." } ]
2020
DECORRELATED DOUBLE Q-LEARNING
SP:73f0f92f476990989fa8339f789a77fadb5c1e26
[ "This work empirically studies the relationship between robustness and class selectivity, a measure of neuron variability between classes. Robustness to both adversarial (\"worst-case\") perturbations and corruptions (\"average-case\") are considered. This work builds off the recent work of Leavitt and Morcos (2020) (currently in review at ICLR 2021) who claim empirical evidence that class selectivity may be harmful for generalization. The experiments in this paper examine the robustness (in both senses) of networks explicitly regularized for class selectivity. The main empirical claims are that (1) class sensitivity is negatively correlated with robustness to corruptions (2) class sensitivity is positively correlated with robustness to adversarial perturbations." ]
Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity—the variability of a unit’s responses across data classes or dimensions—is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair generalization, we sought to investigate whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20). Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e. white box adversarial) perturbations, suggesting that while decreasing class selectivity is helpful for average-case perturbations, it is harmful for worst-case perturbations. To explain this difference, we studied the dimensionality of the networks’ representations: we found that the dimensionality of early-layer representations is inversely proportional to a network’s class selectivity, and that adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient was more variable across samples and units in high-selectivity networks compared to low-selectivity networks. These results lead to the conclusion that units participate more consistently in low-selectivity regimes compared to high-selectivity regimes, effectively creating a larger attack surface and hence vulnerability to worst-case perturbations.
[]
[ { "authors": [ "Rana Ali Amjad", "Kairen Liu", "Bernhard C. Geiger" ], "title": "Understanding Individual Neuron Importance Using Information Theory. April 2018", "venue": "URL https://arxiv.org/abs/1804.06679v3", "year": 2018 }, { "authors": [ "Alessio Ansuini", "Alessandro Laio", "Jakob H Macke", "Davide Zoccolan" ], "title": "Intrinsic dimension of data representations in deep neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emilio Balda", "Niklas Koep", "Arash Behboodi", "Rudolf Mathar" ], "title": "Adversarial Risk Bounds through Sparsity based Compression", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Agata Lapedriza", "Bolei Zhou", "Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences", "September" ], "title": "ISSN 0027-8424, 1091-6490", "venue": "doi:", "year": 2020 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec ’17,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards Evaluating the Robustness of Neural Networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Fahim Dalvi", "Nadir Durrani", "Hassan Sajjad", "Yonatan Belinkov", "Anthony Bau", "James Glass" ], "title": "What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Fahim Dalvi", "Avery Nortonsmith", "Anthony Bau", "Yonatan Belinkov", "Hassan Sajjad", "Nadir Durrani", "James Glass" ], "title": "NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 1985 }, { "authors": [ "Kedar Dhamdhere", "Mukund Sundararajan", "Qiqi Yan" ], "title": "How Important is a Neuron", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Guneet S. Dhillon", "Kamyar Azizzadenesheli", "Zachary C. Lipton", "Jeremy D. Bernstein", "Jean Kossaifi", "Aran Khanna", "Animashree Anandkumar" ], "title": "Stochastic Activation Pruning for Robust Adversarial Defense", "venue": "URL https://openreview.net/forum?id=H1uR4GZRZ", "year": 2018 }, { "authors": [ "Jonathan Donnelly", "Adam Roegiest" ], "title": "On Interpretability and Feature Representations: An Analysis of the Sentiment Neuron", "venue": "Advances in Information Retrieval, Lecture Notes in Computer Science,", "year": 2019 }, { "authors": [ "Harris Drucker", "Yann Le Cun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron C. 
Courville", "Pascal Vincent" ], "title": "Visualizing Higher-Layer Features of a Deep Network", "venue": null, "year": 2009 }, { "authors": [ "Brian Everitt" ], "title": "The Cambridge Dictionary of Statistics, volume 106", "venue": null, "year": 2002 }, { "authors": [ "Elena Facco", "Maria d’Errico", "Alex Rodriguez", "Alessandro Laio" ], "title": "Estimating the intrinsic dimension of datasets by a minimal neighborhood information", "venue": "Scientific Reports,", "year": 2017 }, { "authors": [ "Nicolas Ford", "Justin Gilmer", "Nicholas Carlini", "Ekin Cubuk" ], "title": "Adversarial Examples Are a Natural Consequence of Test Error in Noise", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. Wichmann", "Wieland Brendel" ], "title": "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "URL https://openreview.net/ forum?id=Bygh9j09KX", "year": 2018 }, { "authors": [ "Justin Gilmer", "Ryan P. Adams", "Ian Goodfellow", "David Andersen", "George E. Dahl" ], "title": "Motivating the Rules of the Game for Adversarial Example Research", "venue": "URL https://arxiv.org/ abs/1807.06732v2", "year": 2018 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "URL http://arxiv.org/abs/ 1412.6572", "year": 2015 }, { "authors": [ "Yiwen Guo", "Chao Zhang", "Changshui Zhang", "Yurong Chen" ], "title": "Sparse DNNs with Improved Adversarial Robustness", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas G. Dietterich" ], "title": "Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations. arXiv:1807.01697 [cs, stat], April 2019", "venue": "URL http:// arxiv.org/abs/1807.01697", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D. Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. arXiv:1912.02781 [cs, stat", "venue": "February 2020a. URL http://arxiv.org/abs/1912.02781", "year": 1912 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural Adversarial Examples. arXiv:1907.07174 [cs, stat], January 2020b", "venue": "URL http://arxiv.org/abs/ 1907.07174", "year": 1907 }, { "authors": [ "Judy Hoffman", "Daniel A. Roberts", "Sho Yaida" ], "title": "Robust Learning with Jacobian Regularization. 
arXiv:1908.02729 [cs, stat], August 2019", "venue": "URL http://arxiv.org/abs/1908.02729", "year": 1908 }, { "authors": [ "Ruitong Huang", "Bing Xu", "Dale Schuurmans", "Csaba Szepesvari" ], "title": "Learning with a Strong Adversary", "venue": "[cs],", "year": 2016 }, { "authors": [ "E R Kandel", "J H Schwartz", "Jessica Chao" ], "title": "Principles of neural science", "venue": null, "year": 2000 }, { "authors": [ "Andrej Karpathy", "Justin Johnson", "Li Fei-Fei" ], "title": "Visualizing and Understanding Recurrent Networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "URL https://openreview.net/forum?id=S1OufnIlx", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial Machine Learning at Scale", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Peter Langeberg", "Emilio Rafael Balda", "Arash Behboodi", "Rudolf Mathar" ], "title": "On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks. arXiv:1901.10371 [cs, stat], January 2019", "venue": "URL http://arxiv.org/abs/1901.10371", "year": 1901 }, { "authors": [ "Matthew L. Leavitt", "Ari Morcos" ], "title": "Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs. arXiv:2003.01262 [cs, q-bio, stat], March 2020", "venue": "URL http: //arxiv.org/abs/2003.01262", "year": 2003 }, { "authors": [ "Elizaveta Levina", "Peter Bickel" ], "title": "Maximum Likelihood Estimation of Intrinsic Dimension", "venue": "Advances in Neural Information Processing Systems,", "year": 2005 }, { "authors": [ "Peter E. Lillian", "Richard Meyes", "Tobias Meisen" ], "title": "Ablation of a Robot’s Brain: Neural Networks Under a Knife", "venue": "URL https://arxiv.org/abs/1812.05687v2", "year": 2018 }, { "authors": [ "Lu Lu", "Yeonjong Shin", "Yanhui Su", "George Em Karniadakis" ], "title": "Dying ReLU and Initialization: Theory and Numerical Examples. arXiv:1903.06733 [cs, math, stat], November 2019", "venue": "URL http://arxiv.org/abs/1903.06733", "year": 1903 }, { "authors": [ "Pei-Hsuan Lu", "Pin-Yu Chen", "Chia-Mu Yu" ], "title": "On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples", "venue": "URL https: //openreview.net/forum?id=HytESwywf", "year": 2018 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M. Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E. Houle", "James Bailey" ], "title": "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality", "venue": "URL https://openreview.net/forum?id= B1gJ1L2aW&noteId=B1gJ1L2aW", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards Deep Learning Models Resistant to Adversarial Attacks. February 2018", "venue": "URL https: //openreview.net/forum?id=rJzIBfZAb", "year": 2018 }, { "authors": [ "Richard Meyes", "Melanie Lu", "Constantin Waubert de Puiseau", "Tobias Meisen" ], "title": "Ablation Studies in Artificial Neural Networks. 
arXiv:1901.08644 [cs, q-bio], February 2019", "venue": "URL http:// arxiv.org/abs/1901.08644", "year": 1901 }, { "authors": [ "Ari S. Morcos", "David G.T. Barrett", "Neil C. Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Seil Na", "Yo Joong Choe", "Dong-Hyun Lee", "Gunhee Kim" ], "title": "Discovery of Natural Language Concepts in Individual Units of CNNs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Aran Nayebi", "Surya Ganguli" ], "title": "Biologically inspired protection of deep networks from adversarial attacks. arXiv:1703.09202 [cs, q-bio, stat], March 2017", "venue": "URL http://arxiv.org/abs/ 1703.09202", "year": 2017 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A. Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Sensitivity and Generalization in Neural Networks: an Empirical Study", "venue": "URL http://arxiv.org/abs/1802.08760", "year": 2018 }, { "authors": [ "Chris Olah", "Alexander Mordvintsev", "Ludwig Schubert" ], "title": "Feature Visualization. Distill, 2(11):e7, November 2017", "venue": "ISSN 2476-0757. doi: 10.23915/distill.00007. URL https://distill.pub/", "year": 2017 }, { "authors": [ "Alec Radford", "Rafal Jozefowicz", "Ilya Sutskever" ], "title": "Learning to Generate Reviews and Discovering Sentiment", "venue": "[cs],", "year": 2017 }, { "authors": [ "Ivet Rafegas", "Maria Vanrell", "Luis A. Alexandre", "Guillem Arias" ], "title": "Understanding trained CNNs by indexing neuron selectivity", "venue": "Pattern Recognition Letters, page S0167865519302909,", "year": 2019 }, { "authors": [ "Salah Rifai", "Pascal Vincent", "Xavier Muller", "Xavier Glorot", "Yoshua Bengio" ], "title": "Contractive autoencoders: explicit invariance during feature extraction", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Amartya Sanyal", "Varun Kanade", "Philip H.S. Torr", "Puneet K. Dokania" ], "title": "Robustness via Deep LowRank Representations", "venue": "URL http://arxiv.org/ abs/1804.07090", "year": 2020 }, { "authors": [ "Alexandru Constantin Serban", "Erik Poll", "Joost Visser" ], "title": "Adversarial Examples - A Complete Characterisation of the Phenomenon", "venue": "[cs],", "year": 2019 }, { "authors": [ "Charles S. Sherrington" ], "title": "The integrative action of the nervous system. The integrative action of the nervous system", "venue": null, "year": 1906 }, { "authors": [ "Jure Sokolic", "Raja Giryes", "Guillermo Sapiro", "Miguel R.D. 
Rodrigues" ], "title": "Robust Large Margin Deep Neural Networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "URL https: //arxiv.org/abs/1312.6199v4", "year": 2013 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato" ], "title": "On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Igor Vasiljevic", "Ayan Chakrabarti", "Gregory Shakhnarovich" ], "title": "Examining the Impact of Blur on Recognition by Convolutional Networks. November 2016", "venue": "URL https://arxiv.org/abs/ 1611.05760v2", "year": 2016 }, { "authors": [ "tero", "Charles R. Harris", "Anne M. Archibald", "Antônio H. Ribeiro", "Fabian Pedregosa", "Paul van" ], "title": "Mulbregt, and SciPy 1 0 Contributors. SciPy 1.0–Fundamental Algorithms for Scientific Computing in Python. arXiv:1907.10121 [physics], July 2019", "venue": "URL http://arxiv.org/abs/ 1907.10121", "year": 1907 }, { "authors": [ "David Warde-Farley", "Ian Goodfellow" ], "title": "Adversarial Perturbations of Deep Neural Networks. In Perturbations, Optimization, and Statistics, pages 311–342", "venue": "MITP, 2017", "year": 2017 }, { "authors": [ "Shaokai Ye", "Siyue Wang", "Xiao Wang", "Bo Yuan", "Wujie Wen", "Xue Lin" ], "title": "Defending DNN Adversarial Attacks with Pruning and Logits Augmentation", "venue": "URL https: //openreview.net/forum?id=S1qI2FJDM", "year": 2018 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jon Shlens", "Ekin Dogus Cubuk", "Justin Gilmer" ], "title": "A Fourier Perspective on Model Robustness in Computer Vision", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Matthew D. Zeiler", "Rob Fergus" ], "title": "Visualizing and Understanding Convolutional Networks", "venue": "Computer Vision – ECCV", "year": 2014 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the Robustness of Deep Neural Networks via Stability Training", "venue": "URL https://arxiv.org/abs/ 1604.04326v1", "year": 2016 }, { "authors": [ "B. Zhou", "D. Bau", "A. Oliva", "A. Torralba" ], "title": "Interpreting Deep Visual Representations via Network Dissection", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Object Detectors Emerge in Deep Scene CNNs", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Bolei Zhou", "Yiyou Sun", "David Bau", "Antonio Torralba" ], "title": "Revisiting the Importance of Individual Units in CNNs via Ablation", "venue": "[cs],", "year": 2018 }, { "authors": [ "He" ], "title": "The maxpool layer after the first batchnorm layer", "venue": null, "year": 2016 }, { "authors": [ "Virtanen" ], "title": "2019), and visualized using Seaborn (Waskom et al., 2017)", "venue": null, "year": 2017 }, { "authors": [ "gradient-masking (Athalye" ], "title": "2018) by generating worst-case perturbations using each of the replicate models trained with no selectivity regularization (α = 0), then testing selectivity-regularized models on these samples. 
We found that high-selectivity models were less vulnerable to the α = 0 samples than low-selectivity models for high-intensity perturbations (Appendix A14", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network’s decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e. variability in a neuron’s activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020).\nIn parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier’s performance on low-quality or naturalistically-perturbed inputs—and thus is an \"average-case\" measure—and adversarial robustness,\nwhich measures a classifier’s performance on small, additive perturbations that are tailored to the classifier—and thus is a \"worst-case\" measure.1\nResearch on robustness has been predominantly focused on worst-case perturbations, which is affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). But less is known about the mechanisms underlying average-case perturbation robustness and its common factors with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019), thus it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can be also be thought of a measure of the sparsity with which semantic information is represented.2 And because class selectivity regularization provides a method for controlling selectivity, and class selectivity regularization has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be utilized to improve perturbation robustness and elucidate the factors underlying it.\nIn this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently-developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs’ robustness to worst-case and average-case perturbations. 
Our findings are as follows:\n• Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions. • In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks. • The variability of the input-unit gradient across samples and units is proportional to a network’s overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness.\n• The dimensionality of activation changes caused by corruption markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.\nOur results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously. They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 PERTURBATION ROBUSTNESS", "text": "The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network’s output while\n1We use the terms \"worst-case perturbation\" and \"average-case perturbation\" instead of \"adversarial attack\" and \"corruption\", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly-unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to \"perturbation\" and \"corruption\", we use the term \"perturbation\" more generally to refer to any change to an input.\n2Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then the individual units do not contain much class information, thus the class information must be distributed across units; the semantic representation in this case is not sparse, it is distributed.\nattempting to minimize or maintain below some threshold the magnitude of the change to the input (Serban et al., 2019; Warde-Farley and Goodfellow, 2017) . Because white-box adversarial attacks are optimized to best confuse a given network, robustness to adversarial attacks are a \"worst-case\" measure of robustness. Two factors that have been proposed to account for DNN robustness to worst-case perturbations are particularly relevant to the present study: sparsity and dimensionality.\nMultiple studies have linked activation and weight sparsity with robustness to worst-case perturbations. 
Adversarial training improves worst-case robustness Goodfellow et al. (2015); Huang et al. (2016) and results in sparser weight matrices (Madry et al., 2018; Balda et al., 2020). Methods for increasing the sparsity of weight matrices (Ye et al., 2018; Guo et al., 2018) and activations (Dhillon et al., 2018) likewise improve worst-case robustness, indicating that the weight sparsity caused by worst-case perturbation training is not simply a side-effect.\nResearchers have also attempted to understand the nature of worst-case robustness from a perspective complementary to that of sparsity: dimensionality. Like sparsity, worst-case perturbation training reduces the rank of weight matrices and representations, and regularizing weight matrices and representations to be low-rank can improve worst-case perturbation robustness (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Taken together, these studies support the notion that networks with low-dimensional representations are more robust to worst-case perturbations.\nComparatively less research has been conducted to understand the factors underlying averagecase robustness. Certain techniques for improving worst-case perturbation robustness also help against average-case perturbations (Hendrycks and Dietterich, 2019; Geirhos et al., 2018; Ford et al., 2019). Examining the frequency domain has elucidated one mechanism: worst-case perturbations for \"baseline\" models tend to be in the high frequency domain, and improvements in averagecase robustness resulting from worst-case robustness training are at least partially ascribable to models becoming less reliant on high-frequency information (Yin et al., 2019; Tsuzuku and Sato, 2019; Geirhos et al., 2018). But it remains unknown whether other factors such as sparsity and dimensionality link these two forms of robustness." }, { "heading": "2.2 CLASS SELECTIVITY", "text": "One technique that has been of particular interest to researchers trying to better understand deep (and biological) neural networks is examining the selectivity of individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020; Sherrington, 1906; Kandel et al., 2000). Evidence regarding the importance of selectivity has mostly relied on single unit ablation, and has been equivocal (Radford et al., 2017; Morcos et al., 2018; Amjad et al., 2018; Zhou et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019a). However Leavitt and Morcos (2020) examined the role of single unit selectivity in network performance by regularizing for or against class selectivity in the loss function, which sidesteps the limitations of single unit ablation and correlative approaches and allowed them to investigate the causal effect of class selectivity. They found that reducing class selectivity has little negative impact on—and can even improve—test accuracy in CNNs trained on image recognition tasks, but that increasing class selectivity has significant negative effects on test accuracy. However, their study focused on examining the effects of class selectivity on test accuracy in unperturbed (clean) inputs. Thus it remains unknown how class selectivity affects robustness to perturbed inputs, and whether class selectivity can serve as or elucidate a link between worst-case and average-case robustness." 
}, { "heading": "3 APPROACH", "text": "A detailed description of our approach is provided in Appendix A.1.\nModels and training protocols Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) trained on CIFAR10 (Krizhevsky, 2009). We focus primarily on the results for ResNet18 trained on Tiny ImageNet in the main text for space, though results were qualitatively similar for ResNet50, and ResNet20 trained on CIFAR10. Experimental results were obtained with model parameters from the epoch that achieved the highest validation set accuracy over the training epochs, and 20 replicate\nmodels (ResNet18 and ResNet20) or 5 replicate models (Resnet50) with different random seeds were run for each hyperparameter set.\nClass selectivity index Following (Leavitt and Morcos, 2020). A unit’s class selectivity index is calculated as follows: At every ReLU, the activation in response to a single sample was averaged across all elements of the filter map (which we refer to as a \"unit\"). The class-conditional mean activation was then calculated across all samples in the clean test set, and the class selectivity index (SI) was calculated as follows:\nSI = µmax − µ−max µmax + µ−max\n(1)\nwhere µmax is the largest class-conditional mean activation and µ−max is the mean response to the remaining (i.e. non-µmax) classes. The selectivity index ranges from 0 to 1. A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responds to a single class would have a selectivity of 1.\nAs Morcos et al. (2018) note, the selectivity index is not a perfect measure of information content in single units. For example, a unit with a litte bit of information about many classes would have a low selectivity index. However, it identifies units that are class-selective similarly to prior studies (Zhou et al., 2018). Most importantly, it is differentiable with respect to the model parameters.\nClass selectivity regularization We used (Leavitt and Morcos, 2020)’s class selectivity regularizer to control the levels of class selectivity learned by units in a network during training. Class selectivity regularization is achieved by minimizing the following loss function during training:\nloss = − C∑ c yc· log(ŷc)− αµSI (2)\nThe left-hand term in the loss function is the standard classification cross-entropy, where c is the class index, C is the number of classes, yc is the true class label, and ŷc is the predicted class probability. The right-hand component of the loss function, −αµSI , is the class selectivity regularizer. The regularizer consists of two terms: the selectivity term,\nµSI = 1\nL L∑ l 1 U U∑ u SIu,l (3)\nwhere l is a convolutional layer, L is number of layers, u is a unit, U is the number of units in a given layer, and SIu is the class selectivity index of unit u. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The other term in the regularizer is α, the regularization scale, which determines whether class selectivity is promoted or discouraged. 
Negative values of α discourage class selectivity in individual units and positive values encourage it. The magnitude of α controls the contribution of the selectivity term to the overall loss. During training, the class selectivity index was computed for each minibatch. The final (logit) layer was not subject to selectivity regularization, nor was it included in our analyses, because by definition the logit layer must be class selective in a classification task.\nMeasuring average-case robustness To evaluate robustness to average-case perturbations, we tested our networks on CIFAR10C and Tiny ImageNetC, two benchmark datasets consisting of the CIFAR10 or Tiny ImageNet data, respectively, to which a set of naturalistic corruptions have been applied (Hendrycks and Dietterich, 2019, examples in Figure A1). We average across all corruption types and severities (see Appendix A.1.2 for details) when reporting corrupted test accuracy.\nMeasuring worst-case robustness We tested our models' worst-case (i.e. adversarial) robustness using two methods. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) is a simple attack that computes the gradient of the loss with respect to the input image, then scales the image's pixels (within some bound) in the direction that increases the loss. The second method, projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), is an iterated version of FGSM. We used a step size of 0.0001 and an l∞ norm perturbation budget (ε) of 16/255.\nComputing the stability of units and layers To quantify variation in networks' perturbability, we first computed the l2 norm of the input-unit gradient for each unit u in a network. We then computed the mean (µu) and standard deviation (σu) of the norm across samples for each unit. σu/µu yields the coefficient of variation (Everitt, 2002) for a unit (CVu), a measure of variation in perturbability for individual units. We also quantified the variation across units in a layer by computing the standard deviation of µu across units in a layer l, σ(µu) = σl, and dividing this by the corresponding mean across units, µ(µu) = µl, to yield the CV across units, σl/µl = CVl.\nFigure 1: Reducing class selectivity improves average-case robustness. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis; corruption severity 1 (least severe) is at the top, corruption severity 5 (most severe) at the bottom. (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). Results shown are for ResNet18 trained on Tiny ImageNet and tested on Tiny ImageNetC. Error bars = 95% confidence intervals of the mean. See Figure A6 for CIFAR10C results.
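To make Equations 1-3 concrete, the sketch below shows one way to compute the per-unit selectivity index and the selectivity-regularized loss for a single minibatch in PyTorch (the framework listed in Appendix A.1.3). It is an illustrative reconstruction rather than the authors' code: the function names, tensor shapes, and the small epsilon added to the denominator are our own assumptions.

```python
import torch

def class_selectivity_index(unit_activations, labels, num_classes, eps=1e-7):
    """Selectivity index (Eq. 1) for each unit in one layer.

    unit_activations: (batch, units) post-ReLU activations, already averaged
                      over the spatial dimensions of each filter map.
    labels:           (batch,) integer (long) class labels.
    Returns a (units,) tensor of selectivity indices in [0, 1].
    """
    batch, units = unit_activations.shape
    device = unit_activations.device
    # Class-conditional mean activation: (classes, units)
    sums = torch.zeros(num_classes, units, device=device)
    counts = torch.zeros(num_classes, 1, device=device)
    sums.index_add_(0, labels, unit_activations)
    counts.index_add_(0, labels, torch.ones(batch, 1, device=device))
    # Classes absent from the minibatch get a mean of zero here; with the
    # large batches used in the paper this is rare, but it is a simplification.
    class_means = sums / counts.clamp(min=1)

    mu_max = class_means.max(dim=0).values                       # (units,)
    # Mean response over the remaining (non-argmax) classes
    mu_not_max = (class_means.sum(dim=0) - mu_max) / (num_classes - 1)
    return (mu_max - mu_not_max) / (mu_max + mu_not_max + eps)


def selectivity_regularized_loss(logits, labels, layer_activations, num_classes, alpha):
    """Cross-entropy minus alpha times mean selectivity (Eqs. 2-3).

    layer_activations: list of (batch, units_l) tensors, one per conv layer,
                       excluding the final logit layer.
    """
    ce = torch.nn.functional.cross_entropy(logits, labels)
    # Mean within each layer first, then across layers (Eq. 3)
    per_layer = [class_selectivity_index(a, labels, num_classes).mean()
                 for a in layer_activations]
    mu_si = torch.stack(per_layer).mean()
    return ce - alpha * mu_si
```

In practice the per-layer activations would be collected with forward hooks on each post-ReLU feature map and spatially averaged before being passed to `selectivity_regularized_loss`; because the selectivity index is differentiable, the regularizer term backpropagates alongside the cross-entropy.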
}, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 AVERAGE-CASE ROBUSTNESS IS INVERSELY PROPORTIONAL TO CLASS SELECTIVITY", "text": "Certain kinds of sparsity—including reliance on single directions (Morcos et al., 2018), and the semantic sparsity measured by class selectivity (Leavitt and Morcos, 2020)—have been shown to impair network performance. We sought to extend this question to robustness: how does the sparsity of semantic representations affect robustness to average-case perturbations of the input data? We used a recently-introduced method (Leavitt and Morcos (2020); Approach 3) to modulate the amount of class selectivity learned by DNNs (Figure A2 demonstrates effects of selectivity regularization). We then examined how this affected performance on Tiny ImageNetC and CIFAR10C, two benchmark datasets for average-case corruptions (Approach 3; example images in Figure A1).\nChanging the level of class selectivity across neurons in a network could one of the following effects on corruption robustness: If concentrating semantic representations into fewer neurons (i.e. promoting semantic sparsity) provides fewer potent dimensions on which perturbed inputs can act, then increasing class selectivity should confer networks with robustness to average-case perturbations, while reducing class selectivity should render networks more vulnerable. Alternatively, if distributing semantic representations across more units (i.e. reducing sparsity) dilutes the changes induced by perturbed inputs, then reducing class selectivity should increase a network’s robustness to averagecase perturbations, while increasing class selectivity should reduce robustness.\nWe found that decreasing class selectivity leads to increased robustness to average-case perturbations for both ResNet18 tested on Tiny ImageNetC (Figure 1) and ResNet20 tested on CIFAR10C (Figure A6). In ResNet18, we found that mean test accuracy on corrupted inputs increases as class selectivity decreases (Figure 1), with test accuracy reaching a maximum at regularization scale α = −2.0 (mean test accuracy across corruptions and severities at α−2.0 =17), representing a 3.5 percentage point (pp) increase relative to no selectivity regularization (i.e. α0; test accuracy at α0 = 13.5). In contrast, regularizing to increase class selectivity has either no effect or a negative impact on corruption robustness. Corrupted test accuracy remains relatively stable until α = 1.0, after which point it declines. The results are qualitatively similar for ResNet50 tested on Tiny ImageNetC (Figure A9), and for ResNet20 tested on CIFAR10C (Figure A6), except the vulnerability to corruption caused by increasing selectivity is even more dramatic in ResNet20. We also found similar results when controlling for the difference in clean accuracy for models with different α (Appendix A.3).\nWe observed that regularizing to decrease class selectivity causes robustness to average-case perturbations. But it’s possible that the causality is unidirectional, leading to the question of whether the\nconverse is also true: does increasing robustness to average-case perturbations cause class selectivity to decrease? We investigated this question by training with AugMix, a technique known to improve worst-case robustness (Hendrycks et al., 2020a). We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Appendix A.4; Figure A11). 
AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity improve average-case perturbation robustness, but improving average-case perturbation-robustness also causes class selectivity to decrease.\nWe also found that the effect of class selectivity on perturbed robustness is consistent across corruption types. Regularizing against selectivity improves perturbation robustness in all 15 Tiny ImageNetC corruption types for ResNet18 (Figure A4) and 14 of 15 Tiny ImageNetC corruption types in ResNet50 (Figure A10), and 14 of 19 corruption types in CIFAR10C for ResNet20 (Figure A7). Together these results demonstrate that reduced class selectivity confers robustness to average-case perturbations, implying that distributing semantic representations across neurons—i.e. low sparsity—may dilute the changes induced by average-case perturbations." }, { "heading": "4.2 CLASS SELECTIVITY IMPARTS WORST-CASE PERTURBATION ROBUSTNESS", "text": "We showed that the sparsity of a network’s semantic representations, as measured with class selectivity, is causally related to a network’s robustness to average-case perturbations. But how does the sparsity of semantic representations affect worst-case robustness? We addressed this question by testing our class selectivity-regularized networks on inputs that had been perturbed using using one of two gradient-based methods (see Approach 3).\nIf distributing semantic representations across units provides more dimensions upon which a worstcase perturbation is potent, then worst-case perturbation robustness should be proportional to class selectivity. However, if increasing the sparsity of semantic representations creates more responsive individual neurons, then worst-case robustness should be inversely proportional to class selectivity.\nUnlike average-case perturbations, decreasing class selectivity decreases robustness to worst-case perturbations for ResNet18 (Figure 2) and ResNet50 (Figure A13) trained on Tiny ImageNet, and ResNet20 trained on CIFAR10 (Figures A12). For small perturbations (i.e. close to x=0), the effects of class selectivity regularization on test accuracy (class selectivity is inversely correlated with unperturbed test accuracy) appear to overwhelm the effects of perturbations. But as the magnitude of perturbation increases, a stark ordering emerges: test accuracy monotonically decreases as a function of class selectivity in ResNet18 and ResNet50 for both FGSM and PGD attacks (ResNet18: Figures 2a and 2b; ResNet50: Figures A13a and A13b). The ordering is also present for ResNet20, though less consistent for the two networks with the highest class selectivity (α = 0.7 and α = 1.0). However, increasing class selectivity is much more damaging to test accuracy in ResNet20 trained on CIFAR10 compared to ResNet18 trained on Tiny ImageNet (Leavitt and Morcos, 2020, Figure A2), so the the substantial performance deficits of extreme selectivity in ResNet20 likely mask the perturbation-robustness. 
This result demonstrates that networks with sparse semantic representations are less vulnerable to worst-case perturbation than networks with distributed semantic representations. We also verified that the worst-case robustness of high-selectivity networks is not fully explained by gradient-masking (Athalye et al., 2018, Appendix A.5).\nInterestingly, class selectivity regularization does not appear to affect robustness to \"natural\" adversarial examples (Appendix A.6), which are \"unmodified, real-world examples...selected to cause a\nmodel to make a mistake\" (Hendrycks et al., 2020b). Performance on ImageNet-A, a benchmark of natural adversarial examples (Hendrycks et al., 2020b), was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving both worst-case and average-case robustness, many of which also fail to yield significant robustness improvements against ImageNet-A (Hendrycks et al., 2020b).\nWe found that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse true? Does increasing robustness to worst-case perturbations also cause class selectivity to increase? We investigated this by training networks with a commonly-used technique to improve worst-case perturbation robustness, PGD training. We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Appendix A.7). This effect was present in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f), indicating that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional.\nNetworks whose outputs are more stable to small input perturbations are known to have improved generalization performance and worst-case perturbation robustness (Drucker and Le Cun, 1992; Novak et al., 2018; Sokolic et al., 2017; Rifai et al., 2011; Hoffman et al., 2019). To examine whether increasing class selectivity improves worst-case perturbation robustness by increasing network stability, we analyzed each network’s input-output Jacobian, which is proportional to its stability—a large-magnitude Jacobian means that a small change to the network’s input will cause a large change to its output. If class selectivity induces worst-case robustness by increasing network stability, then networks with higher class selectivity should have smaller Jacobians. But if increased class selectivity induces adversarial robustness through alternative mechanisms, then class selectivity should have no effect on the Jacobian. We found that the l2 norm of the input-output Jacobian is inversely proportional to class selectivity for ResNet18 (Figure 2c), ResNet50 (Figure A13c), and ResNet20 (Figure A12c), indicating that distributed semantic representations are more vulnerable to worst-case perturbation because they are less stable than sparse semantic representations." 
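The stability analysis above reduces to gradients of the network output with respect to its input. The sketch below illustrates, under our own naming, a single-step FGSM perturbation and a Monte-Carlo estimate of the per-sample Frobenius norm of the input-output Jacobian (in the spirit of Hoffman et al., 2019). It is a hedged reconstruction, not the authors' implementation, and the [0, 1] pixel-range assumption may not match the preprocessing actually used.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """One-step FGSM: move each pixel by +/- epsilon in the direction
    that increases the classification loss (Goodfellow et al., 2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + epsilon * grad.sign()).clamp(0, 1)   # assumes inputs in [0, 1]
    return x_adv.detach()


def jacobian_frobenius_norm(model, x, n_projections=1):
    """Monte-Carlo estimate of the per-sample Frobenius norm of the
    input-output Jacobian: for v ~ N(0, I), E||J^T v||^2 = ||J||_F^2
    (cf. Hoffman et al., 2019)."""
    x = x.clone().detach().requires_grad_(True)
    out = model(x)                                    # (batch, classes)
    sq_norms = torch.zeros(x.size(0), device=x.device)
    for _ in range(n_projections):
        v = torch.randn_like(out)
        grad, = torch.autograd.grad((out * v).sum(), x, retain_graph=True)
        sq_norms += grad.flatten(1).pow(2).sum(dim=1)
    return (sq_norms / n_projections).sqrt()          # (batch,) per-sample norm
```

Whether the "l2 norm" of the Jacobian reported in the text refers to the Frobenius or the spectral norm is not spelled out here; the sketch uses the Frobenius norm, which the random-projection estimator computes cheaply without materializing the full Jacobian.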
}, { "heading": "4.3 VARIABILITY OF THE INPUT-UNIT GRADIENT ACROSS SAMPLES AND UNITS", "text": "We observed that the input-output Jacobian is proportional to worst-case vulnerability and inversely proportional to class selectivity, but focusing on input-output stability potentially overlooks phenomena present in hidden layers and units. If class selectivity imparts worst-case robustness by making individual units less reliably perturbable—because each unit is highly tuned to a particular subset of images—then we should expect to see more variation across input-unit gradients for units in high-selectivity networks compared to units in low-selectivity networks. Alternatively, worst-case robustness in high-selectivity networks could be achieved by reducing both the magnitude and variation of units’ perturbability, in which case we would expect to observe lower variation across input-unit gradients for units in high-selectivity networks compared to low-selectivity networks.\nWe quantified variation in unit perturbability using the coefficient of variation of the input-unit gradient across samples for each unit (CVu; Approach 3). The CV is a measure of variability that normalizes the standard deviation of a quantity by the mean. A large CV indicates high variability, a small CV indicates low variability. To quantify variation in perturbability across units, we computed the CV across units in each layer, (CVl; Approach 3).\nWe found that units in high-selectivity networks exhibited greater variation in their perturbability than units in low-selectivity networks, both within individual units and across units in each layer. This effect was present in both ResNet18 trained on Tiny ImageNet (Figure 3) and ResNet20 trained on CIFAR10 (Figure A18), although the effect was less consistent for across-unit variability in later layers in ResNet18 (Figure 3b). Interestingly, class selectivity affects both the numerator (σ) and denominator (µ) of the CV calculation for both the CV across samples and CV across units (Appendix A.8). These results indicate that that high class selectivity imparts worst-case robustness by increasing the variation in perturbability within and across units, while the worst-case vulnerability associated with low class selectivity results from more consistently perturbable units. It is worth noting that the inverse can be stated with regards to average-case robustness: low variation in perturbability both within and across units in low-selectivity networks is associated with robustness to average-case perturbations, despite the these units (and networks) being more perturbable on average.\n4.4 DIMENSIONALITY IN EARLY LAYERS PREDICTS PERTURBATION VULNERABILITY\na)\n0 2 4 6 8 10 12 14 16 Layer\n10−1\n100\nIn pu\ntU\nni t G\nra di\nen t\nVa ria\nbi lit\ny, U\nni t (\nCV u)\nClass Selectivity Regularization Scale (α)\n-2.0 -1.0 2.01.00.70-0.4-0.7 -0.3 -0.2 -0.1 0.1 0.2 0.3 0.4\nb)\n0 2 4 6 8 10 12 14 16 Layer\n10−1\n100\nIn pu\ntU\nni t G\nra di\nen t\nVa ria\nbi lit\ny, L\nay er\n(C V l\n)\nality would be unaffected by class selectivity.\nWe found that the sparsity of a DNN’s semantic representations corresponds directly to the dimensionality of those representations. Dimensionality is inversely proportional to class selectivity in early ResNet18 layers (≤layer 9; Figure 4a), and across all of ResNet20 (Figure A21d). Networks with higher class selectivity tend to have lower dimensionality, and networks with lower class selectivity tend to have higher dimensionality. 
These results show that the sparsity of a network’s semantic representations is indeed reflected in those representations’ dimensionality.\nWe next examined the dimensionality of perturbation-induced changes in representations by subtracting the perturbed activation matrix from the clean activation matrix and computing the dimensionality of this \"difference matrix\" (see Appendix A.1.4). Intuitively, this metric quantifies the dimensionality of the change in the representation caused by perturbing the input. If it is small, the perturbation impacts fewer units, while if it is large, more units are impacted. Interestingly, we found that the dimensionality of the changes in activations induced by both average-case (Figure 4b) and worst-case perturbations (Figure 4c) was notably higher for networks with reduced class-selectivity, suggesting that decreasing class selectivity causes changes in input to become more distributed.\nWe found that the activation changes caused by average-case perturbations are higher-dimensional than the representations of the clean data in both ResNet18 (compare Figures 4b and 4a) and ResNet20 (Figures A21e and A21d), and that this effect is inversely proportional to class selectivity (Figures 4b and A21e); the increase in dimensionality from average-case perturbations was more pronounced in low-selectivity networks than in high-selectivity networks. These results indicate that class selectivity not only predicts the dimensionality of a representation, but also the change in dimensionality induced by an average-case perturbation.\nNotably, however, the increase in early-layer dimensionality was much larger for worst-case perturbations than average-case perturbations (Figure 4c; Figure A21f) . These results indicate that, while the changes in dimensionality induced by both naturalistic and adversarial perturbations are proportional\nto the dimensionality of the network’s representations, these changes do not consistently project onto coding-relevant dimensions of the representations. Indeed, the larger change in early-layer dimensionality caused by worst-case perturbations likely reflects targeted projection onto codingrelevant dimensions and provides intuition as to why low-selectivity networks are more susceptible to worst-case perturbations.\nHidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they’re embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can provide misleading estimates of hidden layer dimensionality. Thus we also quantified the intrinsic dimensionality (ID) of each layer’s representations (see Appendix A.1.4). Interestingly, the results were qualitatively similar to what we observed when examining linear dimensionality (Figure A22) in both ResNet18 trained on Tiny ImageNet (Figure A22a-A22c) and ResNet20 trained on CIFAR10 (Figure A22d-A22f). Thus both linear and non-linear measures of dimensionality imply that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness." }, { "heading": "5 DISCUSSION", "text": "Our results demonstrate that changes in the sparsity of semantic representations, as measured with class selectivity, induce a trade-off between robustness to average-case vs. 
worst-case perturbations: highly-distributed semantic representations confer robustness to average-case perturbations, but their increased dimensionality and consistent perturbability result in vulnerability to worst-case perturbations. In contrast, sparse semantic representations yield low-dimensional representations and inconsistently-perturbable units, imparting worst-case robustness. Furthermore, the dimensionality of the difference in early-layer activations between clean and perturbed samples is larger for worst-case perturbations than for average-case perturbations. More generally, our results link average-case and worst-case perturbation robustness through class selectivity and representational dimensionality.\nWe hesitate to generalize too broadly about our findings, as they are limited to CNNs trained on image classification tasks. It is possible that the results we report here are specific to our models and/or datasets, and also may not extend to other tasks. Scaling class selectivity regularization to datasets with large numbers of classes also remains an open problem (Leavitt and Morcos, 2020).\nOur findings could be utilized for practical ends and to clarify findings in prior work. Relevant to both of these issues is the task of adversarial example detection. There is conflicting evidence that intrinsic dimensionality can be used to characterize or detect adversarial (worst-case) samples (Ma et al., 2018; Lu et al., 2018). The finding that worst-case perturbations cause a marked increase in both intrinsic and linear dimensionality indicates that there may be merit in continuing to study these quantities for use in worst-case perturbation detection. And the observation that the causal relationship between class-selectivity and worst- and average-case robustness is bidirectional helps clarify the known benefits of sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017) on worst-case robustness. It furthermore raises the question of whether enforcing low-dimensional representations also causes class selectivity to increase.\nOur work may also hold practical relevance to developing robust models: class selectivity could be used as both a metric for measuring model robustness and a method for achieving robustness (via regularization). We hope future work will more comprehensively assess the utility of class selectivity as part of the deep learning toolkit for these purposes." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILED APPROACH", "text": "Unless otherwise noted: all experimental results were derived from the corrupted or adversarial test set with the parameters from the epoch that achieved the highest clean validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95% confidence intervals; selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses." }, { "heading": "A.1.1 MODELS", "text": "All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001. The maxpool layer after the first batchnorm layer in ResNet18 (see He et al. (2016)) was removed because of the smaller size of Tiny ImageNet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). 
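As an illustration of the configuration described in this subsection, the snippet below sets up the modified ResNet18 and the SGD optimizer; it assumes the torchvision model definition and uses the learning-rate schedule given in the next paragraph, so the exact classes and variable names are our assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=200)   # Tiny ImageNet has 200 classes
# Remove the stem's maxpool because Tiny ImageNet images are 64x64,
# not 256x256 as in standard ImageNet.
model.maxpool = nn.Identity()

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,                          # initial learning rate
    momentum=0.9,
    weight_decay=1e-4,
)
# Annealing schedule for ResNet18/ResNet50 as described in the text
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[35, 50, 65, 80], gamma=0.1
)
```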
ResNet18 and ResNet50 were trained for 90 epochs with a minibatch size of 4096 (ResNet18) or 1400 (ResNet50) samples and a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80.\nResNet20 (code modified from Idelbayev (2020)) was trained for 200 epochs using a minibatch size of 256 samples and a learning rate of 0.1, annealed by 0.1 at epochs 100 and 150." }, { "heading": "A.1.2 DATASETS", "text": "Tiny ImageNet (Fei-Fei et al., 2015) consists of 500 training images and 50 images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each seed. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny ImageNet.\nAll experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set. Selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses.\nCIFAR10C consists of a dataset in which 19 different naturalistic corruptions have been applied to the CIFAR10 test set at 5 different levels of severity. Tiny ImageNetC also has 5 levels of corruption severity, but consists of 15 corruptions.\nWe would like to note that Tiny ImageNetC does not use the Tiny ImageNet test data. While the two datasets were created using the same data generation procedure—cropping and scaling images from the same 200 ImageNet classes—they differ in the specific ImageNet images they use. It is possible that the images used to create Tiny ImageNetC are out-of-distribution with regards to the Tiny ImageNet training data, in which case our results from testing on Tiny ImageNetC actually underestimate the corruption robustness of our networks. The creators of Tiny ImageNetC kindly provided the clean (uncorrupted) Tiny ImageNetC data necessary for the dimensionality analysis, which relies on matched corrupted and clean data samples." }, { "heading": "A.1.3 SOFTWARE", "text": "Experiments were conducted using PyTorch (Paszke et al., 2019), analyzed using the SciPy ecosystem (Virtanen et al., 2019), and visualized using Seaborn (Waskom et al., 2017)." }, { "heading": "A.1.4 QUANTIFYING DIMENSIONALITY", "text": "We quantified the dimensionality of a layer's representations by applying PCA to the layer's activation matrix for the clean test data and counting the number of dimensions necessary to explain 95% of the variance, then dividing by the total number of dimensions (i.e. the fraction of total dimensionality; we also replicated our results using the fraction of total dimensionality necessary to explain 90% and 99% of the variance). The same procedure was applied to compute the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to applying PCA.\nFigure A1: Example naturalistic corruptions from the Tiny ImageNetC dataset. (a) Clean (no corruption). (b) Brightness. (c) Contrast. (d) Elastic transform. (e) Shot noise. All corruptions are shown at severity level 5/5. 
For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.\nHidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can fail to capture the \"intrinsic\" dimensionality of hidden layer representations. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations using the method of Facco et al. (2017). The method, based on that of Levina and Bickel (2005), estimates ID by computing the ratio between the distances to the second and first nearest neighbors of each data point. We used the implementation of Ansuini et al. (2019). Our procedure was otherwise identical to that used for the linear dimensionality: we computed the dimensionality across all test data for each layer, then divided by the number of units per layer. We then computed the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to computing ID. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.\nA.2 EFFECTS OF CLASS SELECTIVITY REGULARIZATION ON TEST ACCURACY\nFigure A2: Effects of class selectivity regularization on test accuracy. Replicated as in Leavitt and Morcos (2020). (a) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for ResNet18 trained on Tiny ImageNet. α denotes the sign and intensity of class selectivity regularization. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Each data point represents the mean class selectivity across all units in a single trained model. (b) Same as (a), but for ResNet20 trained on CIFAR10." }, { "heading": "A.3 ADDITIONAL RESULTS FOR AVERAGE-CASE PERTURBATION ROBUSTNESS", "text": "Because modifying class selectivity can affect performance on clean (unperturbed) inputs (Leavitt and Morcos (2020); Figure A2), it is possible that the effects we observe of class selectivity on perturbed test accuracy are not caused by changes in perturbation robustness per se, but simply by changes in baseline model accuracy. We controlled for this by normalizing each model's perturbed test accuracy by its clean (unperturbed) test accuracy. The results are generally consistent even after controlling for clean test accuracy, although increasing class selectivity does not cause the same deficits in normalized perturbed test accuracy as it does in non-normalized perturbed test accuracy in ResNet18 trained on Tiny ImageNet (Figure A3a). Interestingly, in ResNet20 trained on CIFAR10, normalizing perturbed test accuracy reveals a more dramatic improvement in perturbation robustness caused by reducing class selectivity (Figure A6c). The results for ResNet50 trained on Tiny ImageNet are entirely consistent between raw vs. normalized measures (Figure A9b vs. 
Figure A9c).\nFigure A3: Controlling for clean test accuracy, and effect of corruption severity across corruptions. (a) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with high class selectivity (large α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (b) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.\nFigure A4: Mean test accuracy across corruption intensities for each corruption type for ResNet18 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against all 15/15 corruption types.\nFigure A5: Trade-off between clean and perturbed test accuracy in ResNet18 tested on Tiny ImageNetC. Clean test accuracy (x-axis) vs. perturbed test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types. Error bars = 95% confidence intervals of the mean.\nFigure A6: Reducing class selectivity confers robustness to average-case perturbations in ResNet20 tested on CIFAR10C. 
(a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Figure A2b and Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with higher class selectivity (larger α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.\nFigure A7: Mean test accuracy across corruption intensities for each corruption type for ResNet20 tested on CIFAR10C. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/19 corruption types. Error bars = 95% confidence intervals of the mean.\nFigure A8: Trade-off between clean and corrupted test accuracy in ResNet20 tested on CIFAR10C. Clean test accuracy (x-axis) vs. corrupted test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types.\nFigure A9: Reducing class selectivity confers robustness to average-case perturbations in ResNet50 tested on Tiny ImageNetC. 
(a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20.\nFigure A10: Mean test accuracy across corruption intensities for each corruption type for ResNet50 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/15 corruption types. Error bars = 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20." }, { "heading": "A.4 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND AVERAGE-CASE", "text": "ROBUSTNESS IS BIDIRECTIONAL\nWe found that regularizing to decrease class selectivity causes robustness to average-case perturbations. But is the converse also true? Does increasing robustness to average-case perturbations also cause class selectivity to decrease? We investigated this question by training with AugMix, a technique known to improve worst-case robustness (Hendrycks et al., 2020a). Briefly, AugMix stochastically applies a diverse set of image augmentations and uses a Jensen-Shannon Divergence consistency loss. Our AugMix parameters were as follows: mixture width: 3; mixture depth: stochastic; augmentation probability: 1; augmentation severity: 2. We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). 
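To make the consistency objective concrete, below is a minimal PyTorch-style sketch of the Jensen-Shannon consistency term that AugMix adds on top of the usual classification loss. This is our own simplified illustration rather than the authors' implementation: the augmentation-chain mixing itself is omitted, and `model`, `x_clean`, `x_aug1`, `x_aug2`, `targets`, and the loss weight are placeholders assumed to be supplied by the surrounding training loop.

```python
import torch
import torch.nn.functional as F

def jensen_shannon_consistency(model, x_clean, x_aug1, x_aug2):
    """JS-divergence consistency across a clean image and two augmented views."""
    logits = [model(x) for x in (x_clean, x_aug1, x_aug2)]
    probs = [F.softmax(l, dim=1) for l in logits]
    # Mixture distribution M, clamped for numerical stability before taking the log.
    m = torch.clamp(sum(probs) / 3.0, 1e-7, 1.0).log()
    # JSD = mean of KL(p_i || M) over the three distributions.
    return sum(F.kl_div(m, p, reduction="batchmean") for p in probs) / 3.0

# Example use inside a training step (the weight 12.0 is illustrative, not the authors' value):
# loss = F.cross_entropy(model(x_clean), targets) \
#        + 12.0 * jensen_shannon_consistency(model, x_clean, x_aug1, x_aug2)
```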
These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity cause average-case perturbation robustness to increase, but increasing average-case perturbation-robustness also causes class selectivity to decrease." }, { "heading": "A.5 WORST-CASE PERTURBATION ROBUSTNESS", "text": "We also confirmed that the worst-case robustness of high-selectivity ResNet18 and ResNet20 networks was not simply due to gradient-masking (Athalye et al., 2018) by generating worst-case perturbations using each of the replicate models trained with no selectivity regularization (α = 0), then testing selectivity-regularized models on these samples. We found that high-selectivity models were less vulnerable to the α = 0 samples than low-selectivity models for high-intensity perturbations (Appendix A14), indicating that gradient-masking does not fully account for the worst-case robustness of high-selectivity models." }, { "heading": "A.6 CLASS SELECTIVITY REGULARIZATION DOES NOT AFFECT ROBUSTNESS TO NATURAL ADVERSARIAL EXAMPLES", "text": "We also examined whether class selectivity regularization affects robustness to \"natural\" adversarial examples, images that are \"natural, unmodified, real-world examples...selected to cause a fixed model to make a mistake\" (Hendrycks et al., 2020b). We tested robustness to natural adversarial examples using ImageNet-A, a dataset of natural adversarial examples that belong to ImageNet classes but consistently cause misclassification errors with high confidence (Hendrycks et al., 2020b). We adapted ImageNet-A to our models trained on Tiny ImageNet (ResNet18 and ResNet50) by only testing on the 74 image classes that overlap between ImageNet-A and Tiny ImageNet (yielding a total of 2957 samples), and downsampling the images to 64 x 64. Test accuracy was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving robustness, many of which also fail to yield significant robustness against ImageNet-A (Hendrycks et al., 2020b)." }, { "heading": "A.7 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND WORST-CASE", "text": "ROBUSTNESS IS BIDIRECTIONAL\nWe observed that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse also true? Does increasing robustness to worst-case perturbations cause class selectivity to increase? We investigated this question using PGD training, a common technique for improving worst-case robustness. PGD training applies the PGD method of sample perturbation (see Approach 3) to samples during training. We used the same parameters for PGD sample generation when training our models as when testing (Approach 3). The number of PGD iterations controls the intensity of the perturbation, and the degree of perturbation-robustness in the trained model (Madry et al., 2018). We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Figure A16). Interestingly, PGD training also appears to cause units to die (Lu et al., 2019), and the number of dead units is proportional to the intensity of PGD training (Figures A16b and A16e). 
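For reference, the PGD perturbation used in this kind of training can be sketched as below. This is a PyTorch-style illustration in the spirit of Madry et al. (2018), not the authors' code; `epsilon`, `alpha`, and `num_iters` are illustrative placeholders for whatever settings Approach 3 specifies.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, epsilon=8/255, alpha=2/255, num_iters=10):
    """Return an L-infinity PGD-perturbed copy of x (parameter values are illustrative)."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0).detach()
    for _ in range(num_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball around x and the valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()

# PGD training simply substitutes pgd_perturb(model, x, y) for x in each training step.
```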
Removing dead units, which have a class selectivity index of 0, from the calculation of mean class selectivity results in a clear, monotonic effect of PGD training intensity on class selectivity in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f). These results indicate that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional: increasing class selectivity not only causes increased worst-case perturbation robustness, but increasing worst-case perturbation-robustness also causes increased class selectivity.\nA.8 STABILITY TO INPUT PERTURBATIONS IN UNITS AND LAYERS\nA.9 REPRESENTATIONAL DIMENSIONALITY\nFigure A20: Dimensionality in early layers predicts worst-case vulnerability in ResNet18 trained on Tiny ImageNet. Identical to Figure 4, but dimensionality is computed as the number of principal components needed to explain 90% of variance in (a) - (c), and 99% of variance in (d) - (f). (a) Fraction of dimensionality (y-axis; see Appendix A.1.4) as a function of layer (x-axis). (b) Dimensionality of difference between clean and average-case perturbation activations (y-axis) as a function of layer (x-axis). (c) Dimensionality of difference between clean and worst-case perturbation activations (y-axis) as a function of layer (x-axis). (d) - (f), identical to (a) - (c), but for 99% explained variance threshold." } ]
2,020
null
SP:8fe8ad33a783b2f98816e57e88d20b67fed50e8d
[ "The authors investigate the token embedding space of a variety of contextual embedding models for natural language. Using techniques based on nearest neighbors, clustering, and PCA, they report a variety of results on local dimensionality / anisotropy / clustering / manifold structure in these embedding models which are of general interest to scientists and practitioners hoping to understand these models. These include findings of (local) isotropy in the embeddings when appropriately clustered and shifted, and an apparent manifold structure in the GPT models." ]
The geometric properties of contextual embedding spaces for deep language models such as BERT and ERNIE have attracted considerable attention in recent years. Investigations of these contextual embeddings reveal a strongly anisotropic space in which most vectors fall within a narrow cone, leading to high cosine similarities. It is surprising that these LMs are as successful as they are, given how similar most of their embedding vectors are to one another. In this paper, we argue that isotropy does exist in the space, from a different but more constructive perspective. We identify isolated clusters and low-dimensional manifolds in the contextual embedding space, and introduce tools to both qualitatively and quantitatively analyze them. We hope the study in this paper can provide insights towards a better understanding of deep language models.
[ { "affiliations": [], "name": "Xingyu Cai" }, { "affiliations": [], "name": "Jiaji Huang" }, { "affiliations": [], "name": "Yuchen Bian" }, { "affiliations": [], "name": "Kenneth Church" } ]
[ { "authors": [ "Laurent Amsaleg", "Oussama Chelly", "Teddy Furon", "Stéphane Girard", "Michael E Houle", "Ken-ichi Kawarabayashi", "Michael Nett" ], "title": "Estimating local intrinsic dimensionality", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Alessio Ansuini", "Alessandro Laio", "Jakob H Macke", "Davide Zoccolan" ], "title": "Intrinsic dimension of data representations in deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Martin Aumüller", "Matteo Ceccarello" ], "title": "The role of local intrinsic dimensionality in benchmarking nearest neighbor search", "venue": "In International Conference on Similarity Search and Applications,", "year": 2019 }, { "authors": [ "Alexis Conneau", "Guillaume Lample" ], "title": "Cross-lingual language model pretraining", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David L Davies", "Donald W Bouldin" ], "title": "A cluster separation measure", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1979 }, { "authors": [ "David Demeter", "Gregory Kimmel", "Doug Downey" ], "title": "Stolen probability: A structural weakness of neural language models", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Martin Ester", "Hans-Peter Kriegel", "Jörg Sander", "Xiaowei Xu" ], "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", "venue": "In Kdd,", "year": 1996 }, { "authors": [ "Kawin Ethayarajh" ], "title": "How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Jun Gao", "Di He", "Xu Tan", "Tao Qin", "Liwei Wang", "Tieyan Liu" ], "title": "Representation degeneration problem in training natural language generation models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "John Hewitt", "Christopher D Manning" ], "title": "A structural probe for finding syntax in word representations. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", "venue": null, "year": 2019 }, { "authors": [ "Michael E Houle" ], "title": "Dimensionality, discriminability, density and distance distributions", "venue": "IEEE 13th International Conference on Data Mining Workshops,", "year": 2013 }, { "authors": [ "Michael E Houle", "Hisashi Kashima", "Michael Nett" ], "title": "Generalized expansion dimension", "venue": "IEEE 12th International Conference on Data Mining Workshops,", "year": 2012 }, { "authors": [ "Jiaji Huang", "Xingyu Cai", "Kenneth Church" ], "title": "Improving bilingual lexicon induction for low frequency words", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Jeff Johnson", "Matthijs Douze", "Hervé Jégou" ], "title": "Billion-scale similarity search with gpus", "venue": "arXiv preprint arXiv:1702.08734,", "year": 2017 }, { "authors": [ "Tianlin Liu", "Lyle Ungar", "Joao Sedoc" ], "title": "Unsupervised post-processing of word vectors via conceptor negation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mitchell Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of english: The penn treebank", "venue": null, "year": 1993 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "David Mimno", "Laure Thompson" ], "title": "The strange geometry of skip-gram with negative sampling", "venue": "In Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Jiaqi Mu", "Pramod Viswanath" ], "title": "All-but-the-top: Simple and effective post-processing for word representations", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Steven T. 
Piantadosi" ], "title": "Zipf’s word frequency law in natural language: A critical review and future directions", "venue": "Psychonomic bulletin & review,", "year": 2014 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Emily Reif", "Ann Yuan", "Martin Wattenberg", "Fernanda B Viegas", "Andy Coenen", "Adam Pearce", "Been Kim" ], "title": "Visualizing and measuring the geometry of bert", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Peter J Rousseeuw" ], "title": "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis", "venue": "Journal of computational and applied mathematics,", "year": 1987 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Yu Sun", "Shuohuan Wang", "Yukun Li", "Shikun Feng", "Xuyi Chen", "Han Zhang", "Xin Tian", "Danxiang Zhu", "Hao Tian", "Hua Wu" ], "title": "Ernie: Enhanced representation through knowledge integration", "venue": null, "year": 1904 } ]
[ { "heading": "1 INTRODUCTION", "text": "The polysemous English word “bank” has two common senses: 1. the money sense, a place that people save or borrow money; 2. the river sense, a slope of earth that prevents the flooding. In modern usage, the two senses are very different from one another, though interestingly, both senses share similar etymologies (and both can be traced back to the same word in Proto-Germanic). In the static embedding, multiple instances of the same word (e.g. “bank”) will be represented using the same vector. On the contrary, the contextual embedding assigns different vectors to different instances of the same word, depending on the context. Historically, static embedding models like Word2vec (Mikolov et al., 2013b) and GloVe (Pennington et al., 2014), predated contextual embedding models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018) and ERNIE (Sun et al., 2019). Much of the literature on language modeling has moved to contextual embeddings recently, largely because of their superior performance on the downstreaming tasks." }, { "heading": "1.1 RELATED WORK", "text": "The static embeddings are often found to be easier to interpret. For example, the Word2Vec and GloVe papers discuss adding and subtracting vectors, such as: vec(king) - vec(man) + vec(women) = vec(queen). Inspired by this relationship, researchers started to explore geometric properties of static embedding spaces. For example, Mu & Viswanath (2018) proposed a very counter-intuitive method that removes the top principle components (the dominating directions in the transformed embedding space), which surprisingly improved the word representations. Rather than completely discarding the principle components, Liu et al. (2019) proposed to use a technique called Conceptor Negation, to softly suppress transformed dimensions with larger variances. Both approaches, simply removing certain principle components as well as Conceptor Negation, produce significant improvements over vanilla embeddings obtained by static language models. In Huang et al. (2020), the authors studied how to effectively transform static word embeddings from one language to another.\nUnfortunately, the strong illustrative representation like the king-queen example above, is no longer obvious in a general contextual embedding space. Arguing that syntax structure indeed exists in the contextual embeddings, Hewitt & Manning (2019) proposed a structural probe to identify the syntax trees buried in the space, and found the evidence of implicit syntax tree in BERT and ELMo. The advantage of contextual embedding over the static counterpart, mainly come from its capability to assign different vectors to the same word, depending on the word sense in the context. Researchers in (Reif et al., 2019) found such a geometric representation of word senses in the BERT model. These papers reveal the existence of linguistic features embedded implicitly in the contextual vector spaces.\nThe geometric properties of contextual embedding space are also investigated and compared with the static embedding space. Mimno & Thompson (2017) found anisotropy when negative sampling is used. In (Ethayarajh, 2019), the authors characterize how vectors are distributed in the contextual space. They found that most vectors occupy in a relatively narrow cone in the space. Pairs of vectors within this cone have large cosines. This phenomenon can be found in most state-of-the-art contextual embedding models. 
In (Gao et al., 2019), the authors named this phenomenon ”representation degeneration”, and attempted to mitigate the problem by introducing a regularization term that minimizes cosine similarities between vectors. In a very recent work, Demeter et al. (2020) suggest there is a structure weakness in the space that leads to bias when using soft-max, as is common with deep language models." }, { "heading": "1.2 MOTIVATION AND CONTRIBUTIONS", "text": "Isotropy often makes the space more effectively utilized and more robust to perturbations (no extreme directions that lead to high condition number). It is counter-intuitive and not clear why those contextual embedding models perform remarkably well on many tasks given their anisotropic embeddings bring all the vectors close together, hard to distinguish one from another. On one hand, it is widely believed that contextual embeddings encode the relevant linguistic information (e.g. (Reif et al., 2019)), but on the other hand, it is also widely believed that the contextual space is anisotropic that representations become degenerated (e.g. (Mimno & Thompson, 2017), (Gao et al., 2019), (Ethayarajh, 2019)). These motivate us to find a reasonable understanding that bridges this gap.\nThis paper is similar in spirit to (Mu & Viswanath, 2018), but different in three aspects. First, we generalize their work on traditional static embeddings to more modern contextual embeddings. Second, we introduce clustering methods to isolate the space, whereas they used PCA to remove dominant dimensions (that tend to dominate the variance). Finally, we identify low dimensional manifolds in the space, and introduce an alternative approach (LID) to characterize local subspaces.\nKey Contributions: This paper takes a deeper look into the contextual embedding spaces of popular pre-trained models. It identifies the following facts that were misunderstood or not known before: 1) We find isotropy within clusters in the contextual embedding space, in contrast to previous reports of anisotropy (caused by misleading isolated clusters). We introduce clustering and center shifting to reveal the isotropy, and show more consistent layer-wise behavior across models. 2) We find a Swiss-Roll manifold in GPT/GPT2 embeddings, but not in BERT/DistilBERT embeddings. The manifold is related to word frequency, suggesting a difference in how models evolve as they see more data. We use approximate Local Intrinsic Dimension (LID) to characterize the manifold, and find contextual embedding models, including all BERT, GPT families and ELMo, often have small LIDs. The small LIDs can be viewed as the local anisotropy of the space. The code for this paper could be found at https://github.com/TideDancer/IsotropyContxt." }, { "heading": "2 ANALYSIS SETTINGS", "text": "" }, { "heading": "2.1 MODELS AND DATASETS", "text": "In this paper, we consider popular pre-trained contextual embedding models, including BERT, DistilBERT (Sanh et al., 2019) (or denoted as D-BERT in the rest of the paper), GPT, GPT2 (Radford et al., 2019) and ELMo. For the BERT and GPT families, we perform our evaluations on the pretrained uncased base models from Huggingface (https://huggingface.co/transformers/index.html#). The pre-trained ELMo model is from AllenNLP (https://docs.allennlp.org/v1.0.0/). BERT and DBERT are non-causal models because of their attention mechanism, where tokens can attend to any token in the input, regardless of their relative positions. 
In contrast, GPT and GPT2 are causal models because attention is limited to the tokens previously seen in the input.\n\nDifferent models achieve contextual embedding in different ways. For instance, BERT adds positional embeddings to the token embeddings, while ELMo performs vector concatenation. Most models start with an initial layer that maps token ids to vectors. This paper is not concerned with that lookup table layer, and only focuses on the layers after that. The base BERT, GPT and GPT2 models have 12 layers of interest, indexed from 0 to 11, while D-BERT has 6 layers and ELMo has two.\n\nWe use the Penn Tree Bank (PTB) (Marcus et al., 1993) and WikiText-2 (Merity et al., 2016) datasets. PTB has 0.88 million words and WikiText-2 has 2 million. Both are standard datasets for language modeling. In the rest of the paper, we report on PTB since we see similar results with both datasets. Details of the WikiText-2 analysis can be found in the Appendix." }, { "heading": "2.2 NOTATION", "text": "For each position in a corpus, we have a word. Words are converted into tokens, using the appropriate tokenizer for the model. Tokenizers may split some words into subwords; therefore, the number of obtained tokens (denoted as n) can exceed the number of words in the corpus. PTB, for example, contains 0.88 million words, but has n = 1.2 million tokens when processed by BERT’s tokenizer. Let V be the vocabulary, a set of distinct tokens. We call any element of the vocabulary V a type. For example, BERT has a vocabulary of roughly 30,000 types. We may use “word” and “type” interchangeably for ease of reading. We denote the i-th type in V as ti. Let Φ(ti) = {φ1(ti), φ2(ti), . . .} be the set of all embedding instances of ti (note that different contexts in the corpus yield different embeddings of ti). By construction, $\sum_t |\Phi(t)| = n$. We define the inter-type cosine similarity as\n$S_{\mathrm{inter}} \triangleq \mathbb{E}_{i \neq j}\left[\cos\left(\phi(t_i), \phi(t_j)\right)\right]$ (1)\nwhere φ(ti) is one random sample from Φ(ti), and the same for φ(tj) ∈ Φ(tj). The expectation is taken over all pairs of different types. Similarly, we define the intra-type cosine similarity as\n$S_{\mathrm{intra}} \triangleq \mathbb{E}_{i}\left[\mathbb{E}_{k \neq l}\left[\cos\left(\phi_k(t_i), \phi_l(t_i)\right)\right]\right]$ (2)\nwhere the inner expectation is over different embeddings φ(ti) of the same type ti, and the outer expectation is over all types. Both Sinter and Sintra take values between −1 and 1. Note that for i.i.d. Gaussian random samples x, y, the expected cosine similarity E[cos(x, y)] = 0. A cosine value closer to 0 often indicates strong isotropy.\n\nClearly, the inter-type metric describes the similarity between different types, while the intra-type one measures similarity between embedding instances of the same type. Our definitions of Sinter and Sintra are similar to the measures used in Ethayarajh (2019), but at the corpus level. Note that some types are more frequent than others, especially under a Zipfian distribution (Piantadosi, 2014), and therefore the size of Φ(t) varies dramatically with the frequency of type t." }, { "heading": "2.3 AN INITIAL LOOK AT ANISOTROPY", "text": "Inspired by Ethayarajh (2019), we follow their procedure and take a first look at the anisotropy identified by Mimno & Thompson (2017) and Ethayarajh (2019) in the contextual embedding space.\n\nFigure 1 shows strong anisotropy effects in a number of models. These findings are consistent with Ethayarajh (2019), though we use slightly different metrics. The plots show the expected cosine (Sinter and Sintra) as a function of layer. 
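To make these corpus-level measures concrete, here is a minimal NumPy sketch of how Sinter and Sintra can be estimated by random sampling. It is our own illustration, not the authors' code; `embeddings` is assumed to be a dict mapping each type to an array of its contextual embedding instances for one layer, and the sample sizes are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def estimate_s_inter(embeddings, num_pairs=100_000, seed=0):
    """Mean cosine between random instances of two different types (Eq. 1)."""
    rng = np.random.default_rng(seed)
    types = list(embeddings)
    vals = []
    for _ in range(num_pairs):
        i, j = rng.choice(len(types), size=2, replace=False)
        u = embeddings[types[i]][rng.integers(len(embeddings[types[i]]))]
        v = embeddings[types[j]][rng.integers(len(embeddings[types[j]]))]
        vals.append(cosine(u, v))
    return float(np.mean(vals))

def estimate_s_intra(embeddings, max_instances=1000, seed=0):
    """Mean pairwise cosine among instances of the same type (Eq. 2)."""
    rng = np.random.default_rng(seed)
    per_type = []
    for vecs in embeddings.values():
        if len(vecs) < 2:
            continue
        idx = rng.permutation(len(vecs))[:max_instances]   # cap frequent types
        X = vecs[idx] / np.linalg.norm(vecs[idx], axis=1, keepdims=True)
        sims = X @ X.T                                      # all pairwise cosines
        k = len(idx)
        per_type.append((sims.sum() - k) / (k * (k - 1)))   # exclude the diagonal
    return float(np.mean(per_type))
```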
For efficiency, we approximate Sintra by imposing a limit of 1,000 samples for frequent types t with |Φ(t)| > 1000. From the figure we can see the following:\n• Both Sinter and Sintra are high (≫ 0) across almost all the layers and all the models. In particular, as reported in Ethayarajh (2019), GPT2 is relatively more anisotropic.\n• Sinter tends to increase with layer, in contrast with Sintra, which in general decreases but with fluctuations. This means that embeddings for different types are moving closer to one another at deeper layers, while embeddings for the same type’s instances are spreading away.\n• The last layer is often special. Note that the last layer has smaller cosines than the second-to-last in most cases, with the notable exception of GPT2.\nIn summary, we observe large cosines (across layers/models), especially for the GPT2 model. When cosines are close to 1, embeddings lie in a subspace defined by a very narrow cone (Ethayarajh, 2019). One might expect embeddings to be more effective if they took advantage of a larger subspace. Are these models missing an opportunity to gain the benefits of isotropy (Mu & Viswanath, 2018)? We answer this question in the following sections." }, { "heading": "3 CLUSTERS IN THE EMBEDDING SPACE", "text": "" }, { "heading": "3.1 EFFECTIVE DIMENSIONS", "text": "There are m = 768 embedding dimensions for BERT, D-BERT, GPT and GPT2, and m = 1024 dimensions for ELMo. We perform PCA to reduce the number of dimensions from m down to k. For each layer of each model, we start with the data matrix M ∈ R^{n×m}, where n is the number of input tokens (n = 1.2M for the PTB dataset), and m is the original number of dimensions. After PCA, we end up with a smaller matrix M̂ ∈ R^{n×k}. Let the explained variance ratio be $r_k = \sum_{i=0}^{k-1}\sigma_i \big/ \sum_{i=0}^{m-1}\sigma_i$, where σi is the i-th largest eigenvalue of M’s covariance matrix. In this way, we define the ε-effective-dimension to be $d(\epsilon) \triangleq \arg\min_k \{\,k : r_k \ge \epsilon\,\}$. For example, d(0.8) = 2 means that 2 dimensions capture 80% of the variance. There is a direct connection between d and isotropy: a larger d often implies more isotropy, as the data spreads in multiple dimensions.\n\nTable 1 reports d(0.8) for different layers and models. It is surprising that GPT2 has so few effective dimensions; in particular, d(0.8) = 1 for layers 2 to 6. The surprisingly small effective dimensionality is another way of saying that GPT2 vectors fall in a narrow cone, and consequently, their pairwise cosines are large. If all the vectors were to lie on a 1-D line, all the cosines would be 1, and there would be hardly any model capacity. These observations motivate us to look deeper into the embedding space." }, { "heading": "3.2 ISOLATED CLUSTERS", "text": "By performing PCA to project the original data into a 3-D view, we can visualize GPT2’s layer 6 embedding space in Figure 2a. The three axes refer to the first three principal components, which account for 82.8% of the total variance. All explained variance ratios will be reported throughout the rest of the paper. The axis values are raw coordinates after PCA. In Figure 2a, there are two disconnected islands that are far away from each other. Note that the first dimension’s coordinate values span from 0 to 3000, significantly wider than the other 2 dimensions. In fact this first principal dimension dominates the total variance. The left island is bigger than the one on the right. 
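Both the ε-effective-dimension of Section 3.1 and the 3-D views just described come from the same PCA of the embedding matrix. The following short scikit-learn sketch is our own illustration, with `M` standing for one layer's n × m token-embedding matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

def effective_dimension(M, eps=0.8):
    """Smallest k such that the top-k principal components explain >= eps of the variance."""
    pca = PCA().fit(M)                                   # full decomposition
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, eps) + 1)     # first k with r_k >= eps

def project_3d(M):
    """First three principal components, as used for the 3-D visualizations."""
    pca = PCA(n_components=3)
    coords = pca.fit_transform(M)
    return coords, pca.explained_variance_ratio_.sum()   # fraction of variance captured
```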
The fact that the two islands are so well separated by the first principal component suggests that classifying points by island membership accounts for much of the variance. This two-island property is exhibited in layers 2 through 10 for GPT2. The two islands merge into a single large cluster in the last layer.\nWe observe similar clustering behavior for all the models across all the layers, though the separations are less distinct, as illustrated in the other panels of Figure 2. This is also consistent with Table 1: the less separation, the higher the d(ε) values. For GPT2, we had hoped to find that some types are associated with one cluster and other types are associated with the other cluster, but that is not verified in our experiments. Please refer to the supplementary material for visualizations of all layers in all the models." }, { "heading": "3.3 CLUSTERING", "text": "Previous literature estimated the space isotropy on pairs of arbitrary tokens, which could reside in two disconnected clusters. But given that the variance is dominated by distances between clusters, such estimation would be biased by the inter-cluster distances. It is more meaningful to consider a per-cluster investigation rather than a global estimate.\nWe start by performing clustering on the embedding space. There are many methods for clustering. We chose K-Means (https://scikit-learn.org/stable/modules/classes.html#), because it is reasonably fast for large inputs (n = 1.2 million vectors) in high (m ≥ 768) dimensions. The DBSCAN algorithm (Ester et al., 1996) could be an alternative as it is density-based, but it only works on small datasets. We use the Silhouette method (Rousseeuw, 1987) to determine the number of clusters, |C|. After running K-Means, each point p (one of the n vectors in M) is assigned to one of the |C| clusters. For a data point p assigned to cluster c ∈ C, calculate the following:\n$a_p = \frac{1}{|c|-1}\sum_{q \in c,\, q \neq p} \mathrm{dist}(p, q)$; $b_p = \min_{\tilde{c} \neq c} \frac{1}{|\tilde{c}|}\sum_{q \in \tilde{c}} \mathrm{dist}(p, q)$; $s_p = \frac{b_p - a_p}{\max(a_p, b_p)}$ if $|c| > 1$, and $s_p = 0$ otherwise,\nwhere ap is the mean distance between p and the other points in the same cluster; bp is the minimum (over c̃) mean distance from p to the points of another cluster c̃; and sp is the Silhouette score for point p ∈ c. sp takes values in [−1, 1]; the higher sp, the better the assignment of p to its cluster. Better choices of |C| lead to better values of sp (and better clustering). We define the Maximum-Mean-Silhouette (MMS) score for the embedding space as $\mathrm{MMS} \triangleq \max_{|C|} \mathbb{E}_p[s_p]$, where the maximum is over different |C| values for K-Means. Since it is not feasible to evaluate all choices of |C| ∈ [1, n], we consider |C| ∈ [1, 15]. The expectation Ep[sp] (the mean Silhouette score) is estimated from 20,000 sample vectors in M. We select the |C| that yields the MMS. The MMS values provide a systematic way to describe how the clusters are distributed in the space. If the clusters are very distinct and well separated, this yields a higher MMS. On the other hand, if clusters overlap and blur together, the MMS score will be low. Note that if MMS < 0.1, we set |C| to 1, as the Silhouette score does not show significant evidence of more clusters.\nTable 2: Number of clusters |C|\nLayer BERT D-BERT GPT GPT2 ELMo\n0 6 7 1 2 2\n1 6 10 2 2 2\n2 4 15 2 2\n3 4 14 2 2\n4 3 10 2 2\n5 14 2 2 2 2\n6 6 2 2\n7 2 2 2\n8 2 2 2\n9 11 1 2\n10 2 1 2\n11 9 1 2\nFigure 3: The MMS score (y-axis) as a function of layer id (x-axis) for all the models. GPT2 has significantly higher MMS scores than the other models from layer 1 to layer 11. This means the cluster effects are more severe in GPT2.
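The cluster-number selection described above reduces to a small loop over K-Means fits. The following scikit-learn sketch is our own illustration of the Maximum-Mean-Silhouette criterion, with the search range, silhouette sample size, and 0.1 threshold taken from the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def select_num_clusters(M, max_clusters=15, sample_size=20_000, seed=0):
    """Pick |C| by the Maximum-Mean-Silhouette criterion; fall back to 1 if MMS < 0.1."""
    best_k, best_score, best_labels = 1, -1.0, np.zeros(len(M), dtype=int)
    for k in range(2, max_clusters + 1):
        km = KMeans(n_clusters=k, random_state=seed).fit(M)
        score = silhouette_score(M, km.labels_, sample_size=sample_size, random_state=seed)
        if score > best_score:
            best_k, best_score, best_labels = k, score, km.labels_
    if best_score < 0.1:              # no significant evidence of more than one cluster
        return 1, np.zeros(len(M), dtype=int)
    return best_k, best_labels
```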
Table 2 makes it clear that clustering plays an important role in most layers of most models. Some models (BERT and D-BERT) have more clusters, and some have fewer (GPT, GPT2, ELMo). This dichotomy of models is also reflected in Figure 2.\nMaximum-Mean-Silhouette scores are shown in Figure 3. There are significantly higher MMS values for GPT2, starting from the 2nd layer. Recall that in Figure 2a we showed that two far-away islands exist in the space and their distance dominates the variance. In Figure 3, the high MMS scores also verify that. Another interesting observation is that the causal models GPT, GPT2 and ELMo all have higher MMS in their middle layers but lower MMS at the end. This means that the clusters in their initial and final layers tend to merge. In contrast, BERT and DistilBERT have increasing MMS in deeper layers, meaning that the clusters in their embeddings become clearer in deeper layers." }, { "heading": "3.4 ISOTROPY IN CENTERED SPACE WITHIN CLUSTERS", "text": "As suggested by Mu & Viswanath (2018), the embedding space should be measured after shifting the mean to the origin. We subtract the mean for each cluster, and calculate the adjusted Sinter. Assuming we have a total of |C| clusters, let Φc(t) = {φc1(t), φc2(t), . . .} be the set of type t’s embeddings in cluster c ∈ C, and φc(t) be one random sample in Φc(t). Define the adjusted similarity:\n$S'_{\mathrm{inter}} \triangleq \mathbb{E}_{c}\left[\mathbb{E}_{i \neq j}\left[\cos\left(\bar{\phi}^{c}(t_i), \bar{\phi}^{c}(t_j)\right)\right]\right]$, where $\bar{\phi}^{c}(t) = \phi^{c}(t) - \mathbb{E}_{\phi^{c}}\left[\phi^{c}(t)\right]$ (3)\nHere Ec is the average over different clusters, and φ̄c(t) is the original embedding shifted by the mean (i.e. with the mean subtracted), where the mean is taken over the samples in cluster c. Similarly we define\n$S'_{\mathrm{intra}} \triangleq \mathbb{E}_{c}\left[\mathbb{E}_{i}\left[\mathbb{E}_{k \neq l}\left[\cos\left(\bar{\phi}^{c}_{k}(t_i), \bar{\phi}^{c}_{l}(t_i)\right)\right]\right]\right]$ (4)\nFigure 4 illustrates the adjusted cosine similarities S′inter and S′intra. It reveals that:\n• For the adjusted inter-type cosine (the left plot), all models have consistently near-zero S′inter. This means nearly perfect isotropy exists within each cluster, in each layer of all the models. The last layer of GPT2 and BERT has slightly worse isotropic behavior; nevertheless, general inter-type isotropy holds across all layers. This shows that the embedding vectors remain distinguishable.\n• The general decreasing trend of the intra-type cosine (the right plot) shows that the multiple instances of the same type/word slowly spread apart over the layers. This is consistent with the un-centered intra-type cosine shown in Figure 1." }, { "heading": "4 LOW-DIMENSIONAL MANIFOLDS", "text": "" }, { "heading": "4.1 SWISS ROLL MANIFOLD OF GPT/GPT2", "text": "While BERT and D-BERT tend to distribute embeddings along more dimensions, GPT and GPT2 embed tokens in low-dimensional manifolds in their contextual embedding spaces. More specifically, we discover that most of the tokens are embedded on a spiral band, and that band gets thicker in the later layers, thereafter forming a Swiss-Roll-shaped surface.\nFigures 5a and 5b show the 2-D front view of the manifold in GPT and GPT2. Figure 5a zooms into the large cluster illustrated in Figure 2a (the left one), and discards the smaller one (the right one).\n3-D plots are shown in Figures 5c and 5d to demonstrate two manifolds, a band-shaped manifold and a Swiss-Roll-shaped manifold. These plots were computed on the PTB dataset. Similar results were obtained on WikiText-2 (see the supplementary material). 
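Returning briefly to the centering behind Eq. (3): given cluster labels such as those produced by the previous sketch, the adjustment amounts to subtracting each cluster's mean before taking cosines between different types. The NumPy sketch below is our own illustration; `labels` are K-Means assignments and `token_types` holds the type of each row of `M`.

```python
import numpy as np

def adjusted_inter_cosine(M, labels, token_types, num_pairs=100_000, seed=0):
    """Approximate S'_inter: per-cluster mean-centering, then cosines across different types."""
    rng = np.random.default_rng(seed)
    centered = M.astype(np.float64).copy()
    clusters = np.unique(labels)
    for c in clusters:
        centered[labels == c] -= centered[labels == c].mean(axis=0)  # shift cluster mean to origin
    per_cluster = []
    for c in clusters:
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue
        vals = []
        for _ in range(num_pairs // len(clusters)):
            i, j = rng.choice(idx, size=2, replace=False)
            if token_types[i] == token_types[j]:
                continue                                   # keep only pairs of different types
            u, v = centered[i], centered[j]
            vals.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        if vals:
            per_cluster.append(np.mean(vals))
    return float(np.mean(per_cluster))                     # outer average over clusters (E_c)
```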
Figure 6 tracks the progression of a narrow band into a Swiss Roll. The Swiss Roll becomes taller and taller with deeper and deeper layers." }, { "heading": "4.2 TOKENS IN THE SPACE", "text": "To verify the manifold structure in the GPT family, we study the token embeddings in the space. It is believed that similar embeddings (e.g. the embeddings for two instances of the same word) tend to stay close together in a Euclidean space, as they should have high cosine similarities. Figure 7 drills down into the embeddings for six frequent words: three punctuation symbols (“\”, “&”, “.”) and three common words (“the”, “first”, “man”). Each panel uses four colors: three colors (black, red, green) for the three words of interest, plus gold for all the other tokens.\nAs shown in Figures 7a and 7b, the BERT model indeed groups similar embeddings into small regions in the space (the red, black and green clusters). However, the GPT models assign similar embeddings along the manifold we observed before. In Figures 7c and 7d, the embeddings for these tokens occupy a spiral band that crosses almost the entire space. This does not comply with Euclidean geometry, as points in such a spiral band would not have high cosine similarity. A Riemannian metric must exist such that the manifold has larger distances between two spiral bands, but smaller distances along the band. Note that the 3-D plots are obtained using PCA, so no density-based or non-linear reduction is involved. Therefore, the manifold structures in the GPT embedding spaces are verified." }, { "heading": "4.3 WORD FREQUENCY", "text": "Another key finding is that all the models map highly frequent words/types to specific regions in the embedding space, rather than spreading them out over the entire space. In Figure 8, embeddings (8a, 8c) and corresponding word frequencies (8b, 8d) of GPT’s layers 8 and 9 are shown. Darker red denotes higher frequency and blue denotes lower frequency. The numbers on the colorbar show the number of occurrences (of a particular word/type).\nFigures 8a and 8c are obtained after PCA, selecting the two most significant dimensions. From GPT layer 8 to layer 9, as the Swiss Roll becomes taller, more variance is accounted for along the height of the Swiss Roll. Thus, the perspective switches from a front view to a side view when moving to layer 9.\nFigures 8b and 8d show that the most frequent words appear at the head of the Swiss Roll, followed by bands of less and less frequent words. The least frequent words appear at the far end of the Swiss Roll. This pattern suggests the model distinguishes more frequent from less frequent words. As the model finds more and more rare words, it appends them at the end of the Swiss Roll." }, { "heading": "4.4 MANIFOLD LOCAL INTRINSIC DIMENSION", "text": "Although the original space dimension is 768 (1024 for ELMo), the manifold we observed has a lower intrinsic dimension. This means a data point on the manifold has fewer degrees of freedom to move around. For example, on a Swiss Roll in a 3-D space, any point has only 2-D freedom, thus the intrinsic dimension is only 2. Recent research on the intrinsic dimension of deep networks can be found in (Ansuini et al., 2019). In this section, we adopt the Local Intrinsic Dimension (LID), which estimates dimension locally with respect to a reference point. LID was introduced by Houle (2013) and has recently been used to characterize deep learning models, e.g. (Ma et al., 2018). 
The LID is often derived using expansion models (Houle et al., 2012), which try to obtain the local dimension in the vicinity of a reference point from the growth (expansion) characteristics. To illustrate this, we borrow an example from Ma et al. (2018). Let γ be the radius of an m-D ball in Euclidean space and denote its volume by ν; then the volume grows as $\gamma^m$, i.e. $\nu_2/\nu_1 = (\gamma_2/\gamma_1)^m$, from which we can infer the local dimension m̃ by $\tilde{m} = \log(\nu_2/\nu_1) / \log(\gamma_2/\gamma_1)$.\nAccurately computing LID is a hard problem that requires a tremendous number of data samples and enough density around the reference point, so estimators that work with fewer samples have been studied over the past decade. One efficient estimator was proposed by Amsaleg et al. (2015). This technique relies on K nearest neighbor search (K-NN). For a reference point p, denote the set of its K nearest neighbor points as Ψp = {q1, . . . , qK}. Then the LID estimate is computed as $\widetilde{\mathrm{LID}}(p) = -\big(\frac{1}{K}\sum_{i=1}^{K} \log \frac{\mathrm{dist}(p, q_i)}{\max_i \mathrm{dist}(p, q_i)}\big)^{-1}$, where the term inside the log is the ratio of the distance from p to a neighbor over the maximum distance among them. In our analysis, we use an efficient nearest neighbor computation package, FAISS (Johnson et al., 2017) (https://github.com/facebookresearch/faiss), to perform the K-NN. We set K = 100, the same as in (Aumüller & Ceccarello, 2019). The ℓ2 distance is used, i.e. dist(p, q) = ‖p − q‖2. We report the mean LID over all samples p, i.e. $\mathbb{E}_p[\widetilde{\mathrm{LID}}(p)]$, in Figure 9.\nAs shown in Figure 9, the mean LIDs for all the models in all the layers are below 12. The small mean LID values reveal that the manifold’s intrinsic dimension is relatively low, especially considering that this is a 768-D (1024 for ELMo) embedding space. Since ELMo’s 1024-D space is larger than the other models’ 768-D spaces, its LID is also slightly higher, as shown in the figure. The existence of a low-dimensional embedding is also suggested in (Reif et al., 2019) when they study the BERT embedding geometry.\nIn all the contextual embedding layers, there is a clear trend of increasing LID values. In Figure 9, we can also see a nearly-linear relationship between layer id and LID. With deeper layers in the network, the manifold diffuses and slowly loses concentration. This leads to data samples spreading, consistent with Figure 4 (recall that intra-type cosines decrease with depth). Note that as the layers go deeper, each token embedding collects information from its context by summing embeddings (followed by non-linear transforms). This could explain the spreading/expansion of the local subspace, and therefore the increase of LID in deeper layers.\nTable 3 compares LIDs for static and contextual embeddings. The table reports results for three static embeddings: GloVe/GloVe-2M (Pennington et al., 2014) and GNEWS (Mikolov et al., 2013a). Results for static embedding LIDs are based on Aumüller & Ceccarello (2019). Following Aumüller & Ceccarello (2019), we use cosine distance here: $\mathrm{dist}'(p, q) = 1 - \cos(p, q) = 1 - \frac{\langle p, q \rangle}{\|p\|_2 \|q\|_2}$. Note that estimates for LID using cosines are very close to the estimates using ℓ2 distances. Table 3 reports averages of LIDs over each model’s layers. Even though GloVe (Pennington et al., 2014) in Table 3 has far fewer embedding dimensions (100-D compared with BERT’s 768-D), its LID is still higher than that of all the contextual embedding models. 
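The estimator above is only a few lines once nearest neighbors are computed. The sketch below is our own illustration using scikit-learn's exact K-NN rather than FAISS; note that each query point is returned as its own nearest neighbor and must be dropped before forming the distance ratios.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mean_lid(M, k=100):
    """Mean of the MLE local intrinsic dimension estimate over all points in M."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(M)   # +1: each point finds itself first
    dists, _ = nn.kneighbors(M)                       # Euclidean distances, sorted ascending
    dists = dists[:, 1:]                              # drop the zero self-distance
    ratios = dists / dists[:, -1:]                    # dist(p, q_i) / max_i dist(p, q_i)
    lids = -1.0 / np.mean(np.log(np.clip(ratios, 1e-12, None)), axis=1)
    return float(np.mean(lids))
```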
From the table we find that static embedding spaces generally have higher LID than the contextual ones. This means that the data points are more isotropic in the static embeddings, possibly due to their large vocabularies." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "Previous works have reported strong anisotropy in deep LMs, which makes it hard to explain the superior performance achieved by these models. We suggest that the anisotropy is a global view, largely misled by distinct clusters residing in the space. Our analysis shows that it is more constructive to isolate and transform the space to measure the isotropy. From this view, within the clusters, the spaces of different models all have nearly perfect isotropy, which could explain the large model capacity. In addition, we investigate the space geometry for different models. Our visualization demonstrates a low-dimensional Swiss-Roll manifold for GPT and GPT2 embeddings, which has not been reported before. The tokens and word frequencies are presented to qualitatively show the manifold structure. We propose to use the approximate LID to quantitatively measure the local subspace, and compare it with static embedding spaces. The results show smaller LID values for the contextual embedding models, which can be seen as a local anisotropy in the space. We hope this line of research brings a comprehensive geometric view of contextual embedding spaces and provides insights into how the embeddings are affected by attention, compression, multilingualism, etc., so that model performance can be further improved based on these findings." }, { "heading": "SUPPLEMENTARY: FULL RESULTS ON PTB AND WIKITEXT-2 DATASETS", "text": "" }, { "heading": "A RESULTS ON WIKITEXT-2 DATASET", "text": "" }, { "heading": "A.1 THE UNADJUSTED INTER AND INTRA COSINE SIMILARITY", "text": "Note that “dist” in the following legends represents the DistilBERT model.\n(a) Inter-type cosine similarity. As layers go deeper, the inter-type cosine goes higher. All models’ last layer behaves slightly differently. (b) Intra-type cosine similarity. The intra-type cosine decreases, showing that the same type’s embedding instances are spreading in deeper layers." }, { "heading": "A.2 THE CENTER-SHIFTED AND CLUSTERED COSINE SIMILARITY", "text": "The inter-type and intra-type cosines are adjusted using the proposed center-shifting and clustering methods. They now reflect the isotropy in almost all layers of all models." 
}, { "heading": "A.3 THE APPROXIMATE LOCAL INTRINSIC DIMENSIONS", "text": "" }, { "heading": "B FULL VISUALIZATION - PTB DATASET", "text": "" }, { "heading": "B.1 BERT", "text": "(a) BERT layer 0 (b) BERT layer 1 (c) BERT layer 2 (d) BERT layer 3\n(e) BERT layer 4 (f) BERT layer 5 (g) BERT layer 6 (h) BERT layer 7\n(i) BERT layer 8 (j) BERT layer 9 (k) BERT layer 10 (l) BERT layer 11" }, { "heading": "B.2 DISTILBERT AND ELMO", "text": "(a) DistilBERT layer 0 (b) DistilBERT layer 1 (c) DistilBERT layer 2 (d) DistilBERT layer 3\n(e) DistilBERT layer 4 (f) DistilBERT layer 5 (g) ELMo layer 0 (h) ELMo layer 1" }, { "heading": "B.3 GPT", "text": "(a) GPT layer 0 (b) GPT layer 1 (c) GPT layer 2 (d) GPT layer 3\n(e) GPT layer 4 (f) GPT layer 5 (g) GPT layer 6 (h) GPT layer 7\n(i) GPT layer 8 (j) GPT layer 9 (k) GPT layer 10 (l) GPT layer 11" }, { "heading": "B.4 GPT2", "text": "(a) GPT2 layer 0 (b) GPT2 layer 1 (c) GPT2 layer 2 (d) GPT2 layer 3\n(e) GPT2 layer 4 (f) GPT2 layer 5 (g) GPT2 layer 6 (h) GPT2 layer 7\n(i) GPT2 layer 8 (j) GPT2 layer 9 (k) GPT2 layer 10 (l) GPT2 layer 11" }, { "heading": "C FULL VISUALIZATION - WIKITEXT-2 DATASET", "text": "" }, { "heading": "C.1 BERT", "text": "(a) BERT layer 0 (b) BERT layer 1 (c) BERT layer 2 (d) BERT layer 3\n(e) BERT layer 4 (f) BERT layer 5 (g) BERT layer 6 (h) BERT layer 7\n(i) BERT layer 8 (j) BERT layer 9 (k) BERT layer 10 (l) BERT layer 11" }, { "heading": "C.2 DISTILBERT", "text": "(a) DistilBERT layer 0 (b) DistilBERT layer 1 (c) DistilBERT layer 2 (d) DistilBERT layer 3\n(e) DistilBERT layer 4 (f) DistilBERT layer 5 (g) ELMo layer 0 (h) ELMo layer 1" }, { "heading": "C.3 GPT", "text": "(a) GPT layer 0 (b) GPT layer 1 (c) GPT layer 2 (d) GPT layer 3\n(e) GPT layer 4 (f) GPT layer 5 (g) GPT layer 6 (h) GPT layer 7\n(i) GPT layer 8 (j) GPT layer 9 (k) GPT layer 10 (l) GPT layer 11" }, { "heading": "C.4 GPT2", "text": "(a) GPT2 layer 0 (b) GPT2 layer 1 (c) GPT2 layer 2 (d) GPT2 layer 3\n(e) GPT2 layer 4 (f) GPT2 layer 5 (g) GPT2 layer 6 (h) GPT2 layer 7\n(i) GPT2 layer 8 (j) GPT2 layer 9 (k) GPT2 layer 10 (l) GPT2 layer 11" }, { "heading": "D ADDITIONAL STUDIES", "text": "" }, { "heading": "D.1 K-MEANS CLUSTERING ACCURACY", "text": "We use K-Means to perform clustering, which raises two issues here. First, K-Means is very sensitive to initialization, different initialization could leads to different clustering results. However, note that in our task, we are not seeking for optimal clustering. Sub-optimal, e.g. treating two overlapping clusters as a big one, is totally fine.\nTo illustrate this, we add another metric, Davies-Boulding (DB) index (Davies & Bouldin, 1979), to show that slightly different K is fine. This DB index is the average similarity between each cluster and its closest cluster. The value closer to 0, the better clustering is done. We still search in [2, 15], and choose K with the minimum DB index (MDB). MDB sometimes gives different K than that by MMS metric. If MDB is > 4, we discard MDB and treat all data as one single cluster. We provide the comparison of selecting K using MMS (left) and MDB (right) here in Table 4. We can see that for less-distinct clusters, e.g. in BERT, two metric could yield different K values, due to merging or splitting. For very separated clusters, e.g. in GPT2, the two metric agrees. We plot the cosines using MDB’s K values, in Figure 21. It is similar to Figure 4, which uses slightly different K from MMS. The values are close to 0 indicating isotropy in the center-shifted clusters. 
This means that the procedure to reveal isotropy, is not sensitive to K in K-Means.\nTable 4: K by MMS(left) vs MDB(right)\nLayer BERT D-BERT GPT GPT2 ELMo\n0 6 14 7 9 1 8 2 5 2 1 1 6 6 10 15 2 2 2 2 2 1 2 4 12 15 5 2 2 2 2 3 4 15 14 11 2 2 2 2 4 3 14 10 2 2 2 2 2 5 14 13 2 4 2 2 2 2 6 6 14 2 2 2 2 7 2 15 2 2 2 2 8 2 7 2 2 2 2 9 11 6 1 2 2 2 10 2 4 1 10 2 2 11 9 3 1 15 2 2\nFigure 21: The adjusted inter-type cosines, computed using K from the criteria of minimizing DB index. The values are still close to 0.\nAnother issue is that K-Means implicitly assumes convex clusters, which often does not hold. In fact, it assumes isotropic convex clusters because we simply use `2 distance. However, density-based clustering such as DBSCAN, is too slow thus cannot handle these datasets (million level). This is a trade-off to use K-Means, and empirical results above show that it is efficient and very useful to distinguish separated clusters." }, { "heading": "D.2 CLUSTERS AND WORDS", "text": "We study the tokens and their relationship to the clusters existed in the contextual embedding spaces. We picked some representative tokens to see how they are distributed. We also study the very unique small cluster in GPT2, and how it connects to the main cluster that is far away. We obtain the following observations:\n• For BERT, high frequent words (e.g. ’the’) stays at one side of the main big cluster, while low-frequent words are at the other side. • For BERT, punctuation are random but occupy distinct islands: ’!’ is a small cluster close to\nthe main island; ’‘’ and ’áre distinct islands far away; ’?’ and some others are on the main cluster. • For GPT2, almost all single letters (a to z) and mid-to-high frequent would occupy both the\nleft (big) and right (small) islands. • For GPT2, we didn’t find any token that only appears in the right small island. It seems the\ntoken in the small island always has mirrors in the left big cluster.\n• For word types, e.g. noun, verb, etc, we didn’t find a clear pattern. We suspect word frequency affects more than categories.\nWe provide a few examples. Figure 22, 23 show BERT layer 3, Figure 24, 25 show GPT2 layer 3.\n(a) Punctuation are random but less concentrated. They also occupy distinct islands.\n(a) Frequent words and infrequent words are on the main cluster, but at two sides. An evidence that words are distributed based on the frequency.\nFigure 23: BERT Layer 3 Words\n(a) Punctuation are random. Some occupy both islands, some do not.\nFigure 24: GPT2 Layer 3 Punctuation\nBased on these observations, we have concluded that frequency plays an important role in the token distributions. High frequent words and low frequent words are often taking opposite sides of the space. This is also revealed in Section 4.3. We are yet not clear what causes this, but we suspect it is related to the training process. During training, high frequent words are updated more times. Also, since they are used in many many different context, they play a role as some shared embedding across context. Similar to the XLM model, the shared embedding are often more isotropic and more concentrated. 
However, this is early hypothesis and due to future research.\n(a) Mid-to-high frequent words often occupy both distinct islands (notice that the right small cluster is also colored), where a roll-shaped alignment can be observed on the larger island.\nFigure 25: GPT2 Layer 3 Words" }, { "heading": "D.3 EMBEDDING OF TRANSLATION LANGUAGE MODEL XLM", "text": "We also perform analysis and visualization on the XLM model (Conneau & Lample, 2019). BERT is mask language model (MLM), GPT is causal language model (CLM), and XLM is translation language model (TLM). We provide visualization of XLM’s 6 layers embeddings here. This is on WikiText-2 dataset.\n(a) XLM layer 0 (b) XLM layer 1 (c) XLM layer 2 (d) XLM layer 3\n(e) XLM layer 4 (f) XLM layer 5\nWe try to establish a systematic view of embedding geometry for different types of deep LMs. We have hypothesis and very preliminary results here. BERT (an MLM) show spreading clusters, but not very distinct. GPT (an CLM) shows highly separated clusters. XLM (an TLM) does not demonstrate clustering effect, and the embedding are centered.\nOne possible explanation for XLM’s behavior, is that this is a multi-lingual model, and the embedding space have to be shared between languages. This is forced during the training process of this translation language models. In that case, a single cluster residing in the center, would be a good shared embedding across languages. However, this is just hypothesis and requires further study on more models." }, { "heading": "D.4 LID ESTIMATION ROBUSTNESS", "text": "We follow (Aumüller & Ceccarello, 2019) to choose K = 100 for K-Nearest-Neighbor (K-NN) search for LID approximation, and make a direct comparison with them. It raises the concern that 100 samples might not be enough to effectively estimate the local dimension. We conduct additional experiments here to select K = 200, 500, 1000, and demonstrate that the LID estimation is robust. They provide similar LID estimates across all layers, in all the models. Though using more samples indeed obtain very slightly higher values of LID (in Figure 27, we can see a little bit up-shifting from left-most plot to the right-most plot). This is expected, as less number of samples often tends to under-estimate, and over-smoothing of LID. Nevertheless, the LID is still much smaller than the original dimension 768, so using 100 samples is a good trade-off to efficiently approximate LID.\nAs layer goes deeper, the LID increases. In other words, the local space dimension expands, at a cost of losing density. For example, the spiral band (1-D) in GPT’s front layer, becomes a Swiss Roll (2-D), and the roll surface get thickness (3-D), as layer increases. But we are not clear about the reasons yet, only suspect that data is spreading as more context info is added in later layers (the embedding for a token in deeper layer is based on summation of all embeddings in the context, due to attention). This is due to future study." }, { "heading": "D.5 ABLATION ANALYSIS ON CLUSTERING", "text": "To better study the clustering effect, we conduct experiment that computes the inter-type cosines, on clusterd-only embeddings and clustered plus center-shifted embeddings. The following figure shows GPT2’s cosine on original embeddings without adjustment (blue), the clustering-only embeddings (orange), and full (clustering + centering) adjustment (green).\n(a) GPT2 on PTB (b) GPT2 on WikiText-2\nIn the original embedding without adjustment, we see inconsistent behavior in the last layers. 
However, if we perform clustering and measure S_inter within the clusters (the orange curve), we see much more consistent behavior across layers (a flatter curve). This indicates that the clustering effect exists in all the layers, which is also verified by the layer-wise visualizations in Appendices B.4 and C.4.\nMeanwhile, the large cosine values in the orange curve are expected: cosines are now only computed within clusters, and those clusters are not at the origin. The higher the values here, the more concentrated the clusters are. This indicates that after clustering, the subspaces within each cluster are now consistent across all the layers. Finally, we shift those clusters to the origin and get the green curve (values near 0), indicating isotropic cluster shapes." }, { "heading": "D.6 POSITIONAL ENCODING IN THE GEOMETRY", "text": "It is very interesting to investigate whether the positional encoding affects the geometry of the contextual embedding spaces. In particular, since GPT/GPT2 have a unique Swiss-Roll-shaped manifold that is not observed in other models, we look at how this manifold is related to the positional encoding in GPT2. Note that we truncate the whole PTB text into 512-length segments and feed those segments into the models. The positional encoding is applied to each 512-length segment. We pick a few punctuation marks and words, and draw them in the space labeled by their relative positions in their corresponding segments. The position ID ranges from 0 to 511.\nWe select four punctuation marks, “, ’ & $”, and four words, “the first super man”, and draw them in Figures 29 and 30. The color bar on the side indicates the relative position ID within the segment. Darker colors indicate smaller IDs and lighter colors indicate larger IDs. Clearly, for both punctuation and words, the center of the Swiss Roll corresponds to lower position IDs, while the other end of the manifold corresponds to high IDs. Also, the distribution is monotonic: from the center to the far end, the position ID increases. This suggests that the positional encoding is indeed highly correlated with the Swiss-Roll manifold for GPT models. The reason for this is deferred to future study.\nNote that this finding is consistent with that reported in (Reif et al., 2019), where it was found that token position matters in the BERT embedding geometry (tokens take in all neighbors’ information indiscriminately, rather than only attending up to their semantic boundaries). We also study the contextual/semantic influence on the embeddings in the next subsection." }, { "heading": "D.7 CONTEXT IN THE GEOMETRY", "text": "We also look at how the context information influences the geometry. Analyzing the context is more involved, so we pick a few examples and look at their contexts and corresponding positions in the embedding spaces. In particular, we choose the common polysemous words “like” and “interest” as two examples. The word “like” often has two different use cases: 1. to favor; 2. similar to. There are also some fixed phrases such as “would like”. The word “interest” has two senses as well: 1. being interested in something; 2. the monetary sense.\nWe identify the target word token (“like” or “interest”) and then print out the 5 tokens before and after the target as its context, for illustration in Figure 31. From the figure, we are not able to identify a clear pattern relating word sense to position in the embedding space. However, this is based only on manually checking a few samples; a full statistical analysis should be carried out in future work." } ]
2021
null
SP:9e4a85fa5d76f345b5a38b6f86710a53e1d08503
[ "This paper critically re-examines research in domain generalisation (DG), ie building models that robustly generalise to out-of-distribution data. It observes that existing methods are hard to compare, in particular due to unclear hyper-parameter and model selection criteria. It introduces a common benchmark suite including a well designed model selection procedure, and re-evaluates existing methods on this suite. The results show that under such controlled evaluation, the benefit of existing DG methods over vanilla empirical risk minimisation (ERM) largely disappear. This raises the concern that existing DG methods might be over-tuned and hard to replicate. By releasing the controlled benchmark suite, future research progress can be more reliably measured. " ]
The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist, inconsistencies in experimental conditions—datasets, network architectures, and model selection criteria—render fair comparisons difficult. The goal of this paper is to understand how useful domain generalization algorithms are in realistic settings. As a first step, we realize that model selection is non-trivial for domain generalization tasks, and we argue that algorithms without a model selection criterion remain incomplete. Next we implement DOMAINBED, a testbed for domain generalization including seven benchmarks, fourteen algorithms, and three model selection criteria. When conducting extensive experiments using DOMAINBED we find that when carefully implemented and tuned, ERM outperforms the state-of-the-art in terms of average performance. Furthermore, no algorithm included in DOMAINBED outperforms ERM by more than one point when evaluated under the same experimental conditions. We hope that the release of DOMAINBED, alongside contributions from fellow researchers, will streamline reproducible and rigorous advances in domain generalization.
[ { "affiliations": [], "name": "Ishaan Gulrajani" }, { "affiliations": [], "name": "David Lopez-Paz" } ]
[ { "authors": [ "Kartik Ahuja", "Karthikeyan Shanmugam", "Kush Varshney", "Amit Dhurandhar" ], "title": "Invariant risk minimization", "venue": "games. arXiv,", "year": 2020 }, { "authors": [ "Kei Akuzawa", "Yusuke Iwasawa", "Yutaka Matsuo" ], "title": "Adversarial invariant feature learning with accuracy constraint for domain generalization", "venue": null, "year": 2019 }, { "authors": [ "Ehab A AlBadawy", "Ashirbani Saha", "Maciej A Mazurowski" ], "title": "Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing", "venue": "Medical physics,", "year": 2018 }, { "authors": [ "Isabela Albuquerque", "João Monteiro", "Tiago H Falk", "Ioannis Mitliagkas" ], "title": "Adversarial targetinvariant representation learning for domain generalization", "venue": null, "year": 2019 }, { "authors": [ "Isabela Albuquerque", "Nikhil Naik", "Junnan Li", "Nitish Keskar", "Richard Socher" ], "title": "Improving out-of-distribution generalization via multi-task self-supervised pretraining", "venue": null, "year": 2020 }, { "authors": [ "Michael A Alcorn", "Qi Li", "Zhitao Gong", "Chengfei Wang", "Long Mai", "Wei-Shinn Ku", "Anh Nguyen" ], "title": "Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects", "venue": null, "year": 2019 }, { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Yogesh Balaji", "Swami Sankaranarayanan", "Rama Chellappa" ], "title": "MetaReg: Towards domain generalization using meta-regularization", "venue": null, "year": 2018 }, { "authors": [ "Sara Beery", "Grant Van Horn", "Pietro Perona" ], "title": "Recognition in terra incognita", "venue": null, "year": 2018 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": null, "year": 2012 }, { "authors": [ "Gilles Blanchard", "Gyemin Lee", "Clayton Scott" ], "title": "Generalizing from several related classification tasks to a new unlabeled sample", "venue": null, "year": 2011 }, { "authors": [ "Gilles Blanchard", "Aniket Anand Deshmukh", "Urun Dogan", "Gyemin Lee", "Clayton Scott" ], "title": "Domain generalization by marginal transfer learning", "venue": null, "year": 2017 }, { "authors": [ "Victor Bouvier", "Philippe Very", "Céline Hudelot", "Clément Chastagnol" ], "title": "Hidden covariate shift: A minimal assumption for domain adaptation", "venue": null, "year": 2019 }, { "authors": [ "Fabio M Carlucci", "Antonio D’Innocente", "Silvia Bucci", "Barbara Caputo", "Tatiana Tommasi" ], "title": "Domain generalization by solving jigsaw puzzles", "venue": null, "year": 2019 }, { "authors": [ "Fabio Maria Carlucci", "Paolo Russo", "Tatiana Tommasi", "Barbara Caputo" ], "title": "Hallucinating agnostic images to generalize across domains. 
ICCVW, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Daniel C Castro", "Ian Walker", "Ben Glocker" ], "title": "Causality matters in medical imaging", "venue": null, "year": 2019 }, { "authors": [ "Prithvijit Chattopadhyay", "Yogesh Balaji", "Judy Hoffman" ], "title": "Learning to balance specificity and invariance for in and out of domain generalization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Dengxin Dai", "Luc Van Gool" ], "title": "Dark model adaptation: Semantic image segmentation from daytime to nighttime", "venue": null, "year": 2018 }, { "authors": [ "Aniket Anand Deshmukh", "Yunwen Lei", "Srinagesh Sharma", "Urun Dogan", "James W Cutler", "Clayton Scott" ], "title": "A generalization error bound for multi-class domain generalization", "venue": null, "year": 2019 }, { "authors": [ "Zhengming Ding", "Yun Fu" ], "title": "Deep domain generalization with structured low-rank constraint", "venue": "IEEE Transactions on Image Processing,", "year": 2017 }, { "authors": [ "Qi Dou", "Daniel Coelho de Castro", "Konstantinos Kamnitsas", "Ben Glocker" ], "title": "Domain generalization via model-agnostic learning of semantic features", "venue": null, "year": 2019 }, { "authors": [ "Cynthia Dwork", "Vitaly Feldman", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Aaron Roth" ], "title": "The reusable holdout: Preserving validity in adaptive data analysis", "venue": null, "year": 2015 }, { "authors": [ "Antonio D’Innocente", "Barbara Caputo" ], "title": "Domain generalization with domain-specific aggregation modules", "venue": "German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Chen Fang", "Ye Xu", "Daniel N Rockmore" ], "title": "Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias", "venue": null, "year": 2013 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": null, "year": 2017 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": null, "year": 2016 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": null, "year": 2018 }, { "authors": [ "Muhammad Ghifary", "W Bastiaan Kleijn", "Mengjie Zhang", "David Balduzzi" ], "title": "Domain generalization for object recognition with multi-task autoencoders", "venue": null, "year": 2015 }, { "authors": [ "Muhammad Ghifary", "David Balduzzi", "W Bastiaan Kleijn", "Mengjie Zhang" ], "title": "Scatter component analysis: A unified framework for domain adaptation and domain generalization", "venue": "IEEE TPAMI,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": null, "year": 2014 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": null, "year": 2012 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", 
"venue": null, "year": 2016 }, { "authors": [ "Will D. Heaven" ], "title": "Google’s medical AI was super accurate in a lab. real life was a different story", "venue": "MIT Technology Review,", "year": 2020 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo", "Dawn Song", "Jacob Steinhardt", "Justin Gilmer" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Shoubo Hu", "Kun Zhang", "Zhitang Chen", "Laiwan Chan" ], "title": "Domain generalization via multidomain discriminant analysis", "venue": null, "year": 2019 }, { "authors": [ "Weihua Hu", "Gang Niu", "Issei Sato", "Masashi Sugiyama" ], "title": "Does distributionally robust supervised learning give robust classifiers", "venue": null, "year": 2016 }, { "authors": [ "Zeyi Huang", "Haohan Wang", "Eric P Xing", "Dong Huang" ], "title": "Self-challenging improves cross-domain generalization", "venue": "arXiv preprint arXiv:2007.02454,", "year": 2020 }, { "authors": [ "Maximilian Ilse", "Jakub M Tomczak", "Christos Louizos", "Max Welling" ], "title": "DIVA: Domain invariant variational autoencoders", "venue": "arXiv preprint arXiv:1905.10427,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate", "venue": "shift. arXiv,", "year": 2015 }, { "authors": [ "Fredrik D Johansson", "David Sontag", "Rajesh Ranganath" ], "title": "Support and invertibility in domaininvariant representations", "venue": null, "year": 2019 }, { "authors": [ "Aditya Khosla", "Tinghui Zhou", "Tomasz Malisiewicz", "Alexei A Efros", "Antonio Torralba" ], "title": "Undoing the damage of dataset bias", "venue": null, "year": 2012 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "David Krueger", "Ethan Caballero", "Joern-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Remi Le Priol", "Aaron Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (REx)", "venue": null, "year": 2020 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M. 
Hospedales" ], "title": "Deeper, broader and artier domain generalization", "venue": null, "year": 2017 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Learning to generalize: Metalearning for domain generalization", "venue": null, "year": 2018 }, { "authors": [ "Da Li", "Jianshu Zhang", "Yongxin Yang", "Cong Liu", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Episodic training for domain generalization", "venue": null, "year": 2019 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy Hospedales" ], "title": "Sequential learning for domain generalization", "venue": null, "year": 2020 }, { "authors": [ "Haoliang Li", "Sinno Jialin Pan", "Shiqi Wang", "Alex C Kot" ], "title": "Domain generalization with adversarial feature learning", "venue": null, "year": 2018 }, { "authors": [ "Ya Li", "Mingming Gong", "Xinmei Tian", "Tongliang Liu", "Dacheng Tao" ], "title": "Domain generalization via conditional invariant representations", "venue": null, "year": 2018 }, { "authors": [ "Ya Li", "Xinmei Tian", "Mingming Gong", "Yajing Liu", "Tongliang Liu", "Kun Zhang", "Dacheng Tao" ], "title": "Deep domain generalization via conditional invariant adversarial networks. ECCV, 2018d", "venue": null, "year": 2018 }, { "authors": [ "Yiying Li", "Yongxin Yang", "Wei Zhou", "Timothy M Hospedales" ], "title": "Feature-critic networks for heterogeneous domain generalization", "venue": null, "year": 2019 }, { "authors": [ "Massimiliano Mancini", "Samuel Rota Bulò", "Barbara Caputo", "Elisa Ricci" ], "title": "Best sources forward: domain generalization through source-specific nets", "venue": null, "year": 2018 }, { "authors": [ "Massimiliano Mancini", "Samuel Rota Bulo", "Barbara Caputo", "Elisa Ricci" ], "title": "Robust place categorization with deep domain generalization", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Toshihiko Matsuura", "Tatsuya Harada" ], "title": "Domain generalization using a mixture of multiple latent domains", "venue": null, "year": 2019 }, { "authors": [ "Saeid Motiian", "Marco Piccirilli", "Donald A Adjeroh", "Gianfranco Doretto" ], "title": "Unified deep supervised domain adaptation and generalization", "venue": null, "year": 2017 }, { "authors": [ "Krikamol Muandet", "David Balduzzi", "Bernhard Schölkopf" ], "title": "Domain generalization via invariant feature representation", "venue": null, "year": 2013 }, { "authors": [ "Krikamol Muandet", "Kenji Fukumizu", "Bharath Sriperumbudur", "Bernhard Schölkopf" ], "title": "Kernel mean embedding of distributions: A review and beyond", "venue": "Foundations and Trends in Machine Learning,", "year": 2017 }, { "authors": [ "Hyeonseob Nam", "HyunJae Lee", "Jongchan Park", "Wonjun Yoon", "Donggeun Yoo" ], "title": "Reducing domain gap via style-agnostic networks", "venue": null, "year": 2019 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE TKDE,", "year": 2009 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "PyTorch: An imperative style, high-performance deep learning", "venue": null, "year": 2019 }, { "authors": [ "Vishal M Patel", "Raghuraman Gopalan", "Ruonan Li", "Rama Chellappa" ], "title": "Visual domain adaptation: A survey of recent advances", "venue": "IEEE Signal Processing,", "year": 2015 }, { "authors": [ "Xingchao Peng", 
"Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": null, "year": 2019 }, { "authors": [ "Christian S Perone", "Pedro Ballester", "Rodrigo C Barros", "Julien Cohen-Adad" ], "title": "Unsupervised domain adaptation for medical imaging segmentation with self-ensembling", "venue": null, "year": 2019 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Mohammad Mahfujur Rahman", "Clinton Fookes", "Mahsa Baktashmotlagh", "Sridha Sridharan" ], "title": "Correlation-aware adversarial domain adaptation and generalization", "venue": "Pattern Recognition,", "year": 2019 }, { "authors": [ "Mohammad Mahfujur Rahman", "Clinton Fookes", "Mahsa Baktashmotlagh", "Sridha Sridharan" ], "title": "Multi-component image translation for deep domain generalization", "venue": "WACV,", "year": 2019 }, { "authors": [ "Mateo Rojas-Carulla", "Bernhard Schölkopf", "Richard Turner", "Jonas Peters" ], "title": "Invariant models for causal transfer learning", "venue": null, "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "ImageNET large scale visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": null, "year": 2019 }, { "authors": [ "Seonguk Seo", "Yumin Suh", "Dongwan Kim", "Geeho Kim", "Jongwoo Han", "Bohyung Han" ], "title": "Learning to optimize domain specific normalization for domain generalization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Shiv Shankar", "Vihari Piratla", "Soumen Chakrabarti", "Siddhartha Chaudhuri", "Preethi Jyothi", "Sunita Sarawagi" ], "title": "Generalizing across domains via cross-gradient training", "venue": null, "year": 2018 }, { "authors": [ "Pierre Stock", "Moustapha Cisse" ], "title": "Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases", "venue": null, "year": 2018 }, { "authors": [ "Baochen Sun", "Kate Saenko" ], "title": "Deep CORAL: Correlation alignment for deep domain adaptation", "venue": null, "year": 2016 }, { "authors": [ "Baochen Sun", "Jiashi Feng", "Kate Saenko" ], "title": "Return of frustratingly easy domain adaptation", "venue": null, "year": 2016 }, { "authors": [ "Damien Teney", "Ehsan Abbasnejad", "Anton van den Hengel" ], "title": "Unshuffling data for improved generalization", "venue": "arxiv,", "year": 2020 }, { "authors": [ "Antonio Torralba", "Alexei Efros" ], "title": "Unbiased look at dataset bias", "venue": null, "year": 2011 }, { "authors": [ "Vladimir Vapnik" ], "title": "Statistical learning theory wiley", "venue": "New York,", "year": 1998 }, { "authors": [ "Hemanth Venkateswara", "Jose Eusebio", "Shayok Chakraborty", "Sethuraman Panchanathan" ], "title": "Deep hashing network for unsupervised domain adaptation", "venue": null, "year": 2017 }, { "authors": [ "Georg Volk", "Stefan Müller", "Alexander von Bernuth", "Dennis Hospach", "Oliver Bringmann" ], "title": "Towards robust cnn-based 
object detection through augmentation with synthetic rain variations", "venue": null, "year": 2019 }, { "authors": [ "Riccardo Volpi", "Hongseok Namkoong", "Ozan Sener", "John C Duchi", "Vittorio Murino", "Silvio Savarese" ], "title": "Generalizing to unseen domains via adversarial data augmentation", "venue": null, "year": 2018 }, { "authors": [ "Haohan Wang", "Zexue He", "Zachary C Lipton", "Eric P Xing" ], "title": "Learning robust representations by projecting superficial statistics out", "venue": null, "year": 2019 }, { "authors": [ "Shujun Wang", "Lequan Yu", "Caizi Li", "Chi-Wing Fu", "Pheng-Ann Heng" ], "title": "Learning from extrinsic and intrinsic supervisions for domain generalization, 2020a", "venue": null, "year": 2020 }, { "authors": [ "Yufei Wang", "Haoliang Li", "Alex C Kot" ], "title": "Heterogeneous domain generalization via domain mixup", "venue": null, "year": 2020 }, { "authors": [ "Garrett Wilson", "Diane J Cook" ], "title": "A survey of unsupervised deep domain adaptation", "venue": null, "year": 2018 }, { "authors": [ "Minghao Xu", "Jian Zhang", "Bingbing Ni", "Teng Li", "Chengjie Wang", "Qi Tian", "Wenjun Zhang" ], "title": "Adversarial domain adaptation with domain mixup", "venue": null, "year": 2019 }, { "authors": [ "Shen Yan", "Huan Song", "Nanxiang Li", "Lincan Zou", "Liu Ren" ], "title": "Improve unsupervised domain adaptation with mixup training", "venue": null, "year": 2020 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": null, "year": 2018 }, { "authors": [ "Ling Zhang", "Xiaosong Wang", "Dong Yang", "Thomas Sanford", "Stephanie Harmon", "Baris Turkbey", "Holger Roth", "Andriy Myronenko", "Daguang Xu", "Ziyue Xu" ], "title": "When unseen domain generalization is unnecessary? rethinking data augmentation", "venue": null, "year": 2019 }, { "authors": [ "Marvin Zhang", "Henrik Marklund", "Abhishek Gupta", "Sergey Levine", "Chelsea Finn" ], "title": "Adaptive risk minimization: A meta-learning approach for tackling group", "venue": "shift. arXiv,", "year": 2020 }, { "authors": [ "Han Zhao", "Remi Tachet des Combes", "Kun Zhang", "Geoffrey J Gordon" ], "title": "On learning invariant representation for domain adaptation", "venue": null, "year": 2019 }, { "authors": [ "Kaiyang Zhou", "Yongxin Yang", "Timothy Hospedales", "Tao Xiang" ], "title": "Deep domain-adversarial image generation for domain generalisation", "venue": "arXiv preprint arXiv:2003.06054,", "year": 2020 }, { "authors": [ "Li" ], "title": "2018b) employ GANs and the maximum mean discrepancy criteria (Gretton et al., 2012) to align feature distributions across domains. Matsuura and Harada (2019) leverages clustering techniques to learn domaininvariant features even when the separation between training domains is not given. Li et al. (2018c;d) learns a feature transformation φ such that the conditional distributions P (φ(X) | Y d = y) match", "venue": null, "year": 2018 }, { "authors": [ "y. Shankar" ], "title": "2018) use a domain classifier to construct adversarial examples for a label classifier, and use a label classifier to construct adversarial examples for the domain classifier. This results in a label classifier with better domain generalization. Li et al. (2019a) train a robust feature extractor and classifier. 
The robustness comes from (i) asking the feature extractor to produce features such that a classifier trained on domain d can classify instances", "venue": null, "year": 2019 }, { "authors": [ "d. Li" ], "title": "d, and (ii) asking the classifier to predict labels on domain d using features produced by a feature extractor trained on domain d", "venue": null, "year": 2019 }, { "authors": [ "Sun" ], "title": "train a variational autoencoder (Kingma and Welling, 2014) where the bottleneck representation factorizes knowledge about domain, class label, and residual variations in the input space. Fang et al. (2013) learn a structural SVM metric such that the neighborhood of each example contains examples from the same category and all training domains. The algorithms of Sun and Saenko", "venue": null, "year": 2019 }, { "authors": [ "Hu" ], "title": "The algorithms of Ghifary et al", "venue": "Rojas-Carulla", "year": 2016 }, { "authors": [ "domains. Bouvier" ], "title": "2019) attack the same problem as IRM by re-weighting data samples", "venue": "SHARING PARAMETERS Blanchard et al", "year": 2011 }, { "authors": [ "Deshmukh" ], "title": "Published as a conference paper at ICLR 2021 identity of test instances is unknown, these embeddings are estimated using single test examples at test time", "venue": null, "year": 2020 }, { "authors": [ "Khosla" ], "title": "max-margin linear classifier w = w + ∆ per domain d, from which they distill their final, invariant predictor w. Ghifary et al. (2015) use a multitask autoencoder to learn invariances across domains. To achieve this, the authors assume that each training dataset contains the same examples; for instance, photographs", "venue": null, "year": 2018 }, { "authors": [ "Mancini" ], "title": "batch-normalization layers (Ioffe and Szegedy, 2015) per training dataset. Then, a softmax domain classifier predicts how to linearly-combine the batch-normalization layers at test time. Seo et al. (2020) combines instance normalization with batch-normalization to learn a normalization module per domain, enhancing out-of-distribution generalization", "venue": null, "year": 2018 }, { "authors": [ "D’Innocente", "Caputo" ], "title": "2017) extends Khosla et al. (2012) to deep neural networks by extending each of their parameter tensors with one additional dimension, indexed by the training domains, and set to a neutral value to predict domain-agnostic test examples. Ding and Fu (2017) implement parametertying and low-rank reconstruction losses to learn a predictor that relies on common knowledge across", "venue": null, "year": 2017 }, { "authors": [ "domains. Hu" ], "title": "2019) weight the importance of the minibatches of the training distributions proportional to their error. Chattopadhyay et al. (2020) overlays multiple weight masks over a single network to learn domain-invariant and domain-specific features", "venue": "META-LEARNING Li et al. (2018a) employ Model-Agnostic Meta-Learning, or MAML (Finn et al.,", "year": 2017 }, { "authors": [ "Dou" ], "title": "2019) use a similar MAML strategy, together with two regularizers that encourage features from different domains to respect inter-class relationships, and be compactly clustered by class labels. Li et al. (2019b) extend the MAML meta-learning strategy to instances of domain generalization where the categories vary from domain", "venue": null, "year": 2018 }, { "authors": [ "Wang" ], "title": "2020b) use mixup (Zhang et al., 2018) to blend examples from the different training distributions. 
Carlucci et al. (2019a) constructs an auxiliary classification task aimed at solving jigsaw puzzles of image patches. The authors show that this self-supervised learning task learns features that improve domain generalization", "venue": null, "year": 2020 }, { "authors": [ "Albuquerque" ], "title": "2020) introduce the self-supervised task of predicting responses to Gabor filter banks, in order to learn more transferrable features. Wang et al. (2019) remove textural information from images to improve domain generalization. Volpi et al. (2018) show that training with adversarial data augmentation on a single domain is sufficient to improve domain generalization", "venue": "Nam et al", "year": 2019 }, { "authors": [ "Zhou" ], "title": "2019a) are three alternatives that use GANs to augment the data available during training time. Representation Self-Challenging (Huang et al., 2020) learns robust neural networks by iteratively dropping-out important features", "venue": "Hendrycks et al", "year": 2020 }, { "authors": [ "Inter-domain Mixup (Mixup", "Xu" ], "title": "2020b)) performs ERM on linear interpolations of examples from random pairs of domains and their labels", "venue": null, "year": 2020 }, { "authors": [ "Class-conditional DANN (C-DANN", "Li" ], "title": "2018d)) is a variant of DANN matching the conditional distributions P (φ(X)|Y d = y) across domains, for all labels y", "venue": null, "year": 2018 }, { "authors": [ "Risk Extrapolation (VREx", "Krueger" ], "title": "2020)) approximates IRM with a variance penalty", "venue": null, "year": 2020 }, { "authors": [ "Adaptive Risk Min. (ARM", "Zhang" ], "title": "2020)) extends MTL with a separate embedding CNN", "venue": null, "year": 2020 }, { "authors": [ "Style-Agnostic Networks (SagNets", "Nam" ], "title": "2019)) learns neural networks by keeping image content and randomizing style", "venue": null, "year": 2019 }, { "authors": [ "Huang" ], "title": "2020)) learns robust neural networks by iteratively discarding (challenging) the most activated features", "venue": null, "year": 2020 }, { "authors": [ "OfficeHome (Venkateswara" ], "title": "2017) includes domains d ∈ { art, clipart, product, real ", "venue": "This dataset", "year": 2017 }, { "authors": [ "Terra Incognita (Beery" ], "title": "2018) contains photographs of wild animals taken by camera traps at locations d ∈ {L100,L38,L43,L46}. Our version of this dataset contains 24, 788 examples of dimension", "venue": null, "year": 2018 }, { "authors": [ "DomainNet (Peng" ], "title": "2019) has six domains d ∈ { clipart, infograph, painting, quickdraw, real, sketch ", "venue": "This dataset contains 586,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning systems often fail to generalize out-of-distribution, crashing in spectacular ways when tested outside the domain of training examples (Torralba and Efros, 2011). The overreliance of learning systems on the training distribution manifests widely. For instance, self-driving car systems struggle to perform under conditions different to those of training, including variations in light (Dai and Van Gool, 2018), weather (Volk et al., 2019), and object poses (Alcorn et al., 2019). As another example, systems trained on medical data collected in one hospital do not generalize to other health centers (Castro et al., 2019; AlBadawy et al., 2018; Perone et al., 2019; Heaven, 2020). Arjovsky et al. (2019) suggest that failing to generalize out-of-distribution is failing to capture the causal factors of variation in data, clinging instead to easier-to-fit spurious correlations prone to change across domains. Examples of spurious correlations commonly absorbed by learning machines include racial biases (Stock and Cisse, 2018), texture statistics (Geirhos et al., 2018), and object backgrounds (Beery et al., 2018). Alas, the capricious behaviour of machine learning systems out-of-distribution is a roadblock to their deployment in critical applications.\nAware of this problem, the research community has spent significant efforts during the last decade to develop algorithms able to generalize out-of-distribution. In particular, the literature in Domain Generalization (DG) assumes access to multiple datasets during training, each of them containing examples about the same task, but collected under a different domain or experimental condition (Blanchard et al., 2011; Muandet et al., 2013). The goal of DG algorithms is to incorporate the invariances across these training domains into a classifier, in hopes that such invariances will also hold in novel test domains. Different DG solutions assume different types of invariances, and propose algorithms to estimate them from data.\nDespite the enormous importance of DG, the literature is scattered: a plethora of different algorithms appear yearly, each of them evaluated under different datasets, neural network architectures, and model selection criteria. Borrowing from the success of standardized computer vision benchmarks\n∗Alphabetical order, equal contribution. Work done while IG was at Facebook AI Research. This paper is a living benchmark, always refer to the latest version available at https://arxiv.org/abs/2007.01434\nsuch as ImageNet (Russakovsky et al., 2015), the purpose of this work is to perform a rigorous comparison of DG algorithms, as well as to open-source our software for anyone to replicate and extend our analyses. This manuscript investigates the question: How useful are different DG algorithms when evaluated in a consistent and realistic setting?\nTo answer this question, we implement and tune fourteen DG algorithms carefully, to compare them across seven benchmark datasets and three model selection criteria. There are three major takeaways from our investigations:\n• Claim 1: A careful implementation of ERM outperforms the state-of-the-art in terms of average performance across common benchmarks (Table 1, full list in Appendix A.5).\n• Claim 2: When implementing fourteen DG algorithms in a consistent and realistic setting, no competitor outperforms ERM by more than one point (Table 3).\n• Claim 3: Model selection is non-trivial for DG, yet affects results (Table 3). 
As such, we argue that DG algorithms should specify their own model selection criteria.\nAs a result of our research, we release DOMAINBED, a framework to streamline rigorous and reproducible experimentation in DG. Using DOMAINBED, adding a new algorithm or dataset is a matter of a few lines of code. A single command runs all experiments, performs all model selections, and auto-generates all the result tables included in this work. DOMAINBED is a living project: we welcome pull requests from fellow researchers to update the available algorithms, datasets, model selection criteria, and result tables.\nSection 2 kicks off our exposition with a review of the DG setup. Section 3 discusses the difficulties of model selection in DG and makes recommendations for a path forward. Section 4 introduces DOMAINBED, describing the features included in the initial release. Section 5 discusses the experimental results of running the entire DOMAINBED suite, illustrating the competitive performance of ERM and the importance of model selection criteria. Finally, Section 6 offers our view on future research directions in DG. Appendix A reviews one hundred articles spanning a decade of research in DG, summarizing the experimental performance of over thirty algorithms." }, { "heading": "2 THE PROBLEM OF DOMAIN GENERALIZATION", "text": "The goal of supervised learning is to predict values y ∈ Y of a target random variable Y, given values x ∈ X of an input random variable X. Predictions ŷ = f(x) about x originate from a predictor f : X → Y. We often decompose predictors as f = w ◦ φ, where we call φ : X → H the featurizer, and w : H → Y the classifier. To solve the prediction task we collect the training dataset D = {(x_i, y_i)}_{i=1}^{n}, which contains identically and independently distributed (i.i.d.) examples from the joint probability distribution P(X, Y). Given a loss function ℓ : Y × Y → [0, ∞) measuring prediction error, supervised learning seeks the predictor minimizing the risk E_{(x,y)∼P}[ℓ(f(x), y)]. Since we only have access to the data distribution P(X, Y) via the dataset D, we instead search for a predictor minimizing the empirical risk (1/n) ∑_{i=1}^{n} ℓ(f(x_i), y_i) (Vapnik, 1998).\nThe rest of this paper studies the problem of Domain Generalization (DG), an extension of supervised learning where training datasets from multiple domains (or environments) are available to train our predictor (Blanchard et al., 2011). Each domain d produces a dataset D^d = {(x_i^d, y_i^d)}_{i=1}^{n_d} containing i.i.d. examples from some probability distribution P(X^d, Y^d), for all training domains d ∈ {1, . . . , d_tr}. The goal of DG is out-of-distribution generalization: learning a predictor able to perform well on some unseen test domain d_tr + 1. Since no data about the test domain is available during training, we must assume the existence of statistical invariances across training and testing domains, and incorporate such invariances (but nothing else) into our predictor. The type of invariance assumed, as well as how to estimate it from the training datasets, varies between DG algorithms. We review a hundred articles in DG spanning a decade of research and thirty algorithms in Appendix A.5.\nDG differs from unsupervised domain adaptation. In the latter, unlabeled data from the test domain is available during training (Pan and Yang, 2009; Patel et al., 2015; Wilson and Cook, 2018). Table 2 compares different machine learning setups to highlight the nature of DG problems. 
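To make this baseline concrete, here is a minimal sketch of ERM in the multi-domain setting just described: the d_tr training datasets are pooled and a single predictor f = w ◦ φ is fit by minimizing the average loss. This is our own illustration, not the DOMAINBED implementation; the function name, training loop, and hyperparameters are assumptions.

```python
# Minimal ERM sketch for the DG setting: pool all training domains and minimize the empirical risk.
# Illustration only; names and hyperparameters are assumptions, not DomainBed's implementation.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader


def train_erm(featurizer: nn.Module, classifier: nn.Module, train_domains, epochs=10, lr=1e-4):
    """train_domains: a list of torch Datasets, one per training domain d = 1, ..., d_tr."""
    model = nn.Sequential(featurizer, classifier)                    # f = w ∘ φ
    loader = DataLoader(ConcatDataset(train_domains), batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                                  # the loss ℓ
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()                          # empirical risk on the pooled sample
            optimizer.step()
    return model
```

The other algorithms compared below modify this recipe in various ways, as described in Section 4 and Appendix B.1.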
The causality literature refers to DG as learning from multiple environments (Peters et al., 2016; Arjovsky et al., 2019). Although challenging, the DG framework can capture some of the difficulty of real prediction problems, where unforeseen distributional discrepancies between training and testing data are surely expected. At the same time, the framework can be limiting: in many real-world scenarios there may be external variables informing about task relatedness (space, time, annotations) that the DG framework ignores." }, { "heading": "3 MODEL SELECTION AS PART OF THE LEARNING PROBLEM", "text": "Here we discuss issues surrounding model selection (choosing hyperparameters, training checkpoints, architecture variants) in DG and make specific recommendations for a path forward. Because we lack access to a validation set identically distributed to the test data, model selection in DG is not as straightforward as in supervised learning. Some works adopt heuristic strategies whose behavior is not well-studied, while others simply omit a description of how to choose hyperparameters. This leaves open the possibility that hyperparameters were chosen using the test data, which is not methodologically sound. Differences in results arising from inconsistent tuning practices may be misattributed to the algorithms under study, complicating fair assessments.\nWe believe that much of the confusion surrounding model selection in DG arises from treating it as merely a question of experimental design. To the contrary, model selection requires making theoretical assumptions about how the test data relates to the training data. Different DG algorithms make different assumptions, and it is not clear a priori which ones are correct, or how they influence the model selection criterion. Indeed, choosing reasonable assumptions is at the heart of DG research. Therefore, a DG algorithm without a strategy to choose its hyperparameters should be regarded as incomplete.\nRecommendation 1 A DG algorithm should be responsible for specifying a model selection method.\nWhile algorithms without well-justified model selection methods are incomplete, they may be useful stepping-stones in a research agenda. In this case, instead of using an ad-hoc model selection method, we can evaluate incomplete algorithms by considering an oracle model selection method, where we select hyperparameters using some data from the test domain. Of course, it is important to avoid invalid comparisons between oracle results and baselines tuned without an oracle method. Also, unless we restrict access to the test domain data somehow, we risk obtaining meaningless results (we could just train on such test domain data using supervised learning).\nRecommendation 2 Researchers should disclaim any oracle-selection results as such and specify policies to limit access to the test domain." }, { "heading": "3.1 THREE MODEL SELECTION METHODS FOR DG", "text": "Having made broad recommendations, we review and justify three model selection criteria for DG. Appendix B.3 illustrates these with a specific example.\nTraining-domain validation We split each training domain into training and validation subsets. We train models using the training subsets, and choose the model maximizing the accuracy on the union of validation subsets. This strategy assumes that the training and test examples follow similar distributions. For example, Ben-David et al. 
(2010) bound the test error of a classifier with the divergence between training and test domains.\nLeave-one-domain-out validation Given d_tr training domains, we train d_tr models with equal hyperparameters, each holding one of the training domains out. We evaluate each model on its held-out domain, and average the accuracies of these d_tr models over their held-out domains. Finally, we choose the model maximizing this average accuracy, retrained on all d_tr domains. This strategy assumes that training and test domains follow a meta-distribution over domains, and that our goal is to maximize the expected performance under this meta-distribution. Note that leaving k > 1 domains out would greatly increase the number of experiments, and introduces a hyperparameter k.\nTest-domain validation (oracle) We choose the model maximizing the accuracy on a validation set that follows the distribution of the test domain. Following our earlier recommendation to limit test domain access, we allow one query (the last checkpoint) per choice of hyperparameters, disallowing early stopping. Recall that this is not a valid benchmarking methodology. Oracle-based results can be either optimistic, because we select models using the test distribution, or pessimistic, because the query limit reduces the number of considered hyperparameters. We also tried limiting the size of the oracle test set instead of the number of queries, but this led to unacceptably high variance." }, { "heading": "3.2 CONSIDERATIONS FROM THE LITERATURE", "text": "Some references in prior work discuss additional strategies to choose hyperparameters in DG. For instance, Krueger et al. (2020, Appendix B.1) suggest choosing hyperparameters to maximize the performance across all domains of an external dataset. This “leave-one-dataset-out” is akin to the second strategy outlined above. Albuquerque et al. (2019, Section 5.3.2) suggest performing model selection based on the loss function (which often incorporates an algorithm-specific regularizer), and D’Innocente and Caputo (2018, Section 3) derive a strategy specific to their algorithm. Finally, tools
These are Empirical Risk Minimization (ERM, Vapnik (1998)), Group Distributionally Robust Optimization (GroupDRO, Sagawa et al. (2019)), Inter-domain Mixup (Mixup, Xu et al. (2019); Yan et al. (2020); Wang et al. (2020b)), Meta-Learning for Domain Generalization (MLDG, Li et al. (2018a)), DomainAdversarial Neural Networks (DANN, Ganin et al. (2016)), Class-conditional DANN (C-DANN, Li et al. (2018d)), Deep CORrelation ALignment (CORAL, Sun and Saenko (2016)), Maximum Mean Discrepancy (MMD, Li et al. (2018b)), Invariant Risk Minimization (IRM Arjovsky et al. (2019)), Adaptive Risk Minimization (ARM, Zhang et al. (2020)), Marginal Transfer Learning (MTL, Blanchard et al. (2011; 2017)), Style-Agnostic Networks (SagNet, Nam et al. (2019)), and Representation Self Challenging (RSC, Huang et al. (2020)). Appendix B.1 describes these algorithms, and Appendix B.4 lists their network architectures and hyperparameter search distributions.\nDatasets DOMAINBED currently includes downloaders and loaders for seven standard DG image classification benchmarks. These are Colored MNIST (Arjovsky et al., 2019), Rotated MNIST (Ghifary et al., 2015), PACS (Li et al., 2017), VLCS (Fang et al., 2013), OfficeHome (Venkateswara et al., 2017), Terra Incognita (Beery et al., 2018), and DomainNet (Peng et al., 2019). The datasets based on MNIST are “synthetic” since changes across domains are well understood (colors and rotations). The rest of the datasets are “real” since domains vary in unknown ways. Appendix B.2 describes these datasets.\nImplementation choices We highlight three implementation choices made towards a consistent and realistic evaluation setting. First, whereas prior work is inconsistent in its choice of network architecture, we finetune ResNet-50 models (He et al., 2016) pretrained on ImageNet for all nonMNIST experiments. We note that recent state-of-the-art results (Balaji et al., 2018; Nam et al., 2019; Huang et al., 2020) also use ResNet-50 models. Second, for all non-MNIST datasets, we augment training data using the following protocol: crops of random size and aspect ratio, resizing to 224× 224 pixels, random horizontal flips, random color jitter, grayscaling the image with 10% probability, and normalization using the ImageNet channel statistics. This augmentation protocol is increasingly standard in state-of-the-art DG work (Nam et al., 2019; Huang et al., 2020; Krueger et al., 2020; Carlucci et al., 2019a; Zhou et al., 2020; Dou et al., 2019; Hendrycks et al., 2020; Wang et al., 2020a; Seo et al., 2020; Chattopadhyay et al., 2020). We use no augmentation for MNIST-based datasets. Third, and for RotatedMNIST, we divide all the digits evenly among domains, instead of replicating the same 1000 digits to construct all domains. We deviate from standard practice for two reasons: using the same digits across training and test domains leaks test data, and reducing the amount of training data complicates the task in an unrealistic way." }, { "heading": "5 EXPERIMENTS", "text": "We run experiments for all algorithms, datasets, and model selection criteria shipped in DOMAINBED. We consider all configurations of a dataset where we hide one domain for testing, resulting in the training of 58,000 models. To generate the following results, we simply run sweep.py at commit 0x7df6f06 from DOMAINBED’s repository.\nHyperparameter search For each algorithm and test domain, we conduct a random search (Bergstra and Bengio, 2012) of 20 trials over a joint distribution of all hyperparameters (Appendix B.4). 
Appendix C.4 shows that running more than 20 trials does not improve our results significantly. We use each model selection criterion to select amongst the 20 models from the random search. We split the data from each domain into 80% and 20% splits. We use the larger splits for training and final evaluation, and the smaller splits to select hyperparameters (for an illustration, see Appendix B.3). All hyperparameters are optimized anew for each algorithm and test domain, including hyperparameters like learning rates which are common to multiple algorithms.\nStandard error bars While some DG literature reports error bars across seeds, randomness arising from model selection is often ignored. This is acceptable if the goal is best-versus-best comparison, but prohibits analyses concerning the model selection process itself. Instead, we repeat our entire study three times, making every random choice anew: hyperparameters, weight initializations, and dataset splits. Every number we report is a mean (and its standard error) over these repetitions." }, { "heading": "5.1 RESULTS", "text": "Table 3 summarizes the results of our experiments. Appendix C contains the full results per dataset and domain. As anticipated in our introduction, we draw three conclusions from our results.\nClaim 1: Carefully tuned ERM outperforms the previously published state-of-the-art Table 1 (full version in Appendix A.5) shows this result, when we provide ERM with a training-domain validation set for hyperparameter selection. Such state-of-the-art average performance of our ERM baseline holds even when we select the best competitor available in the literature separately for each benchmark. One reason for ERM’s strong performance is that we use ResNet-50, whereas some prior work uses smaller ResNet-18 models. As recently shown in the literature (Hendrycks et al., 2020), this suggests that better in-distribution generalization is a dominant factor behind better out-of-distribution generalization. Our result does not refute prior work: it is possible that with stronger implementations, some competing methods may improve upon ERM. Rather, we provide a strong, realistic, and reproducible baseline for future work to build upon.\nClaim 2: When evaluated in a consistent setting, no algorithm outperforms ERM by more than one point We observe this result in Table 3, obtained by running from scratch every combination of dataset, algorithm, and model selection criterion in DOMAINBED. Given any model selection criterion, no method improves the average performance of ERM by more than one point. At the number of trials performed, no improvement over ERM is statistically significant according to a t-test at a significance level α = 0.05. While new algorithms could improve upon ERM (an exciting premise!), getting substantial DG improvements in a rigorous way proved challenging. Most of our baselines can achieve ERM-like performance because they have hyperparameter configurations under which they behave like ERM (e.g. regularization coefficients that can be set to zero). Our advice to DG practitioners is to use ERM (which is a safe contender) or CORAL (Sun and Saenko, 2016) (which achieved the highest average score).\nClaim 3: Model selection methods matter We observe that model selection with a training domain validation set outperforms leave-one-domain-out cross-validation across multiple datasets and algorithms. This does not mean that using a training domain validation set is the right way to tune hyperparameters. 
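For reference, the significance check behind Claim 2 can be sketched as follows; this is our reconstruction of one plausible form of the test, not necessarily the exact procedure used.

```python
# Sketch (a plausible reconstruction, not necessarily the exact test used) of checking whether an
# algorithm's average accuracies across the repeated studies beat ERM's at significance level alpha.
from scipy import stats


def significantly_better_than_erm(algo_runs, erm_runs, alpha=0.05):
    """algo_runs, erm_runs: average accuracies from the independent repetitions (three each here)."""
    t, p = stats.ttest_ind(algo_runs, erm_runs, equal_var=False)  # Welch's two-sample t-test
    return t > 0 and p < alpha
```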
In fact, the stronger performance of oracle-selection (+2.3 points for ERM) suggests headroom to develop improved DG model selection criteria." }, { "heading": "Algorithm CMNIST RMNIST VLCS PACS OfficeHome TerraInc DomainNet Average", "text": "" }, { "heading": "5.2 ABLATION STUDY ON ERM", "text": "To better understand our ERM performance, we perform an ablation study on the neural network architecture and the data augmentation protocol. Table 5.2 shows that using a ResNet-50 neural network architecture, instead of a smaller ResNet-18, improves DG test accuracy by 3.7 points. Using data augmentation improves DG test accuracy by 0.5 points. However, these ResNet models were pretrained on ImageNet using data augmentation, so the benefits of augmentation are partly absorbed by the model. In fact, we hypothesize that among models pretrained on ImageNet, domain generalization performance is mainly influenced by the model’s original test accuracy on ImageNet." }, { "heading": "6 DISCUSSIONS", "text": "We provide several discussions to help the reader interpret our results and motivate future work.\nOur negative claims are fundamentally limited Broad negative claims (e.g. “algorithm X does not outperform ERM”) do not specify an exact experimental setting and are therefore impossible to rigorously prove. In order to be verifiable, such claims must be restricted to a specific setting. This limitation is fundamental to all negative result claims, and ours (Claim 2) is no exception. We have shown that many algorithms don’t substantially improve on ERM in our setting, but the relevance of that setting is a subjective matter ultimately left for the reader.\nIn making this judgement, the reader should consider whether they agree with our methodological and implementation choices, which we have explained and motivated throughout the paper. We also note that our implementation can outperform previous results (Table 1). Finally, DomainBed is not a black box: our implementation is open-source and actively maintained, and we invite the research community to improve on our results.\nIs this as good as it gets? We question whether DG is possible in some of the considered datasets. Why do we assume that a neural network should be able to classify cartoons, given only photorealistic training data? In the case of Rotated MNIST, do truly rotation-invariant features discriminative of the digit class exist? Are those features expressible by a neural network? Even in the presence of correct model selection, is the out-of-distribution performance of modern ERM implementations as good as it gets? Or is it simply as poor as every other alternative? How far are we from the achievable DG performance? Is this upper-bound simply the test error in-domain?\nAre these the right datasets? Most datasets considered in the DG literature do not reflect realistic situations. If one wanted to classify cartoons, the easiest option would be to collect a small labeled dataset of cartoons. Should we consider more realistic, impactful tasks for better research in DG? Some alternatives are medical imaging in different hospitals and self-driving cars in different cities.\nGeneralization requires untestable assumptions Every time we use ERM, we assume that training and testing examples follow the same distribution. 
This is an untestable assumption in every single instance. The same applies to DG: each algorithm assumes a different (untestable) type of invariance across domains. Therefore, the performance of a DG algorithm depends on the problem at hand, and only time can tell if we have made a good choice. This is akin to the generalization of a scientific theory such as Newton’s gravitation, which cannot be proven, but rather can only resist falsification." }, { "heading": "7 CONCLUSION", "text": "Our extensive empirical evaluation of DG algorithms leads to three conclusions. First, a carefully tuned ERM baseline outperforms the previously published state-of-the-art results in terms of average performance (Claim 1). Second, when compared to thirteen popular DG alternatives under the exact same experimental conditions, we find that no competitor is able to outperform ERM by more than one point (Claim 2). Third, model selection is non-trivial for DG, and it should be an integral part of any proposed method (Claim 3). Going forward, we hope that our results and DOMAINBED promote realistic and rigorous evaluation and enable advances in domain generalization." }, { "heading": "A A DECADE OF LITERATURE ON DOMAIN GENERALIZATION", "text": "In this section, we provide an exhaustive literature review on a decade of domain generalization research. The following classifies domain generalization algorithms into four strategies to learn invariant predictors: learning invariant features, sharing parameters, meta-learning, or performing data augmentation." }, { "heading": "A.1 LEARNING INVARIANT FEATURES", "text": "Muandet et al. (2013) use kernel methods to find a feature transformation that (i) minimizes the distance between transformed feature distributions across domains, and (ii) does not destroy any of the information between the original features and the targets. In their pioneering work, Ganin et al. (2016) propose Domain Adversarial Neural Networks (DANN), a domain adaptation technique which uses generative adversarial networks (GANs, Goodfellow et al. (2014)), to learn a feature representation that matches across training domains. Akuzawa et al. (2019) extend DANN by considering cases where there exists a statistical dependence between the domain and the class label variables. Albuquerque et al. (2019) extend DANN by considering one-versus-all adversaries that try to predict which training domain each example belongs to. Li et al. (2018b) employ GANs and the maximum mean discrepancy criterion (Gretton et al., 2012) to align feature distributions across domains. Matsuura and Harada (2019) leverage clustering techniques to learn domain-invariant features even when the separation between training domains is not given. Li et al. (2018c;d) learn a feature transformation φ such that the conditional distributions P(φ(X^d) | Y^d = y) match for all training domains d and label values y. Shankar et al. (2018) use a domain classifier to construct adversarial examples for a label classifier, and use a label classifier to construct adversarial examples for the domain classifier. This results in a label classifier with better domain generalization. Li et al. (2019a) train a robust feature extractor and classifier. 
The robustness comes from (i) asking the feature extractor to produce features such that a classifier trained on domain d can classify instances for domain d′ 6= d, and (ii) asking the classifier to predict labels on domain d using features produced by a feature extractor trained on domain d′ 6= d. Li et al. (2020) adopt a lifelong learning strategy to attack the problem of domain generalization. Motiian et al. (2017) learn a feature representation such that (i) examples from different domains but the same class are close, (ii) examples from different domains and classes are far, and (iii) training examples can be correctly classified. Ilse et al. (2019) train a variational autoencoder (Kingma and Welling, 2014) where the bottleneck representation factorizes knowledge about domain, class label, and residual variations in the input space. Fang et al. (2013) learn a structural SVM metric such that the neighborhood of each example contains examples from the same category and all training domains. The algorithms of Sun and Saenko (2016); Sun et al. (2016); Rahman et al. (2019a) match the feature covariance (second order statistics) across training domains at some level of representation. The algorithms of Ghifary et al. (2016); Hu et al. (2019) use kernel-based multivariate component analysis to minimize the mismatch between training domains while maximizing class separability.\nAlthough popular, learning domain-invariant features has received some criticism (Zhao et al., 2019; Johansson et al., 2019). Some alternatives exist, as we review next. Peters et al. (2016); Rojas-Carulla et al. (2018) considered that one should search for features that lead to the same optimal classifier across training domains. In their pioneering work, Peters et al. (2016) linked this type of invariance to the causal structure of data, and provided a basic algorithm to learn invariant linear models, based on feature selection. Arjovsky et al. (2019) extend the previous to general gradient-based models, including neural networks, in their Invariant Risk Minimization (IRM) principle. Teney et al. (2020) build on IRM to learn a feature transformation that minimizes the relative variance of classifier weights across training datasets. The authors apply their method to reduce the learning of spurious correlations in Visual Question Answering (VQA) tasks. Ahuja et al. (2020) analyze IRM under a game-theoretic perspective to develop an alternative algorithm. Krueger et al. (2020) propose an approximation to the IRM problem consisting in reducing the variance of error averages across domains. Bouvier et al. (2019) attack the same problem as IRM by re-weighting data samples." }, { "heading": "A.2 SHARING PARAMETERS", "text": "Blanchard et al. (2011) build classifiers f(xd, µd), where µd is a kernel mean embedding (Muandet et al., 2017) that summarizes the dataset associated to the example xd. Since the distributional\nidentity of test instances is unknown, these embeddings are estimated using single test examples at test time. See Blanchard et al. (2017); Deshmukh et al. (2019) for theoretical results on this family of algorithms (only applicable when using RKHS-based learners). Zhang et al. (2020) is an extension of Blanchard et al. (2011) where a separate CNN computes the domain embedding, appended to the input image as additional channels. Khosla et al. (2012) learn one max-margin linear classifier wd = w + ∆d per domain d, from which they distill their final, invariant predictor w. Ghifary et al. 
(2015) use a multitask autoencoder to learn invariances across domains. To achieve this, the authors assume that each training dataset contains the same examples; for instance, photographs about the same objects under different views. Mancini et al. (2018b) train a deep neural network with one set of dedicated batch-normalization layers (Ioffe and Szegedy, 2015) per training dataset. Then, a softmax domain classifier predicts how to linearly-combine the batch-normalization layers at test time. Seo et al. (2020) combines instance normalization with batch-normalization to learn a normalization module per domain, enhancing out-of-distribution generalization. Similarly, Mancini et al. (2018a) learn a softmax domain classifier used to linearly-combine domain-specific predictors at test time. D’Innocente and Caputo (2018) explore more sophisticated ways of aggregating domain-specific predictors. Li et al. (2017) extends Khosla et al. (2012) to deep neural networks by extending each of their parameter tensors with one additional dimension, indexed by the training domains, and set to a neutral value to predict domain-agnostic test examples. Ding and Fu (2017) implement parametertying and low-rank reconstruction losses to learn a predictor that relies on common knowledge across training domains. Hu et al. (2016); Sagawa et al. (2019) weight the importance of the minibatches of the training distributions proportional to their error. Chattopadhyay et al. (2020) overlays multiple weight masks over a single network to learn domain-invariant and domain-specific features." }, { "heading": "A.3 META-LEARNING", "text": "Li et al. (2018a) employ Model-Agnostic Meta-Learning, or MAML (Finn et al., 2017), to build a predictor that learns how to adapt fast between training domains. Dou et al. (2019) use a similar MAML strategy, together with two regularizers that encourage features from different domains to respect inter-class relationships, and be compactly clustered by class labels. Li et al. (2019b) extend the MAML meta-learning strategy to instances of domain generalization where the categories vary from domain to domain. Balaji et al. (2018) use MAML to meta-learn a regularizer encouraging the model trained on one domain to perform well on another domain." }, { "heading": "A.4 AUGMENTING DATA", "text": "Data augmentation is an effective strategy to address domain generalization (Zhang et al., 2019). Unfortunately, how to design efficient data augmentation routines depends on the type of data at hand, and demands a significant amount of work from human experts. Xu et al. (2019); Yan et al. (2020); Wang et al. (2020b) use mixup (Zhang et al., 2018) to blend examples from the different training distributions. Carlucci et al. (2019a) constructs an auxiliary classification task aimed at solving jigsaw puzzles of image patches. The authors show that this self-supervised learning task learns features that improve domain generalization. Similarly, Wang et al. (2020a) use metric learning and self-supervised learning to augment the out-of-distribution performance of an image classifier. Albuquerque et al. (2020) introduce the self-supervised task of predicting responses to Gabor filter banks, in order to learn more transferrable features. Wang et al. (2019) remove textural information from images to improve domain generalization. Volpi et al. (2018) show that training with adversarial data augmentation on a single domain is sufficient to improve domain generalization. Nam et al. 
(2019) promote representations of data that ignore image style and focus on content. Rahman et al. (2019b); Zhou et al. (2020); Carlucci et al. (2019a) are three alternatives that use GANs to augment the data available during training time. Representation Self-Challenging (Huang et al., 2020) learns robust neural networks by iteratively dropping-out important features. Hendrycks et al. (2020) show that, together with larger models and data, data augmentation improves out-of-distribution performance." }, { "heading": "A.5 PREVIOUS STATE-OF-THE-ART NUMBERS", "text": "Table 5 compiles the best out-of-distribution test accuracies reported across a decade of domain generalization research." }, { "heading": "B MORE ABOUT DOMAINBED", "text": "" }, { "heading": "B.1 ALGORITHMS", "text": "1. Empirical Risk Minimization (ERM, Vapnik (1998)) minimizes the errors across domains. 2. Group Distributionally Robust Optimization (DRO, Sagawa et al. (2019)) performs ERM while\nincreasing the importance of domains with larger errors. 3. Inter-domain Mixup (Mixup, Xu et al. (2019); Yan et al. (2020); Wang et al. (2020b)) performs\nERM on linear interpolations of examples from random pairs of domains and their labels. 4. Meta-Learning for Domain Generalization (MLDG, Li et al. (2018a)) leverages MAML (Finn\net al., 2017) to meta-learn how to generalize across domains. 5. Domain-Adversarial Neural Networks (DANN, Ganin et al. (2016)) employ an adversarial net-\nwork to match feature distributions across environments. 6. Class-conditional DANN (C-DANN, Li et al. (2018d)) is a variant of DANN matching the\nconditional distributions P (φ(Xd)|Y d = y) across domains, for all labels y. 7. CORAL (Sun and Saenko, 2016) matches the mean and covariance of feature distributions. 8. MMD (Li et al., 2018b) matches the MMD (Gretton et al., 2012) of feature distributions. 9. Invariant Risk Minimization (IRM Arjovsky et al. (2019)) learns a feature representation φ(Xd)\nsuch that the optimal linear classifier on top of that representation matches across domains. 10. Risk Extrapolation (VREx, Krueger et al. (2020)) approximates IRM with a variance penalty. 11. Marginal Transfer Learning (MTL, Blanchard et al. (2011; 2017)) estimates a mean embedding\nper domain, passed as a second argument to the classifier. 12. Adaptive Risk Min. (ARM, Zhang et al. (2020)) extends MTL with a separate embedding CNN. 13. Style-Agnostic Networks (SagNets, Nam et al. (2019)) learns neural networks by keeping image\ncontent and randomizing style. 14. Representation Self-Challenging (RSC, Huang et al. (2020)) learns robust neural networks by\niteratively discarding (challenging) the most activated features." }, { "heading": "B.2 DATASETS", "text": "DOMAINBED includes downloaders and loaders for seven multi-domain image classification tasks:\n1. Colored MNIST (Arjovsky et al., 2019) is a variant of the MNIST handwritten digit classification dataset (LeCun, 1998). Domain d ∈ {0.1, 0.3, 0.9} contains a disjoint set of digits colored either red or blue. The label is a noisy function of the digit and color, such that color bears correlation d with the label and the digit bears correlation 0.75 with the label. This dataset contains 70, 000 examples of dimension (2, 28, 28) and 2 classes.\n2. Rotated MNIST (Ghifary et al., 2015) is a variant of MNIST where domain d ∈ { 0, 15, 30, 45, 60, 75 } contains digits rotated by d degrees. Our dataset contains 70, 000 examples of dimension (1, 28, 28) and 10 classes.\n3. 
PACS (Li et al., 2017) comprises four domains d ∈ { art, cartoons, photos, sketches }. This dataset contains 9, 991 examples of dimension (3, 224, 224) and 7 classes.\n4. VLCS (Fang et al., 2013) comprises photographic domains d ∈ { Caltech101, LabelMe, SUN09, VOC2007 }. This dataset contains 10, 729 examples of dimension (3, 224, 224) and 5 classes.\n5. OfficeHome (Venkateswara et al., 2017) includes domains d ∈ { art, clipart, product, real }. This dataset contains 15, 588 examples of dimension (3, 224, 224) and 65 classes.\n6. Terra Incognita (Beery et al., 2018) contains photographs of wild animals taken by camera traps at locations d ∈ {L100,L38,L43,L46}. Our version of this dataset contains 24, 788 examples of dimension (3, 224, 224) and 10 classes.\n7. DomainNet (Peng et al., 2019) has six domains d ∈ { clipart, infograph, painting, quickdraw, real, sketch }. This dataset contains 586, 575 examples of size (3, 224, 224) and 345 classes.\nFor all datasets, we first pool the raw training, validation, and testing images together. For each random seed, we then instantiate random training, validation, and testing splits." }, { "heading": "B.3 MODEL SELECTION CRITERIA, ILLUSTRATED", "text": "Consider Figure 1, and let Ti = {Ai, Bi, Ci} for i ∈ {1, 2}. Training-domain validation trains each hyperparameter configuration on T1 and chooses the configuration with the highest performance in T2. Leave-one-out validation trains one clone FZ of each hyperparameter configuration on T1 \\Z, for Z ∈ T1; then, it chooses the configuration with highest ∑ Z∈T1 Performance(FZ , Z). Test-domain validation trains each hyperparameter configuration on T1 and chooses the configuration with the highest performance onD2, only looking at its final epoch. Finally, result tables show the performance of selected models on D1." }, { "heading": "B.4 ARCHITECTURES AND HYPERPARAMETERS", "text": "Neural network architectures used for each dataset:\nDataset Architecture Colored MNIST MNIST ConvNetRotated MNIST\nPACS ResNet-50 VLCS OfficeHome TerraIncognita\nNeural network architecture for MNIST experiments:\n# Layer 1 Conv2D (in=d, out=64) 2 ReLU 3 GroupNorm (groups=8) 4 Conv2D (in=64, out=128, stride=2) 5 ReLU 6 GroupNorm (8 groups) 7 Conv2D (in=128, out=128) 8 ReLU 9 GroupNorm (8 groups) 10 Conv2D (in=128, out=128) 11 ReLU 12 GroupNorm (8 groups) 13 Global average-pooling\nFor “ResNet-50”, we replace the final (softmax) layer of a ResNet50 pretrained on ImageNet and fine-tune the entire network. Since minibatches from different domains follow different distributions, batch normalization degrades domain generalization algorithms (Seo et al., 2020). Therefore, we freeze all batch normalization layers before fine-tuning. We insert a dropout layer before the final ResNet-50 linear layer.\nTable 6 lists all algorithm hyperparameters, their default values, and their sweep random search distribution. We optimize all models using Adam (Kingma and Ba, 2015)." }, { "heading": "B.5 EXTENDING DOMAINBED", "text": "Algorithms are classes that implement two methods: .update(minibatches) and .predict(x). The update method receives a list of minibatches, one minibatch per training domain, and each minibatch containing one input and one output tensor. 
For example, to implement group DRO (Sagawa et al., 2019, Algorithm 1), we simply write the following in algorithms.py:\nclass GroupDRO(ERM): def __init__(self, input_shape, num_classes, num_domains, hparams):\nsuper().__init__(input_shape, num_classes, num_domains, hparams) self.register_buffer(\"q\", torch.Tensor())\ndef update(self, minibatches): device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\nif not len(self.q): self.q = torch.ones(len(minibatches)).to(device)\nlosses = torch.zeros(len(minibatches)).to(device)\nfor m in range(len(minibatches)): x, y = minibatches[m] losses[m] = F.cross_entropy(self.predict(x), y) self.q[m] *= (self.hparams[\"dro_eta\"] * losses[m].data).exp()\nself.q /= self.q.sum() loss = torch.dot(losses, self.q) / len(minibatches)\nself.optimizer.zero_grad() loss.backward() self.optimizer.step()\nreturn {’loss’: loss.item()}\nALGORITHMS.append(’GroupDRO’)\nBy inheriting from ERM, the new GroupDRO class has access to a default classifier .network, optimizer .optimizer, and prediction method .predict(x). Finally, we tell DOMAINBED about the default values and hyperparameter search distributions of the hyperparameters of this new algorithm. We do so by adding the following to the function hparams in hparams registry.py:\nhparams[’dro_eta’] = (1e-2, 10**random_state.uniform(-3, -1))\nTo add a new image classification dataset to DOMAINBED, arrange your image files as /root/MyDataset/domain/class/image.jpg. Then, append to datasets.py:\nclass MyDataset(MultipleEnvironmentImageFolder): ENVIRONMENTS = [’Env1’, ’Env2’, ’Env3’] def __init__(self, root, test_envs, augment=True):\nself.dir = os.path.join(root, \"MyDataset/\") super().__init__(self.dir, test_envs, augment)\nDATASETS.append(’MyDataset’)\nWe are now ready to train our new algorithm on our new dataset, using the second domain as test:\npython train.py --model DRO --dataset MyDataset --data_dir /root --test_envs 1 \\ --hparams ’{\"dro_eta\": 0.2}’\nFinally, we can run a fully automated sweep on all datasets, algorithms, test domains, and model selection criteria by simply invoking python sweep.py, after extending the file command launchers.py to your computing infrastructure. When the sweep finishes, the script collect results.py automatically generates all the result tables shown in this manuscript.\nExtension to UDA One can use DOMAINBED to perform experimentation on unsupervised domain adaptation by extending the .update(minibatches) methods to accept unlabeled examples from the test domain." 
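As a rough sketch of the UDA extension mentioned above (our own illustration, not part of DOMAINBED: the `unlabeled` argument, the entropy-minimization penalty and the `uda_lambda` hyperparameter are assumptions made for exposition), an algorithm in algorithms.py could accept unlabeled test-domain minibatches as follows, with torch, torch.nn.functional as F, ERM and ALGORITHMS assumed to be available in that file as for the other algorithms:

class ERMWithEntropyMin(ERM):
    # Hypothetical sketch: ERM on the labeled training domains plus an entropy
    # penalty on unlabeled examples from the test domain (a simple UDA-style term).
    def update(self, minibatches, unlabeled=None):
        # Supervised loss on the labeled training-domain minibatches.
        x = torch.cat([x for x, y in minibatches])
        y = torch.cat([y for x, y in minibatches])
        loss = F.cross_entropy(self.predict(x), y)

        if unlabeled is not None:
            # `unlabeled` is assumed to be a list of input tensors, one per test domain.
            probs = F.softmax(self.predict(torch.cat(unlabeled)), dim=1)
            entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
            loss = loss + self.hparams.get("uda_lambda", 0.1) * entropy

        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return {'loss': loss.item()}

ALGORITHMS.append('ERMWithEntropyMin')

As with the GroupDRO example above, the new hyperparameter would also need a default value and a random-search distribution registered in hparams registry.py.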
}, { "heading": "C FULL DOMAINBED RESULTS", "text": "" }, { "heading": "C.1 MODEL SELECTION: TRAINING-DOMAIN VALIDATION SET", "text": "" }, { "heading": "C.1.1 COLOREDMNIST", "text": "Algorithm +90% +80% -90% Avg ERM 71.7 ± 0.1 72.9 ± 0.2 10.0 ± 0.1 51.5 IRM 72.5 ± 0.1 73.3 ± 0.5 10.2 ± 0.3 52.0 GroupDRO 73.1 ± 0.3 73.2 ± 0.2 10.0 ± 0.2 52.1 Mixup 72.7 ± 0.4 73.4 ± 0.1 10.1 ± 0.1 52.1 MLDG 71.5 ± 0.2 73.1 ± 0.2 9.8 ± 0.1 51.5 CORAL 71.6 ± 0.3 73.1 ± 0.1 9.9 ± 0.1 51.5 MMD 71.4 ± 0.3 73.1 ± 0.2 9.9 ± 0.3 51.5 DANN 71.4 ± 0.9 73.1 ± 0.1 10.0 ± 0.0 51.5 CDANN 72.0 ± 0.2 73.0 ± 0.2 10.2 ± 0.1 51.7 MTL 70.9 ± 0.2 72.8 ± 0.3 10.5 ± 0.1 51.4 SagNet 71.8 ± 0.2 73.0 ± 0.2 10.3 ± 0.0 51.7 ARM 82.0 ± 0.5 76.5 ± 0.3 10.2 ± 0.0 56.2 VREx 72.4 ± 0.3 72.9 ± 0.4 10.2 ± 0.0 51.8 RSC 71.9 ± 0.3 73.1 ± 0.2 10.0 ± 0.2 51.7" }, { "heading": "C.1.2 ROTATEDMNIST", "text": "Algorithm 0 15 30 45 60 75 Avg ERM 95.9 ± 0.1 98.9 ± 0.0 98.8 ± 0.0 98.9 ± 0.0 98.9 ± 0.0 96.4 ± 0.0 98.0 IRM 95.5 ± 0.1 98.8 ± 0.2 98.7 ± 0.1 98.6 ± 0.1 98.7 ± 0.0 95.9 ± 0.2 97.7 GroupDRO 95.6 ± 0.1 98.9 ± 0.1 98.9 ± 0.1 99.0 ± 0.0 98.9 ± 0.0 96.5 ± 0.2 98.0 Mixup 95.8 ± 0.3 98.9 ± 0.0 98.9 ± 0.0 98.9 ± 0.0 98.8 ± 0.1 96.5 ± 0.3 98.0 MLDG 95.8 ± 0.1 98.9 ± 0.1 99.0 ± 0.0 98.9 ± 0.1 99.0 ± 0.0 95.8 ± 0.3 97.9 CORAL 95.8 ± 0.3 98.8 ± 0.0 98.9 ± 0.0 99.0 ± 0.0 98.9 ± 0.1 96.4 ± 0.2 98.0 MMD 95.6 ± 0.1 98.9 ± 0.1 99.0 ± 0.0 99.0 ± 0.0 98.9 ± 0.0 96.0 ± 0.2 97.9 DANN 95.0 ± 0.5 98.9 ± 0.1 99.0 ± 0.0 99.0 ± 0.1 98.9 ± 0.0 96.3 ± 0.2 97.8 CDANN 95.7 ± 0.2 98.8 ± 0.0 98.9 ± 0.1 98.9 ± 0.1 98.9 ± 0.1 96.1 ± 0.3 97.9 MTL 95.6 ± 0.1 99.0 ± 0.1 99.0 ± 0.0 98.9 ± 0.1 99.0 ± 0.1 95.8 ± 0.2 97.9 SagNet 95.9 ± 0.3 98.9 ± 0.1 99.0 ± 0.1 99.1 ± 0.0 99.0 ± 0.1 96.3 ± 0.1 98.0 ARM 96.7 ± 0.2 99.1 ± 0.0 99.0 ± 0.0 99.0 ± 0.1 99.1 ± 0.1 96.5 ± 0.4 98.2 VREx 95.9 ± 0.2 99.0 ± 0.1 98.9 ± 0.1 98.9 ± 0.1 98.7 ± 0.1 96.2 ± 0.2 97.9 RSC 94.8 ± 0.5 98.7 ± 0.1 98.8 ± 0.1 98.8 ± 0.0 98.9 ± 0.1 95.9 ± 0.2 97.6\nC.1.3 VLCS" }, { "heading": "Algorithm C L S V Avg", "text": "ERM 97.7 ± 0.4 64.3 ± 0.9 73.4 ± 0.5 74.6 ± 1.3 77.5 IRM 98.6 ± 0.1 64.9 ± 0.9 73.4 ± 0.6 77.3 ± 0.9 78.5 GroupDRO 97.3 ± 0.3 63.4 ± 0.9 69.5 ± 0.8 76.7 ± 0.7 76.7 Mixup 98.3 ± 0.6 64.8 ± 1.0 72.1 ± 0.5 74.3 ± 0.8 77.4 MLDG 97.4 ± 0.2 65.2 ± 0.7 71.0 ± 1.4 75.3 ± 1.0 77.2 CORAL 98.3 ± 0.1 66.1 ± 1.2 73.4 ± 0.3 77.5 ± 1.2 78.8 MMD 97.7 ± 0.1 64.0 ± 1.1 72.8 ± 0.2 75.3 ± 3.3 77.5 DANN 99.0 ± 0.3 65.1 ± 1.4 73.1 ± 0.3 77.2 ± 0.6 78.6 CDANN 97.1 ± 0.3 65.1 ± 1.2 70.7 ± 0.8 77.1 ± 1.5 77.5 MTL 97.8 ± 0.4 64.3 ± 0.3 71.5 ± 0.7 75.3 ± 1.7 77.2 SagNet 97.9 ± 0.4 64.5 ± 0.5 71.4 ± 1.3 77.5 ± 0.5 77.8 ARM 98.7 ± 0.2 63.6 ± 0.7 71.3 ± 1.2 76.7 ± 0.6 77.6 VREx 98.4 ± 0.3 64.4 ± 1.4 74.1 ± 0.4 76.2 ± 1.3 78.3 RSC 97.9 ± 0.1 62.5 ± 0.7 72.3 ± 1.2 75.6 ± 0.8 77.1\nC.1.4 PACS" }, { "heading": "Algorithm A C P S Avg", "text": "ERM 84.7 ± 0.4 80.8 ± 0.6 97.2 ± 0.3 79.3 ± 1.0 85.5 IRM 84.8 ± 1.3 76.4 ± 1.1 96.7 ± 0.6 76.1 ± 1.0 83.5 GroupDRO 83.5 ± 0.9 79.1 ± 0.6 96.7 ± 0.3 78.3 ± 2.0 84.4 Mixup 86.1 ± 0.5 78.9 ± 0.8 97.6 ± 0.1 75.8 ± 1.8 84.6 MLDG 85.5 ± 1.4 80.1 ± 1.7 97.4 ± 0.3 76.6 ± 1.1 84.9 CORAL 88.3 ± 0.2 80.0 ± 0.5 97.5 ± 0.3 78.8 ± 1.3 86.2 MMD 86.1 ± 1.4 79.4 ± 0.9 96.6 ± 0.2 76.5 ± 0.5 84.6 DANN 86.4 ± 0.8 77.4 ± 0.8 97.3 ± 0.4 73.5 ± 2.3 83.6 CDANN 84.6 ± 1.8 75.5 ± 0.9 96.8 ± 0.3 73.5 ± 0.6 82.6 MTL 87.5 ± 0.8 77.1 ± 0.5 96.4 ± 0.8 77.3 ± 1.8 84.6 SagNet 87.4 ± 1.0 80.7 ± 0.6 97.1 ± 0.1 80.0 ± 0.4 86.3 ARM 86.8 ± 0.6 76.8 ± 0.5 97.4 ± 0.3 79.3 ± 1.2 85.1 VREx 86.0 ± 1.6 79.1 ± 0.6 96.9 ± 0.5 77.7 ± 1.7 84.9 
RSC 85.4 ± 0.8 79.7 ± 1.8 97.6 ± 0.3 78.2 ± 1.2 85.2" }, { "heading": "C.1.5 OFFICEHOME", "text": "" }, { "heading": "Algorithm A C P R Avg", "text": "ERM 61.3 ± 0.7 52.4 ± 0.3 75.8 ± 0.1 76.6 ± 0.3 66.5 IRM 58.9 ± 2.3 52.2 ± 1.6 72.1 ± 2.9 74.0 ± 2.5 64.3 GroupDRO 60.4 ± 0.7 52.7 ± 1.0 75.0 ± 0.7 76.0 ± 0.7 66.0 Mixup 62.4 ± 0.8 54.8 ± 0.6 76.9 ± 0.3 78.3 ± 0.2 68.1 MLDG 61.5 ± 0.9 53.2 ± 0.6 75.0 ± 1.2 77.5 ± 0.4 66.8 CORAL 65.3 ± 0.4 54.4 ± 0.5 76.5 ± 0.1 78.4 ± 0.5 68.7 MMD 60.4 ± 0.2 53.3 ± 0.3 74.3 ± 0.1 77.4 ± 0.6 66.3 DANN 59.9 ± 1.3 53.0 ± 0.3 73.6 ± 0.7 76.9 ± 0.5 65.9 CDANN 61.5 ± 1.4 50.4 ± 2.4 74.4 ± 0.9 76.6 ± 0.8 65.8 MTL 61.5 ± 0.7 52.4 ± 0.6 74.9 ± 0.4 76.8 ± 0.4 66.4 SagNet 63.4 ± 0.2 54.8 ± 0.4 75.8 ± 0.4 78.3 ± 0.3 68.1 ARM 58.9 ± 0.8 51.0 ± 0.5 74.1 ± 0.1 75.2 ± 0.3 64.8 VREx 60.7 ± 0.9 53.0 ± 0.9 75.3 ± 0.1 76.6 ± 0.5 66.4 RSC 60.7 ± 1.4 51.4 ± 0.3 74.8 ± 1.1 75.1 ± 1.3 65.5" }, { "heading": "C.1.6 TERRAINCOGNITA", "text": "" }, { "heading": "Algorithm L100 L38 L43 L46 Avg", "text": "ERM 49.8 ± 4.4 42.1 ± 1.4 56.9 ± 1.8 35.7 ± 3.9 46.1 IRM 54.6 ± 1.3 39.8 ± 1.9 56.2 ± 1.8 39.6 ± 0.8 47.6 GroupDRO 41.2 ± 0.7 38.6 ± 2.1 56.7 ± 0.9 36.4 ± 2.1 43.2 Mixup 59.6 ± 2.0 42.2 ± 1.4 55.9 ± 0.8 33.9 ± 1.4 47.9 MLDG 54.2 ± 3.0 44.3 ± 1.1 55.6 ± 0.3 36.9 ± 2.2 47.7 CORAL 51.6 ± 2.4 42.2 ± 1.0 57.0 ± 1.0 39.8 ± 2.9 47.6 MMD 41.9 ± 3.0 34.8 ± 1.0 57.0 ± 1.9 35.2 ± 1.8 42.2 DANN 51.1 ± 3.5 40.6 ± 0.6 57.4 ± 0.5 37.7 ± 1.8 46.7 CDANN 47.0 ± 1.9 41.3 ± 4.8 54.9 ± 1.7 39.8 ± 2.3 45.8 MTL 49.3 ± 1.2 39.6 ± 6.3 55.6 ± 1.1 37.8 ± 0.8 45.6 SagNet 53.0 ± 2.9 43.0 ± 2.5 57.9 ± 0.6 40.4 ± 1.3 48.6 ARM 49.3 ± 0.7 38.3 ± 2.4 55.8 ± 0.8 38.7 ± 1.3 45.5 VREx 48.2 ± 4.3 41.7 ± 1.3 56.8 ± 0.8 38.7 ± 3.1 46.4 RSC 50.2 ± 2.2 39.2 ± 1.4 56.3 ± 1.4 40.8 ± 0.6 46.6" }, { "heading": "C.1.7 DOMAINNET", "text": "Algorithm clip info paint quick real sketch Avg ERM 58.1 ± 0.3 18.8 ± 0.3 46.7 ± 0.3 12.2 ± 0.4 59.6 ± 0.1 49.8 ± 0.4 40.9 IRM 48.5 ± 2.8 15.0 ± 1.5 38.3 ± 4.3 10.9 ± 0.5 48.2 ± 5.2 42.3 ± 3.1 33.9 GroupDRO 47.2 ± 0.5 17.5 ± 0.4 33.8 ± 0.5 9.3 ± 0.3 51.6 ± 0.4 40.1 ± 0.6 33.3 Mixup 55.7 ± 0.3 18.5 ± 0.5 44.3 ± 0.5 12.5 ± 0.4 55.8 ± 0.3 48.2 ± 0.5 39.2 MLDG 59.1 ± 0.2 19.1 ± 0.3 45.8 ± 0.7 13.4 ± 0.3 59.6 ± 0.2 50.2 ± 0.4 41.2 CORAL 59.2 ± 0.1 19.7 ± 0.2 46.6 ± 0.3 13.4 ± 0.4 59.8 ± 0.2 50.1 ± 0.6 41.5 MMD 32.1 ± 13.3 11.0 ± 4.6 26.8 ± 11.3 8.7 ± 2.1 32.7 ± 13.8 28.9 ± 11.9 23.4 DANN 53.1 ± 0.2 18.3 ± 0.1 44.2 ± 0.7 11.8 ± 0.1 55.5 ± 0.4 46.8 ± 0.6 38.3 CDANN 54.6 ± 0.4 17.3 ± 0.1 43.7 ± 0.9 12.1 ± 0.7 56.2 ± 0.4 45.9 ± 0.5 38.3 MTL 57.9 ± 0.5 18.5 ± 0.4 46.0 ± 0.1 12.5 ± 0.1 59.5 ± 0.3 49.2 ± 0.1 40.6 SagNet 57.7 ± 0.3 19.0 ± 0.2 45.3 ± 0.3 12.7 ± 0.5 58.1 ± 0.5 48.8 ± 0.2 40.3 ARM 49.7 ± 0.3 16.3 ± 0.5 40.9 ± 1.1 9.4 ± 0.1 53.4 ± 0.4 43.5 ± 0.4 35.5 VREx 47.3 ± 3.5 16.0 ± 1.5 35.8 ± 4.6 10.9 ± 0.3 49.6 ± 4.9 42.0 ± 3.0 33.6 RSC 55.0 ± 1.2 18.3 ± 0.5 44.4 ± 0.6 12.2 ± 0.2 55.7 ± 0.7 47.8 ± 0.9 38.9" }, { "heading": "C.1.8 AVERAGES", "text": "" }, { "heading": "Algorithm ColoredMNIST RotatedMNIST VLCS PACS OfficeHome TerraIncognita DomainNet Avg", "text": "ERM 51.5 ± 0.1 98.0 ± 0.0 77.5 ± 0.4 85.5 ± 0.2 66.5 ± 0.3 46.1 ± 1.8 40.9 ± 0.1 66.6 IRM 52.0 ± 0.1 97.7 ± 0.1 78.5 ± 0.5 83.5 ± 0.8 64.3 ± 2.2 47.6 ± 0.8 33.9 ± 2.8 65.4 GroupDRO 52.1 ± 0.0 98.0 ± 0.0 76.7 ± 0.6 84.4 ± 0.8 66.0 ± 0.7 43.2 ± 1.1 33.3 ± 0.2 64.8 Mixup 52.1 ± 0.2 98.0 ± 0.1 77.4 ± 0.6 84.6 ± 0.6 68.1 ± 0.3 47.9 ± 0.8 39.2 ± 0.1 66.7 MLDG 51.5 ± 0.1 97.9 ± 0.0 77.2 ± 0.4 84.9 ± 1.0 66.8 ± 0.6 47.7 ± 0.9 41.2 ± 0.1 66.7 CORAL 51.5 ± 
0.1 98.0 ± 0.1 78.8 ± 0.6 86.2 ± 0.3 68.7 ± 0.3 47.6 ± 1.0 41.5 ± 0.1 67.5 MMD 51.5 ± 0.2 97.9 ± 0.0 77.5 ± 0.9 84.6 ± 0.5 66.3 ± 0.1 42.2 ± 1.6 23.4 ± 9.5 63.3 DANN 51.5 ± 0.3 97.8 ± 0.1 78.6 ± 0.4 83.6 ± 0.4 65.9 ± 0.6 46.7 ± 0.5 38.3 ± 0.1 66.1 CDANN 51.7 ± 0.1 97.9 ± 0.1 77.5 ± 0.1 82.6 ± 0.9 65.8 ± 1.3 45.8 ± 1.6 38.3 ± 0.3 65.6 MTL 51.4 ± 0.1 97.9 ± 0.0 77.2 ± 0.4 84.6 ± 0.5 66.4 ± 0.5 45.6 ± 1.2 40.6 ± 0.1 66.2 SagNet 51.7 ± 0.0 98.0 ± 0.0 77.8 ± 0.5 86.3 ± 0.2 68.1 ± 0.1 48.6 ± 1.0 40.3 ± 0.1 67.2 ARM 56.2 ± 0.2 98.2 ± 0.1 77.6 ± 0.3 85.1 ± 0.4 64.8 ± 0.3 45.5 ± 0.3 35.5 ± 0.2 66.1 VREx 51.8 ± 0.1 97.9 ± 0.1 78.3 ± 0.2 84.9 ± 0.6 66.4 ± 0.6 46.4 ± 0.6 33.6 ± 2.9 65.6 RSC 51.7 ± 0.2 97.6 ± 0.1 77.1 ± 0.5 85.2 ± 0.9 65.5 ± 0.9 46.6 ± 1.0 38.9 ± 0.5 66.1" }, { "heading": "C.2 MODEL SELECTION: LEAVE-ONE-DOMAIN-OUT CROSS-VALIDATION", "text": "" }, { "heading": "C.2.1 COLOREDMNIST", "text": "Algorithm +90% +80% -90% Avg ERM 50.0 ± 0.2 50.1 ± 0.2 10.0 ± 0.0 36.7 IRM 46.7 ± 2.4 51.2 ± 0.3 23.1 ± 10.7 40.3 GroupDRO 50.1 ± 0.5 50.0 ± 0.5 10.2 ± 0.1 36.8 Mixup 36.6 ± 10.9 53.4 ± 5.9 10.2 ± 0.1 33.4 MLDG 50.1 ± 0.6 50.1 ± 0.3 10.0 ± 0.1 36.7 CORAL 49.5 ± 0.0 59.5 ± 8.2 10.2 ± 0.1 39.7 MMD 50.3 ± 0.2 50.0 ± 0.4 9.9 ± 0.2 36.8 DANN 49.9 ± 0.1 62.1 ± 7.0 10.0 ± 0.1 40.7 CDANN 63.2 ± 10.1 44.4 ± 4.5 9.9 ± 0.2 39.1 MTL 44.3 ± 4.9 50.7 ± 0.0 10.1 ± 0.1 35.0 SagNet 49.9 ± 0.4 49.7 ± 0.3 10.0 ± 0.1 36.5 ARM 50.0 ± 0.3 50.1 ± 0.3 10.2 ± 0.0 36.8 VREx 50.2 ± 0.4 50.5 ± 0.5 10.1 ± 0.0 36.9 RSC 49.6 ± 0.3 49.7 ± 0.4 10.1 ± 0.0 36.5" }, { "heading": "C.2.2 ROTATEDMNIST", "text": "Algorithm 0 15 30 45 60 75 Avg ERM 95.3 ± 0.2 98.9 ± 0.1 98.9 ± 0.1 98.8 ± 0.1 98.5 ± 0.1 96.2 ± 0.2 97.7 IRM 94.5 ± 0.5 98.2 ± 0.2 98.7 ± 0.1 96.6 ± 1.5 98.4 ± 0.1 95.8 ± 0.1 97.0 GroupDRO 95.7 ± 0.3 98.7 ± 0.1 98.9 ± 0.1 98.6 ± 0.2 98.6 ± 0.2 95.3 ± 0.9 97.6 Mixup 94.8 ± 0.4 98.8 ± 0.0 98.9 ± 0.1 99.0 ± 0.1 98.9 ± 0.0 96.4 ± 0.3 97.8 MLDG 94.3 ± 0.4 98.8 ± 0.1 99.0 ± 0.1 98.8 ± 0.1 98.8 ± 0.1 96.0 ± 0.3 97.6 CORAL 95.7 ± 0.5 98.5 ± 0.2 98.9 ± 0.2 98.6 ± 0.2 98.8 ± 0.1 96.3 ± 0.2 97.8 MMD 95.8 ± 0.2 98.7 ± 0.1 99.0 ± 0.0 98.8 ± 0.1 98.7 ± 0.1 96.1 ± 0.2 97.8 DANN 95.1 ± 0.5 98.3 ± 0.5 98.5 ± 0.1 99.0 ± 0.1 98.6 ± 0.1 96.1 ± 0.3 97.6 CDANN 94.3 ± 0.5 98.4 ± 0.3 98.9 ± 0.1 98.7 ± 0.1 98.9 ± 0.1 95.7 ± 0.4 97.5 MTL 95.5 ± 0.3 98.6 ± 0.3 98.8 ± 0.1 99.0 ± 0.1 99.0 ± 0.1 95.6 ± 0.3 97.8 SagNet 94.0 ± 1.6 98.7 ± 0.2 98.9 ± 0.1 99.1 ± 0.0 98.8 ± 0.1 74.2 ± 16.5 94.0 ARM 95.8 ± 0.1 99.0 ± 0.1 99.0 ± 0.0 98.9 ± 0.1 98.8 ± 0.1 96.9 ± 0.3 98.1 VREx 95.8 ± 0.2 98.7 ± 0.0 98.5 ± 0.1 98.9 ± 0.1 74.0 ± 20.1 95.5 ± 0.5 93.6 RSC 94.6 ± 0.0 98.4 ± 0.2 99.0 ± 0.1 98.9 ± 0.0 98.8 ± 0.1 95.9 ± 0.4 97.6\nC.2.3 VLCS" }, { "heading": "Algorithm C L S V Avg", "text": "ERM 98.0 ± 0.4 62.6 ± 0.9 70.8 ± 1.9 77.5 ± 1.9 77.2 IRM 98.6 ± 0.3 66.0 ± 1.1 69.3 ± 0.9 71.5 ± 1.9 76.3 GroupDRO 98.1 ± 0.3 66.4 ± 0.9 71.0 ± 0.3 76.1 ± 1.4 77.9 Mixup 98.4 ± 0.3 63.4 ± 0.7 72.9 ± 0.8 76.1 ± 1.2 77.7 MLDG 98.5 ± 0.3 61.7 ± 1.2 73.6 ± 1.8 75.0 ± 0.8 77.2 CORAL 96.9 ± 0.9 65.7 ± 1.2 73.3 ± 0.7 78.7 ± 0.8 78.7 MMD 98.3 ± 0.1 65.6 ± 0.7 69.7 ± 1.0 75.7 ± 0.9 77.3 DANN 97.3 ± 1.3 63.7 ± 1.3 72.6 ± 1.4 74.2 ± 1.7 76.9 CDANN 97.6 ± 0.6 63.4 ± 0.8 70.5 ± 1.4 78.6 ± 0.5 77.5 MTL 97.6 ± 0.6 60.6 ± 1.3 71.0 ± 1.2 77.2 ± 0.7 76.6 SagNet 97.3 ± 0.4 61.6 ± 0.8 73.4 ± 1.9 77.6 ± 0.4 77.5 ARM 97.2 ± 0.5 62.7 ± 1.5 70.6 ± 0.6 75.8 ± 0.9 76.6 VREx 96.9 ± 0.3 64.8 ± 2.0 69.7 ± 1.8 75.5 ± 1.7 76.7 RSC 97.5 ± 0.6 63.1 ± 1.2 73.0 ± 1.3 76.2 ± 0.5 77.5\nC.2.4 PACS" }, { "heading": "Algorithm A 
C P S Avg", "text": "ERM 83.2 ± 1.3 76.8 ± 1.7 97.2 ± 0.3 74.8 ± 1.3 83.0 IRM 81.7 ± 2.4 77.0 ± 1.3 96.3 ± 0.2 71.1 ± 2.2 81.5 GroupDRO 84.4 ± 0.7 77.3 ± 0.8 96.8 ± 0.8 75.6 ± 1.4 83.5 Mixup 85.2 ± 1.9 77.0 ± 1.7 96.8 ± 0.8 73.9 ± 1.6 83.2 MLDG 81.4 ± 3.6 77.9 ± 2.3 96.2 ± 0.3 76.1 ± 2.1 82.9 CORAL 80.5 ± 2.8 74.5 ± 0.4 96.8 ± 0.3 78.6 ± 1.4 82.6 MMD 84.9 ± 1.7 75.1 ± 2.0 96.1 ± 0.9 76.5 ± 1.5 83.2 DANN 84.3 ± 2.8 72.4 ± 2.8 96.5 ± 0.8 70.8 ± 1.3 81.0 CDANN 78.3 ± 2.8 73.8 ± 1.6 96.4 ± 0.5 66.8 ± 5.5 78.8 MTL 85.6 ± 1.5 78.9 ± 0.6 97.1 ± 0.3 73.1 ± 2.7 83.7 SagNet 81.1 ± 1.9 75.4 ± 1.3 95.7 ± 0.9 77.2 ± 0.6 82.3 ARM 85.9 ± 0.3 73.3 ± 1.9 95.6 ± 0.4 72.1 ± 2.4 81.7 VREx 81.6 ± 4.0 74.1 ± 0.3 96.9 ± 0.4 72.8 ± 2.1 81.3 RSC 83.7 ± 1.7 82.9 ± 1.1 95.6 ± 0.7 68.1 ± 1.5 82.6" }, { "heading": "C.2.5 OFFICEHOME", "text": "Algorithm A C P R Avg ERM 61.1 ± 0.9 50.7 ± 0.6 74.6 ± 0.3 76.4 ± 0.6 65.7 IRM 58.2 ± 1.2 51.6 ± 1.2 73.3 ± 2.2 74.1 ± 1.7 64.3 GroupDRO 59.9 ± 0.4 51.0 ± 0.4 73.7 ± 0.3 76.0 ± 0.2 65.2 Mixup 61.4 ± 0.5 53.0 ± 0.3 75.8 ± 0.2 77.7 ± 0.3 67.0 MLDG 60.5 ± 1.4 51.9 ± 0.2 74.4 ± 0.6 77.6 ± 0.4 66.1 CORAL 64.5 ± 0.8 54.8 ± 0.2 76.6 ± 0.3 78.1 ± 0.2 68.5 MMD 60.8 ± 0.7 53.7 ± 0.5 50.2 ± 19.9 76.0 ± 0.7 60.2 DANN 60.2 ± 1.3 52.2 ± 0.9 71.3 ± 2.0 76.0 ± 0.6 64.9 CDANN 58.7 ± 2.9 49.0 ± 2.1 73.6 ± 1.0 76.0 ± 1.1 64.3 MTL 59.1 ± 0.3 52.1 ± 1.2 74.7 ± 0.4 77.0 ± 0.6 65.7 SagNet 63.0 ± 0.8 54.0 ± 0.3 76.6 ± 0.3 76.8 ± 0.4 67.6 ARM 58.7 ± 0.8 49.8 ± 1.1 73.1 ± 0.5 75.9 ± 0.1 64.4 VREx 57.6 ± 3.4 51.3 ± 1.3 74.9 ± 0.2 75.8 ± 0.7 64.9 RSC 61.6 ± 1.0 51.1 ± 0.8 74.8 ± 1.1 75.7 ± 0.9 65.8" }, { "heading": "C.2.6 TERRAINCOGNITA", "text": "Algorithm L100 L38 L43 L46 Avg ERM 34.4 ± 5.6 38.1 ± 4.0 55.7 ± 1.0 37.4 ± 1.1 41.4 IRM 46.7 ± 1.8 40.9 ± 2.1 52.2 ± 3.3 24.9 ± 10.0 41.2 GroupDRO 45.2 ± 6.2 40.1 ± 2.0 55.8 ± 1.4 38.3 ± 4.2 44.9 Mixup 59.7 ± 1.5 41.3 ± 2.1 55.9 ± 0.8 37.9 ± 1.5 48.7 MLDG 51.0 ± 1.9 39.2 ± 0.2 56.2 ± 1.1 38.3 ± 2.4 46.2 CORAL 52.4 ± 7.2 39.7 ± 1.5 56.1 ± 0.9 37.1 ± 2.2 46.3 MMD 49.1 ± 2.2 42.0 ± 1.6 55.3 ± 1.9 39.5 ± 2.0 46.5 DANN 46.9 ± 3.9 38.8 ± 1.1 55.5 ± 1.4 36.2 ± 1.1 44.4 CDANN 43.9 ± 7.3 32.5 ± 4.4 41.0 ± 7.8 42.4 ± 1.8 39.9 MTL 42.8 ± 4.6 43.9 ± 1.1 55.5 ± 0.8 37.5 ± 1.9 44.9 SagNet 48.1 ± 2.4 47.1 ± 0.8 54.4 ± 1.1 39.1 ± 1.8 47.2 ARM 48.9 ± 5.3 34.4 ± 3.5 51.9 ± 0.8 35.4 ± 2.3 42.6 VREx 46.4 ± 1.4 25.5 ± 5.8 39.6 ± 12.8 37.8 ± 3.6 37.3 RSC 40.0 ± 1.3 32.1 ± 2.5 53.9 ± 0.5 34.2 ± 0.2 40.0" }, { "heading": "C.2.7 DOMAINNET", "text": "Algorithm clip info paint quick real sketch Avg ERM 58.1 ± 0.3 17.8 ± 0.3 47.0 ± 0.3 12.2 ± 0.4 59.2 ± 0.7 49.5 ± 0.6 40.6 IRM 47.5 ± 2.7 15.0 ± 1.5 37.3 ± 5.1 10.9 ± 0.5 48.0 ± 5.4 42.3 ± 3.1 33.5 GroupDRO 47.2 ± 0.5 17.0 ± 0.6 33.8 ± 0.5 9.2 ± 0.4 51.6 ± 0.4 39.2 ± 1.2 33.0 Mixup 54.4 ± 0.6 18.0 ± 0.4 44.5 ± 0.5 11.5 ± 0.2 55.8 ± 1.1 46.9 ± 0.2 38.5 MLDG 58.3 ± 0.7 19.3 ± 0.2 45.8 ± 0.7 13.2 ± 0.3 59.4 ± 0.2 49.8 ± 0.3 41.0 CORAL 59.2 ± 0.1 19.5 ± 0.3 46.2 ± 0.1 13.4 ± 0.4 59.1 ± 0.5 49.5 ± 0.8 41.1 MMD 32.2 ± 13.3 11.0 ± 4.6 26.8 ± 11.3 8.7 ± 2.1 32.7 ± 13.8 28.9 ± 11.9 23.4 DANN 52.7 ± 0.1 18.0 ± 0.3 44.2 ± 0.7 11.8 ± 0.1 55.5 ± 0.4 46.8 ± 0.6 38.2 CDANN 53.1 ± 0.9 17.3 ± 0.1 43.7 ± 0.9 11.6 ± 0.6 56.2 ± 0.4 45.9 ± 0.5 38.0 MTL 57.3 ± 0.3 19.3 ± 0.2 45.7 ± 0.4 12.5 ± 0.1 59.3 ± 0.2 49.2 ± 0.1 40.6 SagNet 56.2 ± 0.3 18.9 ± 0.2 46.2 ± 0.5 12.6 ± 0.6 58.2 ± 0.6 49.1 ± 0.2 40.2 ARM 49.0 ± 0.7 15.8 ± 0.3 40.8 ± 1.1 9.4 ± 0.2 53.0 ± 0.4 43.4 ± 0.3 35.2 VREx 46.5 ± 4.1 15.6 ± 1.8 35.8 ± 4.6 10.9 ± 0.3 49.6 ± 4.9 42.0 ± 3.0 33.4 RSC 55.0 ± 1.2 
18.3 ± 0.5 44.4 ± 0.6 12.2 ± 0.2 55.7 ± 0.7 47.8 ± 0.9 38.9" }, { "heading": "C.2.8 AVERAGES", "text": "" }, { "heading": "Algorithm ColoredMNIST RotatedMNIST VLCS PACS OfficeHome TerraIncognita DomainNet Avg", "text": "ERM 36.7 ± 0.1 97.7 ± 0.0 77.2 ± 0.4 83.0 ± 0.7 65.7 ± 0.5 41.4 ± 1.4 40.6 ± 0.2 63.2 IRM 40.3 ± 4.2 97.0 ± 0.2 76.3 ± 0.6 81.5 ± 0.8 64.3 ± 1.5 41.2 ± 3.6 33.5 ± 3.0 62.0 GroupDRO 36.8 ± 0.1 97.6 ± 0.1 77.9 ± 0.5 83.5 ± 0.2 65.2 ± 0.2 44.9 ± 1.4 33.0 ± 0.3 62.7 Mixup 33.4 ± 4.7 97.8 ± 0.0 77.7 ± 0.6 83.2 ± 0.4 67.0 ± 0.2 48.7 ± 0.4 38.5 ± 0.3 63.8 MLDG 36.7 ± 0.2 97.6 ± 0.0 77.2 ± 0.9 82.9 ± 1.7 66.1 ± 0.5 46.2 ± 0.9 41.0 ± 0.2 64.0 CORAL 39.7 ± 2.8 97.8 ± 0.1 78.7 ± 0.4 82.6 ± 0.5 68.5 ± 0.2 46.3 ± 1.7 41.1 ± 0.1 65.0 MMD 36.8 ± 0.1 97.8 ± 0.1 77.3 ± 0.5 83.2 ± 0.2 60.2 ± 5.2 46.5 ± 1.5 23.4 ± 9.5 60.7 DANN 40.7 ± 2.3 97.6 ± 0.2 76.9 ± 0.4 81.0 ± 1.1 64.9 ± 1.2 44.4 ± 1.1 38.2 ± 0.2 63.4 CDANN 39.1 ± 4.4 97.5 ± 0.2 77.5 ± 0.2 78.8 ± 2.2 64.3 ± 1.7 39.9 ± 3.2 38.0 ± 0.1 62.2 MTL 35.0 ± 1.7 97.8 ± 0.1 76.6 ± 0.5 83.7 ± 0.4 65.7 ± 0.5 44.9 ± 1.2 40.6 ± 0.1 63.5 SagNet 36.5 ± 0.1 94.0 ± 3.0 77.5 ± 0.3 82.3 ± 0.1 67.6 ± 0.3 47.2 ± 0.9 40.2 ± 0.2 63.6 ARM 36.8 ± 0.0 98.1 ± 0.1 76.6 ± 0.5 81.7 ± 0.2 64.4 ± 0.2 42.6 ± 2.7 35.2 ± 0.1 62.2 VREx 36.9 ± 0.3 93.6 ± 3.4 76.7 ± 1.0 81.3 ± 0.9 64.9 ± 1.3 37.3 ± 3.0 33.4 ± 3.1 60.6 RSC 36.5 ± 0.2 97.6 ± 0.1 77.5 ± 0.5 82.6 ± 0.7 65.8 ± 0.7 40.0 ± 0.8 38.9 ± 0.5 62.7" }, { "heading": "C.3 MODEL SELECTION: TEST-DOMAIN VALIDATION SET (ORACLE)", "text": "" }, { "heading": "C.3.1 COLOREDMNIST", "text": "Algorithm +90% +80% -90% Avg ERM 71.8 ± 0.4 72.9 ± 0.1 28.7 ± 0.5 57.8 IRM 72.0 ± 0.1 72.5 ± 0.3 58.5 ± 3.3 67.7 GroupDRO 73.5 ± 0.3 73.0 ± 0.3 36.8 ± 2.8 61.1 Mixup 72.5 ± 0.2 73.9 ± 0.4 28.6 ± 0.2 58.4 MLDG 71.9 ± 0.3 73.5 ± 0.2 29.1 ± 0.9 58.2 CORAL 71.1 ± 0.2 73.4 ± 0.2 31.1 ± 1.6 58.6 MMD 69.0 ± 2.3 70.4 ± 1.6 50.6 ± 0.2 63.3 DANN 72.4 ± 0.5 73.9 ± 0.5 24.9 ± 2.7 57.0 CDANN 71.8 ± 0.5 72.9 ± 0.1 33.8 ± 6.4 59.5 MTL 71.2 ± 0.2 73.5 ± 0.2 28.0 ± 0.6 57.6 SagNet 72.1 ± 0.3 73.2 ± 0.3 29.4 ± 0.5 58.2 ARM 84.9 ± 0.9 76.8 ± 0.6 27.9 ± 2.1 63.2 VREx 72.8 ± 0.3 73.0 ± 0.3 55.2 ± 4.0 67.0 RSC 72.0 ± 0.1 73.2 ± 0.1 30.2 ± 1.6 58.5" }, { "heading": "C.3.2 ROTATEDMNIST", "text": "Algorithm 0 15 30 45 60 75 Avg ERM 95.3 ± 0.2 98.7 ± 0.1 98.9 ± 0.1 98.7 ± 0.2 98.9 ± 0.0 96.2 ± 0.2 97.8 IRM 94.9 ± 0.6 98.7 ± 0.2 98.6 ± 0.1 98.6 ± 0.2 98.7 ± 0.1 95.2 ± 0.3 97.5 GroupDRO 95.9 ± 0.1 99.0 ± 0.1 98.9 ± 0.1 98.8 ± 0.1 98.6 ± 0.1 96.3 ± 0.4 97.9 Mixup 95.8 ± 0.3 98.7 ± 0.0 99.0 ± 0.1 98.8 ± 0.1 98.8 ± 0.1 96.6 ± 0.2 98.0 MLDG 95.7 ± 0.2 98.9 ± 0.1 98.8 ± 0.1 98.9 ± 0.1 98.6 ± 0.1 95.8 ± 0.4 97.8 CORAL 96.2 ± 0.2 98.8 ± 0.1 98.8 ± 0.1 98.8 ± 0.1 98.9 ± 0.1 96.4 ± 0.2 98.0 MMD 96.1 ± 0.2 98.9 ± 0.0 99.0 ± 0.0 98.8 ± 0.0 98.9 ± 0.0 96.4 ± 0.2 98.0 DANN 95.9 ± 0.1 98.9 ± 0.1 98.6 ± 0.2 98.7 ± 0.1 98.9 ± 0.0 96.3 ± 0.3 97.9 CDANN 95.9 ± 0.2 98.8 ± 0.0 98.7 ± 0.1 98.9 ± 0.1 98.8 ± 0.1 96.1 ± 0.3 97.9 MTL 96.1 ± 0.2 98.9 ± 0.0 99.0 ± 0.0 98.7 ± 0.1 99.0 ± 0.0 95.8 ± 0.3 97.9 SagNet 95.9 ± 0.1 99.0 ± 0.1 98.9 ± 0.1 98.6 ± 0.1 98.8 ± 0.1 96.3 ± 0.1 97.9 ARM 95.9 ± 0.4 99.0 ± 0.1 98.8 ± 0.1 98.9 ± 0.1 99.1 ± 0.1 96.7 ± 0.2 98.1 VREx 95.5 ± 0.2 99.0 ± 0.0 98.7 ± 0.2 98.8 ± 0.1 98.8 ± 0.0 96.4 ± 0.0 97.9 RSC 95.4 ± 0.1 98.6 ± 0.1 98.6 ± 0.1 98.9 ± 0.0 98.8 ± 0.1 95.4 ± 0.3 97.6\nC.3.3 VLCS" }, { "heading": "Algorithm C L S V Avg", "text": "ERM 97.6 ± 0.3 67.9 ± 0.7 70.9 ± 0.2 74.0 ± 0.6 77.6 IRM 97.3 ± 0.2 66.7 ± 0.1 71.0 ± 2.3 72.8 ± 0.4 76.9 GroupDRO 97.7 ± 0.2 65.9 ± 
0.2 72.8 ± 0.8 73.4 ± 1.3 77.4 Mixup 97.8 ± 0.4 67.2 ± 0.4 71.5 ± 0.2 75.7 ± 0.6 78.1 MLDG 97.1 ± 0.5 66.6 ± 0.5 71.5 ± 0.1 75.0 ± 0.9 77.5 CORAL 97.3 ± 0.2 67.5 ± 0.6 71.6 ± 0.6 74.5 ± 0.0 77.7 MMD 98.8 ± 0.0 66.4 ± 0.4 70.8 ± 0.5 75.6 ± 0.4 77.9 DANN 99.0 ± 0.2 66.3 ± 1.2 73.4 ± 1.4 80.1 ± 0.5 79.7 CDANN 98.2 ± 0.1 68.8 ± 0.5 74.3 ± 0.6 78.1 ± 0.5 79.9 MTL 97.9 ± 0.7 66.1 ± 0.7 72.0 ± 0.4 74.9 ± 1.1 77.7 SagNet 97.4 ± 0.3 66.4 ± 0.4 71.6 ± 0.1 75.0 ± 0.8 77.6 ARM 97.6 ± 0.6 66.5 ± 0.3 72.7 ± 0.6 74.4 ± 0.7 77.8 VREx 98.4 ± 0.2 66.4 ± 0.7 72.8 ± 0.1 75.0 ± 1.4 78.1 RSC 98.0 ± 0.4 67.2 ± 0.3 70.3 ± 1.3 75.6 ± 0.4 77.8\nC.3.4 PACS" }, { "heading": "Algorithm A C P S Avg", "text": "ERM 86.5 ± 1.0 81.3 ± 0.6 96.2 ± 0.3 82.7 ± 1.1 86.7 IRM 84.2 ± 0.9 79.7 ± 1.5 95.9 ± 0.4 78.3 ± 2.1 84.5 GroupDRO 87.5 ± 0.5 82.9 ± 0.6 97.1 ± 0.3 81.1 ± 1.2 87.1 Mixup 87.5 ± 0.4 81.6 ± 0.7 97.4 ± 0.2 80.8 ± 0.9 86.8 MLDG 87.0 ± 1.2 82.5 ± 0.9 96.7 ± 0.3 81.2 ± 0.6 86.8 CORAL 86.6 ± 0.8 81.8 ± 0.9 97.1 ± 0.5 82.7 ± 0.6 87.1 MMD 88.1 ± 0.8 82.6 ± 0.7 97.1 ± 0.5 81.2 ± 1.2 87.2 DANN 87.0 ± 0.4 80.3 ± 0.6 96.8 ± 0.3 76.9 ± 1.1 85.2 CDANN 87.7 ± 0.6 80.7 ± 1.2 97.3 ± 0.4 77.6 ± 1.5 85.8 MTL 87.0 ± 0.2 82.7 ± 0.8 96.5 ± 0.7 80.5 ± 0.8 86.7 SagNet 87.4 ± 0.5 81.2 ± 1.2 96.3 ± 0.8 80.7 ± 1.1 86.4 ARM 85.0 ± 1.2 81.4 ± 0.2 95.9 ± 0.3 80.9 ± 0.5 85.8 VREx 87.8 ± 1.2 81.8 ± 0.7 97.4 ± 0.2 82.1 ± 0.7 87.2 RSC 86.0 ± 0.7 81.8 ± 0.9 96.8 ± 0.7 80.4 ± 0.5 86.2" }, { "heading": "C.3.5 OFFICEHOME", "text": "" }, { "heading": "Algorithm A C P R Avg", "text": "ERM 61.7 ± 0.7 53.4 ± 0.3 74.1 ± 0.4 76.2 ± 0.6 66.4 IRM 56.4 ± 3.2 51.2 ± 2.3 71.7 ± 2.7 72.7 ± 2.7 63.0 GroupDRO 60.5 ± 1.6 53.1 ± 0.3 75.5 ± 0.3 75.9 ± 0.7 66.2 Mixup 63.5 ± 0.2 54.6 ± 0.4 76.0 ± 0.3 78.0 ± 0.7 68.0 MLDG 60.5 ± 0.7 54.2 ± 0.5 75.0 ± 0.2 76.7 ± 0.5 66.6 CORAL 64.8 ± 0.8 54.1 ± 0.9 76.5 ± 0.4 78.2 ± 0.4 68.4 MMD 60.4 ± 1.0 53.4 ± 0.5 74.9 ± 0.1 76.1 ± 0.7 66.2 DANN 60.6 ± 1.4 51.8 ± 0.7 73.4 ± 0.5 75.5 ± 0.9 65.3 CDANN 57.9 ± 0.2 52.1 ± 1.2 74.9 ± 0.7 76.2 ± 0.2 65.3 MTL 60.7 ± 0.8 53.5 ± 1.3 75.2 ± 0.6 76.6 ± 0.6 66.5 SagNet 62.7 ± 0.5 53.6 ± 0.5 76.0 ± 0.3 77.8 ± 0.1 67.5 ARM 58.8 ± 0.5 51.8 ± 0.7 74.0 ± 0.1 74.4 ± 0.2 64.8 VREx 59.6 ± 1.0 53.3 ± 0.3 73.2 ± 0.5 76.6 ± 0.4 65.7 RSC 61.7 ± 0.8 53.0 ± 0.9 74.8 ± 0.8 76.3 ± 0.5 66.5" }, { "heading": "C.3.6 TERRAINCOGNITA", "text": "" }, { "heading": "Algorithm L100 L38 L43 L46 Avg", "text": "ERM 59.4 ± 0.9 49.3 ± 0.6 60.1 ± 1.1 43.2 ± 0.5 53.0 IRM 56.5 ± 2.5 49.8 ± 1.5 57.1 ± 2.2 38.6 ± 1.0 50.5 GroupDRO 60.4 ± 1.5 48.3 ± 0.4 58.6 ± 0.8 42.2 ± 0.8 52.4 Mixup 67.6 ± 1.8 51.0 ± 1.3 59.0 ± 0.0 40.0 ± 1.1 54.4 MLDG 59.2 ± 0.1 49.0 ± 0.9 58.4 ± 0.9 41.4 ± 1.0 52.0 CORAL 60.4 ± 0.9 47.2 ± 0.5 59.3 ± 0.4 44.4 ± 0.4 52.8 MMD 60.6 ± 1.1 45.9 ± 0.3 57.8 ± 0.5 43.8 ± 1.2 52.0 DANN 55.2 ± 1.9 47.0 ± 0.7 57.2 ± 0.9 42.9 ± 0.9 50.6 CDANN 56.3 ± 2.0 47.1 ± 0.9 57.2 ± 1.1 42.4 ± 0.8 50.8 MTL 58.4 ± 2.1 48.4 ± 0.8 58.9 ± 0.6 43.0 ± 1.3 52.2 SagNet 56.4 ± 1.9 50.5 ± 2.3 59.1 ± 0.5 44.1 ± 0.6 52.5 ARM 60.1 ± 1.5 48.3 ± 1.6 55.3 ± 0.6 40.9 ± 1.1 51.2 VREx 56.8 ± 1.7 46.5 ± 0.5 58.4 ± 0.3 43.8 ± 0.3 51.4 RSC 59.9 ± 1.4 46.7 ± 0.4 57.8 ± 0.5 44.3 ± 0.6 52.1" }, { "heading": "C.3.7 DOMAINNET", "text": "Algorithm clip info paint quick real sketch Avg ERM 58.6 ± 0.3 19.2 ± 0.2 47.0 ± 0.3 13.2 ± 0.2 59.9 ± 0.3 49.8 ± 0.4 41.3 IRM 40.4 ± 6.6 12.1 ± 2.7 31.4 ± 5.7 9.8 ± 1.2 37.7 ± 9.0 36.7 ± 5.3 28.0 GroupDRO 47.2 ± 0.5 17.5 ± 0.4 34.2 ± 0.3 9.2 ± 0.4 51.9 ± 0.5 40.1 ± 0.6 33.4 Mixup 55.6 ± 0.1 18.7 ± 0.4 45.1 ± 0.5 12.8 ± 0.3 57.6 ± 
0.5 48.2 ± 0.4 39.6 MLDG 59.3 ± 0.1 19.6 ± 0.2 46.8 ± 0.2 13.4 ± 0.2 60.1 ± 0.4 50.4 ± 0.3 41.6 CORAL 59.2 ± 0.1 19.9 ± 0.2 47.4 ± 0.2 14.0 ± 0.4 59.8 ± 0.2 50.4 ± 0.4 41.8 MMD 32.2 ± 13.3 11.2 ± 4.5 26.8 ± 11.3 8.8 ± 2.2 32.7 ± 13.8 29.0 ± 11.8 23.5 DANN 53.1 ± 0.2 18.3 ± 0.1 44.2 ± 0.7 11.9 ± 0.1 55.5 ± 0.4 46.8 ± 0.6 38.3 CDANN 54.6 ± 0.4 17.3 ± 0.1 44.2 ± 0.7 12.8 ± 0.2 56.2 ± 0.4 45.9 ± 0.5 38.5 MTL 58.0 ± 0.4 19.2 ± 0.2 46.2 ± 0.1 12.7 ± 0.2 59.9 ± 0.1 49.0 ± 0.0 40.8 SagNet 57.7 ± 0.3 19.1 ± 0.1 46.3 ± 0.5 13.5 ± 0.4 58.9 ± 0.4 49.5 ± 0.2 40.8 ARM 49.6 ± 0.4 16.5 ± 0.3 41.5 ± 0.8 10.8 ± 0.1 53.5 ± 0.3 43.9 ± 0.4 36.0 VREx 43.3 ± 4.5 14.1 ± 1.8 32.5 ± 5.0 9.8 ± 1.1 43.5 ± 5.6 37.7 ± 4.5 30.1 RSC 55.0 ± 1.2 18.3 ± 0.5 44.4 ± 0.6 12.5 ± 0.1 55.7 ± 0.7 47.8 ± 0.9 38.9" }, { "heading": "C.3.8 AVERAGES", "text": "" }, { "heading": "Algorithm ColoredMNIST RotatedMNIST VLCS PACS OfficeHome TerraIncognita DomainNet Avg", "text": "ERM 57.8 ± 0.2 97.8 ± 0.1 77.6 ± 0.3 86.7 ± 0.3 66.4 ± 0.5 53.0 ± 0.3 41.3 ± 0.1 68.7 IRM 67.7 ± 1.2 97.5 ± 0.2 76.9 ± 0.6 84.5 ± 1.1 63.0 ± 2.7 50.5 ± 0.7 28.0 ± 5.1 66.9 GroupDRO 61.1 ± 0.9 97.9 ± 0.1 77.4 ± 0.5 87.1 ± 0.1 66.2 ± 0.6 52.4 ± 0.1 33.4 ± 0.3 67.9 Mixup 58.4 ± 0.2 98.0 ± 0.1 78.1 ± 0.3 86.8 ± 0.3 68.0 ± 0.2 54.4 ± 0.3 39.6 ± 0.1 69.0 MLDG 58.2 ± 0.4 97.8 ± 0.1 77.5 ± 0.1 86.8 ± 0.4 66.6 ± 0.3 52.0 ± 0.1 41.6 ± 0.1 68.7 CORAL 58.6 ± 0.5 98.0 ± 0.0 77.7 ± 0.2 87.1 ± 0.5 68.4 ± 0.2 52.8 ± 0.2 41.8 ± 0.1 69.2 MMD 63.3 ± 1.3 98.0 ± 0.1 77.9 ± 0.1 87.2 ± 0.1 66.2 ± 0.3 52.0 ± 0.4 23.5 ± 9.4 66.9 DANN 57.0 ± 1.0 97.9 ± 0.1 79.7 ± 0.5 85.2 ± 0.2 65.3 ± 0.8 50.6 ± 0.4 38.3 ± 0.1 67.7 CDANN 59.5 ± 2.0 97.9 ± 0.0 79.9 ± 0.2 85.8 ± 0.8 65.3 ± 0.5 50.8 ± 0.6 38.5 ± 0.2 68.2 MTL 57.6 ± 0.3 97.9 ± 0.1 77.7 ± 0.5 86.7 ± 0.2 66.5 ± 0.4 52.2 ± 0.4 40.8 ± 0.1 68.5 SagNet 58.2 ± 0.3 97.9 ± 0.0 77.6 ± 0.1 86.4 ± 0.4 67.5 ± 0.2 52.5 ± 0.4 40.8 ± 0.2 68.7 ARM 63.2 ± 0.7 98.1 ± 0.1 77.8 ± 0.3 85.8 ± 0.2 64.8 ± 0.4 51.2 ± 0.5 36.0 ± 0.2 68.1 VREx 67.0 ± 1.3 97.9 ± 0.1 78.1 ± 0.2 87.2 ± 0.6 65.7 ± 0.3 51.4 ± 0.5 30.1 ± 3.7 68.2 RSC 58.5 ± 0.5 97.6 ± 0.1 77.8 ± 0.6 86.2 ± 0.5 66.5 ± 0.6 52.1 ± 0.2 38.9 ± 0.6 68.2" }, { "heading": "C.4 RESULTS OF A LARGER PACS SWEEP WITH 100 HYPERPARAMETER TRIALS", "text": "ERM, model selection: A C P S Avg training-domain 86.6 ± 0.8 79.7 ± 0.6 96.6 ± 0.4 77.8 ± 0.8 85.2 leave-one-out-domain 86.4 ± 1.1 78.2 ± 1.0 96.8 ± 0.2 76.0 ± 2.1 84.4 test-domain (oracle) 89.3 ± 0.3 82.2 ± 0.5 97.6 ± 0.2 82.7 ± 1.1 88.0" } ]
2021
In Search of Lost Domain Generalization
SP:04abdf6d039513f23e00e6686832cd4b950f1d75
[ "This work proposes a specific parametrisation for the Gaussian prior and approximate posterior distribution in variational Bayesian neural networks in terms of inducing weights. The general idea is an instance of the sparse variational inference scheme for GPs proposed by Titsias back in 2009; for a given model with a prior p(W) perform variational inference on an extended model with a hierarchical prior p(U) p(W | U), that has the same marginal p(W) = \\int p(U)p(W | U)dU as the original model. The authors then consider “U” to be auxiliary weights that are jointly Gaussian with the actual weights “W” and then use the decomposition p(W|U)p(U), q(W|U)q(U) for the prior and approximate posterior (which can easily be computed via the conditional Gaussian rules). Furthermore, they “tie” (almost) all of the parameters between q(W|U) and p(W|U) (similarly to Titsias, 2009). The main benefit from these two things is that since the mean and covariance of the Gaussian distribution over W conditioned on U can be efficiently represented as functions of U, whenever dim(U) << dim(W) we get reductions in memory for storing the distributions over the parameters in the network. The authors furthermore, discuss how to efficiently parametrize the joint distribution over W, U, discuss different choices for q(U) (that can lead to either traditional VI or something like deep ensembles). In addition, they also discuss how more efficient sampling from q(W|U) can be realised via an extension of the Matheron’s rule to the case of matrix random variables. Finally, they evaluate their method against traditional mean field variational Bayesian neural networks and deep ensembles on several tasks that include regression, classification, calibration and OOD performance. ", "This paper proposes a method for reducing the storage cost of BNNs. The idea is to apply the inducing point method and fast Matheron's sampling rule, commonly used in GPs, to standard variational BNNs. Experiments show that the resulting BNNs stay competitive to standard BNNs while enjoying up to 75% reduction in parameter size.", "The paper proposes to represent uncertainty in neural networks by augmenting neural network layers with inducing weights and then assuming that the resulting vector of weights is drawn from a matrix normal distribution which is structured so that it has fewer parameters than the original network. The parameters of the distribution are learned via variational inference and the authors present a procedure to sample efficiently from this distribution for optimizing the model parameters and for inference at test time. Experiments on classification and out of distribution detection are presented to validate the approach.", "The paper proposes a parameter-efficient method for uncertanity quantification in deep neural network using per layer inducing weights. The paper develops a network weight sampling scheme based on inducing weights. The proposed method employs Matheron's rule for efficient sampling. The method requires fewer parameters than even deterministic networks, while keeping good calibration and performance, at the cost of longer compute times, when few MC weight samples are used. ", "This paper proposes a layer-wise variational inference scheme for Bayesian neural networks that uses \"inducing weights\" for each layer before up-sampling to the full (layer-wise) weight distribution. The scheme is equivalent to a hierarchical prior formulation with an analogous hierarchical variational posterior. 
Experiments are performed on the standard BDL benchmarks of synthetic one-dimensional regression, ResNets on CIFAR-100, and out-of-distribution detection." ]
Bayesian Neural Networks and deep ensembles represent two modern paradigms of uncertainty quantification in deep learning. Yet these approaches struggle to scale, mainly due to memory inefficiency: they require parameter storage several times that of their deterministic counterparts. To address this, we augment each weight matrix with a small inducing weight matrix, projecting the uncertainty quantification into a lower-dimensional space. We further extend Matheron’s conditional Gaussian sampling rule to enable fast weight sampling, which allows our inference method to maintain reasonable run-time compared with ensembles. Importantly, our approach achieves performance competitive with the state-of-the-art in prediction and uncertainty estimation tasks with fully connected neural networks and ResNets, while reducing the parameter size to ≤ 24.3% of that of a single neural network.
[ { "affiliations": [], "name": "Hippolyt Ritter" }, { "affiliations": [], "name": "Martin Kukla" }, { "affiliations": [], "name": "Cheng Zhang" }, { "affiliations": [], "name": "Yingzhen Li" } ]
[ { "authors": [ "F.V. Agakov", "D. Barber" ], "title": "An auxiliary variational method", "venue": "In ICONIP,", "year": 2019 }, { "authors": [ "E. Bingham", "J.P. Chen", "M. Jankowiak", "F. Obermeyer", "N. Pradhan", "T. Karaletsos", "R. Singh", "P. Szerlip", "P. Horsfall", "N.D. Goodman" ], "title": "Pyro: Deep universal probabilistic programming", "venue": null, "year": 2019 }, { "authors": [ "C. Blundell", "J. Cornebise", "K. Kavukcuoglu", "D. Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "In ICML,", "year": 2015 }, { "authors": [ "J. Bradshaw", "Matthews", "A.G. d. G", "Z. Ghahramani" ], "title": "Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks", "venue": "arXiv preprint arXiv:1707.02476,", "year": 2017 }, { "authors": [ "T.B. Brown", "B. Mann", "N. Ryder", "M. Subbiah", "J. Kaplan", "P. Dhariwal", "A. Neelakantan", "P. Shyam", "G. Sastry", "A Askell" ], "title": "Language models are few-shot learners", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "B.P. Carlin", "S. Chib" ], "title": "Bayesian model choice via Markov chain Monte Carlo methods", "venue": "JRSS B,", "year": 1995 }, { "authors": [ "E. Daxberger", "E. Nalisnick", "J.U. Allingham", "J. Antorán", "J.M. Hernández-Lobato" ], "title": "Expressive yet tractable Bayesian deep learning via subnetwork inference", "venue": "In ICML,", "year": 2021 }, { "authors": [ "W. Deng", "X. Zhang", "F. Liang", "G. Lin" ], "title": "An adaptive empirical bayesian method for sparse deep learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "A. Doucet" ], "title": "A note on efficient conditional simulation of Gaussian distributions", "venue": "Technical report, University of British Columbia,", "year": 2010 }, { "authors": [ "M.W. Dusenberry", "G. Jerfel", "Y. Wen", "Ma", "Y.-a", "J. Snoek", "K. Heller", "B. Lakshminarayanan", "D. Tran" ], "title": "Efficient and scalable Bayesian neural nets with rank-1 factors", "venue": "In ICML,", "year": 2020 }, { "authors": [ "A.Y. Foong", "Y. Li", "J.M. Hernández-Lobato", "R.E. Turner" ], "title": "In-between’ uncertainty in Bayesian neural networks", "venue": null, "year": 1906 }, { "authors": [ "J. Frankle", "M. Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Y. Gal", "Z. Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "S. Ghosh", "J. Yao", "F. Doshi-Velez" ], "title": "Model selection in Bayesian neural networks via horseshoe priors", "venue": null, "year": 2019 }, { "authors": [ "A. Graves" ], "title": "Practical variational inference for neural networks", "venue": "In NeurIPS,", "year": 2011 }, { "authors": [ "C. Guo", "G. Pleiss", "Y. Sun", "K.Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "A.K. Gupta", "D.K. Nagar" ], "title": "Matrix variate distributions", "venue": null, "year": 2018 }, { "authors": [ "S. Han", "H. Mao", "W.J. Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "M. Havasi", "R. Peharz", "J.M. Hernández-Lobato" ], "title": "Minimal random code learning: Getting bits back from compressed model parameters", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "K. 
He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "D. Hendrycks", "T. Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "G.E. Hinton", "D. Van Camp" ], "title": "Keeping the neural networks simple by minimizing the description length of the weights", "venue": "In COLT,", "year": 1993 }, { "authors": [ "M.D. Hoffman", "A. Gelman" ], "title": "The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo", "venue": null, "year": 2014 }, { "authors": [ "M.D. Hoffman", "D.M. Blei", "C. Wang", "J. Paisley" ], "title": "Stochastic variational inference", "venue": null, "year": 2013 }, { "authors": [ "Y. Hoffman", "E. Ribak" ], "title": "Constrained realizations of Gaussian fields-a simple algorithm", "venue": "ApJ,", "year": 1991 }, { "authors": [ "G. Huang", "Z. Liu", "L. Van Der Maaten", "K.Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "P. Izmailov", "W.J. Maddox", "P. Kirichenko", "T. Garipov", "D. Vetrov", "A.G. Wilson" ], "title": "Subspace inference for Bayesian deep learning", "venue": null, "year": 2019 }, { "authors": [ "M.I. Jordan", "Z. Ghahramani", "T.S. Jaakkola", "L.K. Saul" ], "title": "An introduction to variational methods for graphical models", "venue": "Machine Learning,", "year": 1999 }, { "authors": [ "T. Karaletsos", "T.D. Bui" ], "title": "Hierarchical Gaussian process priors for Bayesian neural network weights", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "A. Kendall", "Y. Gal" ], "title": "What uncertainties do we need in Bayesian deep learning for computer vision", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "M.E.E. Khan", "A. Immer", "E. Abedi", "M. Korzepa" ], "title": "Approximate inference turns deep networks into Gaussian processes", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "B. Lakshminarayanan", "A. Pritzel", "C. Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "J. Lee", "J. Sohl-Dickstein", "J. Pennington", "R. Novak", "S. Schoenholz", "Y. Bahri" ], "title": "Deep neural networks as Gaussian processes", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "N. Lee", "T. Ajanthan", "P. Torr" ], "title": "SNIP: Single-shot network pruning based on connection sensitivity", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "F. Leibfried", "V. Dutordoir", "S. John", "N. Durrande" ], "title": "A tutorial on sparse Gaussian processes and variational inference", "venue": "arXiv preprint arXiv:2012.13962,", "year": 2020 }, { "authors": [ "C. Louizos", "M. Welling" ], "title": "Multiplicative normalizing flows for variational Bayesian neural networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "C. Louizos", "K. Ullrich", "M. 
Welling" ], "title": "Bayesian compression for deep learning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "C. Ma", "Y. Li", "J.M. Hernández-Lobato" ], "title": "Variational implicit processes", "venue": "In ICML,", "year": 2019 }, { "authors": [ "D.J. MacKay" ], "title": "Bayesian neural networks and density networks", "venue": "NIMPR A,", "year": 1995 }, { "authors": [ "Matthews", "A.G. d. G", "J. Hron", "M. Rowland", "R.E. Turner", "Z. Ghahramani" ], "title": "Gaussian process behaviour in wide deep neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "A. Mishkin", "F. Kunstner", "D. Nielsen", "M. Schmidt", "M.E. Khan" ], "title": "SLANG: Fast structured covariance approximations for Bayesian deep learning with natural gradient", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "R.M. Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": "PhD thesis, University of Toronto,", "year": 1995 }, { "authors": [ "S.W. Ober", "L. Aitchison" ], "title": "Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes", "venue": "In ICML,", "year": 2021 }, { "authors": [ "Y. Ovadia", "E. Fertig", "J. Ren", "Z. Nado", "D. Sculley", "S. Nowozin", "J. Dillon", "B. Lakshminarayanan", "J. Snoek" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "A. Paszke", "S. Gross", "F. Massa", "A. Lerer", "J. Bradbury", "G. Chanan", "T. Killeen", "Z. Lin", "N. Gimelshein", "L Antiga" ], "title": "PyTorch: An imperative style, high-performance deep learning library", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "R. Ranganath", "D. Tran", "D. Blei" ], "title": "Hierarchical variational models", "venue": "In ICML,", "year": 2016 }, { "authors": [ "H. Ritter", "A. Botev", "D. Barber" ], "title": "A scalable Laplace approximation for neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "T. Salimans", "D. Kingma", "M. Welling" ], "title": "Markov chain Monte Carlo and variational inference: Bridging the gap", "venue": "In ICML,", "year": 2015 }, { "authors": [ "H. Salimbeni", "M. Deisenroth" ], "title": "Doubly stochastic variational inference for deep Gaussian processes", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "E. Snelson", "Z. Ghahramani" ], "title": "Sparse Gaussian processes using pseudo-inputs", "venue": "In NeurIPS,", "year": 2006 }, { "authors": [ "O. Stegle", "C. Lippert", "J.M. Mooij", "N.D. Lawrence", "K. Borgwardt" ], "title": "Efficient inference in matrix-variate gaussian models with iid observation noise", "venue": "NeurIPS,", "year": 2011 }, { "authors": [ "S. Sun", "G. Zhang", "J. Shi", "R. Grosse" ], "title": "Functional variational Bayesian neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "J. Świątkowski", "K. Roth", "B.S. Veeling", "L. Tran", "J.V. Dillon", "S. Mandt", "J. Snoek", "T. Salimans", "R. Jenatton", "S. Nowozin" ], "title": "The k-tied Normal distribution: A compact parameterization of Gaussian mean field posteriors in Bayesian neural networks", "venue": null, "year": 2020 }, { "authors": [ "R. Tanno", "D.E. Worrall", "A. Ghosh", "E. Kaden", "S.N. Sotiropoulos", "A. Criminisi", "D.C. Alexander" ], "title": "Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution", "venue": "In MICCAI,", "year": 2017 }, { "authors": [ "M.E. Tipping", "C.M. 
Bishop" ], "title": "Probabilistic principal component analysis", "venue": "JRSS B,", "year": 1999 }, { "authors": [ "M. Titsias" ], "title": "Variational learning of inducing variables in sparse Gaussian processes", "venue": "In AISTATS,", "year": 2009 }, { "authors": [ "M. Titsias", "M. Lázaro-Gredilla" ], "title": "Doubly stochastic variational Bayes for non-conjugate inference", "venue": "In ICML,", "year": 2014 }, { "authors": [ "M. Welling", "Y.W. Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Y. Wen", "D. Tran", "J. Ba" ], "title": "Batchensemble: an alternative approach to efficient ensemble and lifelong learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "J.T. Wilson", "V. Borovitskiy", "A. Terenin", "P. Mostowsky", "M.P. Deisenroth" ], "title": "Efficiently sampling functions from Gaussian process posteriors", "venue": "In ICML,", "year": 2020 }, { "authors": [ "H. Xu", "B. Liu", "L. Shu", "P. Yu" ], "title": "BERT post-training for review reading comprehension and aspect-based sentiment analysis", "venue": null, "year": 2019 }, { "authors": [ "G. Zhang", "S. Sun", "D. Duvenaud", "R. Grosse" ], "title": "Noisy natural gradient as variational inference", "venue": "In ICML,", "year": 2018 }, { "authors": [ "R. Zhang", "C. Li", "J. Zhang", "C. Chen", "A.G. Wilson" ], "title": "Cyclical stochastic gradient MCMC for Bayesian deep learning", "venue": "In ICLR,", "year": 2020 } ]
[ { "heading": "1 Introduction", "text": "Deep learning models are becoming deeper and wider than ever before. From image recognition models such as ResNet-101 (He et al., 2016a) and DenseNet (Huang et al., 2017) to BERT (Xu et al., 2019) and GPT-3 (Brown et al., 2020) for language modelling, deep neural networks have found consistent success in fitting large-scale data. As these models are increasingly deployed in real-world applications, calibrated uncertainty estimates for their predictions become crucial, especially in safety-critical areas such as healthcare. In this regard, Bayesian Neural Networks (BNNs) (MacKay, 1995; Blundell et al., 2015; Gal & Ghahramani, 2016; Zhang et al., 2020) and deep ensembles (Lakshminarayanan et al., 2017) represent two popular paradigms for estimating uncertainty, which have shown promising results in applications such as (medical) image processing (Kendall & Gal, 2017; Tanno et al., 2017) and out-of-distribution detection (Ovadia et al., 2019).\nThough progress has been made, one major obstacle to scaling up BNNs and deep ensembles is their high storage cost. Both approaches require the parameter counts to be several times higher than their deterministic counterparts. Although recent efforts have improved memory efficiency (Louizos & Welling, 2017; Świątkowski et al., 2020; Wen et al., 2020; Dusenberry et al., 2020), these still use more parameters than a deterministic neural network. This is particularly problematic in hardware-constrained edge devices, when on-device storage is required due to privacy regulations.\nMeanwhile, an infinitely wide BNN becomes a Gaussian process (GP) that is known for good uncertainty estimates (Neal, 1995; Matthews et al., 2018; Lee et al., 2018). But perhaps surprisingly, this infinitely wide BNN is “parameter efficient”, as its “parameters” are effectively the datapoints, which have a considerably smaller memory footprint than explicitly storing the network weights. In addition, sparse posterior approximations store a smaller number of inducing points instead (Snelson & Ghahramani, 2006; Titsias, 2009), making sparse GPs even more memory efficient.\n∗Work done at Microsoft Research Cambridge.\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nCan we bring the advantages of sparse approximations in GPs — which are infinitely-wide neural networks — to finite width deep learning models? We provide an affirmative answer regarding memory efficiency, by proposing an uncertainty quantification framework based on sparse uncertainty representations. We present our approach in BNN context, but the proposed approach is also applicable to deep ensembles. In detail, our contributions are as follows:\n• We introduce inducing weights — an auxiliary variable method with lower dimensional counterparts to the actual weight matrices — for variational inference in BNNs, as well as a memory efficient parameterisation and an extension to ensemble methods (Section 3.1).\n• We extend Matheron’s rule to facilitate efficient posterior sampling (Section 3.2). • We provide an in-depth computation complexity analysis (Section 3.3), showing the signifi-\ncant advantage in terms of parameter efficiency. • We show the connection to sparse (deep) GPs, in that inducing weights can be viewed as\nprojected noisy inducing outputs in pre-activation output space (Section 5.1). • We apply the proposed approach to BNNs and deep ensembles. 
Experiments in classification,\nmodel robustness and out-of-distribution detection tasks show that our inducing weight approaches achieve competitive performance to their counterparts in the original weight space on modern deep architectures for image classification, while reducing the parameter count to ≤ 24.3% of that of a single network.\nWe open-source our proposed inducing weight approach, together with baseline methods reported in the experiments, as a PyTorch (Paszke et al., 2019) wrapper named bayesianize: https: //github.com/microsoft/bayesianize. As demonstrated in Appendix I, our software makes the conversion of a deterministic neural network to a Bayesian one with a few lines of code: import bnn # our pytorch wrapper package net = torchvision.models.resnet18() # construct a deterministic ResNet18 bnn.bayesianize_(net, inference=\"inducing\") # convert it into a Bayesian one" }, { "heading": "2 Inducing variables for variational inference", "text": "Our work is built on variational inference and inducing variables for posterior approximations. Given observations D = {X,Y} with X = [x1, ...,xN ], Y = [y1, ...,yN ], we would like to fit a neural network p(y|x,W1:L) with weights W1:L to the data. BNNs posit a prior distribution p(W1:L) over the weights, and construct an approximate posterior q(W1:L) to the exact posterior p(W1:L|D) ∝ p(D|W1:L)p(W1:L), where p(D|W1:L) = p(Y|X,W1:L) = ∏N n=1 p(yn|xn,W1:L).\nVariational inference Variational inference (Hinton & Van Camp, 1993; Jordan et al., 1999; Zhang et al., 2018a) constructs an approximation q(θ) to the posterior p(θ|D) ∝ p(θ)p(D|θ) by maximising a variational lower-bound:\nlog p(D) ≥ L(q(θ)) := Eq(θ) [log p(D|θ)]−KL [q(θ)||p(θ)] . (1)\nFor BNNs, θ = {W1:L}, and a simple choice of q is a Fully-factorized Gaussian (FFG): q(W1:L) = ∏L l=1 ∏dlout i=1 ∏dlin j=1N (m (i,j) l , v (i,j) l ), withm (i,j) l , v (i,j) l the mean and variance of W (i,j) l and dlin, d l out the respective number of inputs and outputs to layer l. The variational parameters are then φ = {m(i,j)l , v (i,j) l }Ll=1. Gradients of L w.r.t. φ can be estimated with mini-batches of data (Hoffman et al., 2013) and with Monte Carlo sampling from the q distribution (Titsias & LázaroGredilla, 2014; Kingma & Welling, 2014). By setting q to an BNN, a variational BNN can be trained with similar computational requirements as a deterministic network (Blundell et al., 2015).\nImproved posterior approximation with inducing variables Auxiliary variable approaches (Agakov & Barber, 2004; Salimans et al., 2015; Ranganath et al., 2016) construct the q(θ) distribution with an auxiliary variable a: q(θ) = ∫ q(θ|a)q(a)da, with the hope that a potentially richer mixture distribution q(θ) can achieve better approximations. As then q(θ) becomes intractable, an auxiliary variational lower-bound is used to optimise q(θ,a) (see Appendix B):\nlog p(D) ≥ L(q(θ,a)) = Eq(θ,a)[log p(D|θ)] + Eq(θ,a) [ log\np(θ)r(a|θ) q(θ|a)q(a)\n] . (2)\nHere r(a|θ) is an auxiliary distribution that needs to be specified, where existing approaches often use a “reverse model” for r(a|θ). Instead, we define r(a|θ) in a generative manner: r(a|θ) is the “posterior” of the following “generative model”, whose “evidence” is exactly the prior of θ:\nr(a|θ) = p̃(a|θ) ∝ p̃(a)p̃(θ|a), such that p̃(θ) := ∫ p̃(a)p̃(θ|a)da = p(θ). (3)\nPlugging Eq. (3) into Eq. (2):\nL(q(θ,a)) = Eq(θ)[log p(D|θ)]− Eq(a) [KL[q(θ|a)||p̃(θ|a)]]−KL[q(a)||p̃(a)]. 
(4) This approach yields an efficient approximate inference algorithm, translating the complexity of inference in θ to a. If dim(a) < dim(θ) and q(θ,a) = q(θ|a)q(a) has the following properties:\n1. A “pseudo prior” p̃(a)p̃(θ|a) is defined such that ∫ p̃(a)p̃(θ|a)da = p(θ);\n2. The conditionals q(θ|a) and p̃(θ|a) are in the same parametric family, so can share parameters; 3. Both sampling θ ∼ q(θ) and computing KL[q(θ|a)||p̃(θ|a)] can be done efficiently; 4. The designs of q(a) and p̃(a) can potentially provide extra advantages (in time and space\ncomplexities and/or optimisation easiness).\nWe call a the inducing variable of θ, which is inspired by variationally sparse GP (SVGP) with inducing points (Snelson & Ghahramani, 2006; Titsias, 2009). Indeed SVGP is a special case (see Appendix C): θ = f , a = u, the GP prior is p(f |X) = GP(0,KXX), p(u) = GP(0,KZZ), p̃(f ,u) = p(u)p(f |X,u), q(f |u) = p(f |X,u), q(f ,u) = p(f |X,u)q(u), and Z are the optimisable inducing inputs. The variational lower-bound is L(q(f ,u)) = Eq(f)[log p(Y|f)]−KL[q(u)||p(u)], and the variational parameters are φ = {Z, distribution parameters of q(u)}. SVGP satisfies the marginalisation constraint Eq. (3) by definition, and it has KL[q(f |u)||p̃(f |u)] = 0. Also by using small M = dim(u) and exploiting the q distribution design, SVGP reduces run-time from O(N3) to O(NM2 + M3) where N is the number of inputs in X, meanwhile it also makes storing a full Gaussian q(u) affordable. Lastly, u can be whitened, leading to the “pseudo prior” p̃(f ,v) = p(f |X,u = K1/2ZZ v)p̃(v), p̃(v) = N (v; 0, I) which could bring potential benefits in optimisation. We emphasise that the introduction of “pseudo prior” does not change the probabilistic model as long as the marginalisation constraint Eq. (3) is satisfied. In the rest of the paper we assume the constraint Eq. (3) holds and write p(θ,a) := p̃(θ,a). It might seem unclear how to design such p̃(θ,a) for an arbitrary probabilistic model, however, for a Gaussian prior on θ the rules for computing conditional Gaussian distributions can be used to construct p̃. In Section 3 we exploit these rules to develop an efficient approximate inference method for Bayesian neural networks with inducing weights." }, { "heading": "3 Sparse uncertainty representation with inducing weights", "text": "" }, { "heading": "3.1 Inducing weights for neural network parameters", "text": "Following the above design principles, we introduce to each network layer l a smaller inducing weight matrix Ul to assist approximate posterior inference in Wl. Therefore in our context, θ = W1:L and a = U1:L. In the rest of the paper, we assume a factorised prior across layers p(W1:L) = ∏ l p(Wl), and drop the l indices when the context is clear to ease notation.\nAugmenting network layers with inducing weights Suppose the weight W ∈ Rdout×din has a Gaussian prior p(W) = p(vec(W)) = N (0, σ2I) where vec(W) concatenates the columns of the weight matrix into a vector. A first attempt to augment p(vec(W)) with an inducing weight variable U ∈ RMout×Min may be to construct a multivariate Gaussian p(vec(W), vec(U)), such that ∫ p(vec(W), vec(U))dU = N (0, σ2I). This means for the joint covariance matrix of (vec(W), vec(U)), it requires the block corresponding to the covariance of vec(W) to match the prior covariance σ2I . We are then free to parameterise the rest of the entries in the joint covariance matrix, as long as this full matrix remains positive definite. 
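As a concrete illustration of this freedom, the following toy NumPy sketch (sizes, seed and variable names are our own, not taken from the released code) parameterises the joint covariance through a block Cholesky factor, which keeps the matrix positive definite for any choice of the free parameters while leaving the W-marginal exactly equal to the prior:

import numpy as np

# Toy check of the marginalisation constraint: build a joint Gaussian over
# (vec(W), vec(U)) whose W-marginal is exactly the prior N(0, sigma^2 I).
d_w, d_u, sigma = 6, 2, 0.1
rng = np.random.default_rng(0)
Z = rng.normal(size=(d_u, d_w))          # free cross-covariance parameters
D = np.diag(rng.uniform(0.5, 1.0, d_u))  # free diagonal parameters

# Joint Cholesky factor L = [[sigma*I, 0], [Z, D]] guarantees that
# Sigma = L @ L.T is positive definite for any Z and D.
L = np.block([[sigma * np.eye(d_w), np.zeros((d_w, d_u))],
              [Z, D]])
Sigma = L @ L.T

# The W-block equals sigma^2 I, so integrating out U recovers p(vec(W)).
assert np.allclose(Sigma[:d_w, :d_w], sigma ** 2 * np.eye(d_w))
# The induced covariance of vec(U) is Z Z^T + D^2.
assert np.allclose(Sigma[d_w:, d_w:], Z @ Z.T + D @ D.T)

The same block-Cholesky trick reappears below in the matrix-normal construction, where it is applied separately to the row and column covariances.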
Now the conditional distribution p(W|U) is a function of these parameters, and the conditional sampling from p(W|U) is further discussed in Appendix D.1. Unfortunately, as dim(vec(W)) is typically large (e.g. of the order of 107), using a full covariance Gaussian for p(vec(W), vec(U)) becomes computationally intractable.\nWe address this issue with matrix normal distributions (Gupta & Nagar, 2018). The prior p(vec(W)) = N (0, σ2I) has an equivalent matrix normal distribution form as p(W) =\nMN (0, σ2rI, σ2cI), with σr, σc > 0 the row and column standard deviations satisfying σ = σrσc. Now we introduce the inducing variable U in matrix space, as well as two auxiliary variables Ur ∈ RMout×din , Uc ∈ Rdout×Min , so that the full augmented prior is:(\nW Uc Ur U\n) ∼ p(W,Uc,Ur,U) :=MN (0,Σr,Σc), (5)\nwith Lr = ( σrI 0 Zr Dr ) s.t. Σr = LrL>r = ( σ2rI σrZ > r σrZr ZrZ > r +D 2 r ) and Lc = ( σcI 0 Zc Dc ) s.t. Σc = LcL>c = ( σ2cI σcZ > c σcZc ZcZ > c +D 2 c ) .\nSee Fig. 1(a) for a visualisation of the augmentation. Matrix normal distributions have similar marginalisation and conditioning rules as multivariate Gaussian distributions, for which we provide further examples in Appendix D.2. Therefore the marginalisation constraint Eq. (3) is satisfied for any Zc ∈ RMin×din ,Zr ∈ RMout×dout and diagonal matrices Dc,Dr. For the inducing weight U we have p(U) = MN (0,Ψr,Ψc) with Ψr = ZrZ>r + D2r and Ψc = ZcZ>c + D2c . In the experiments we use whitened inducing weights which transforms U so that p(U) =MN (0, I, I) (Appendix H), but for clarity we continue with the above formulas in the main text.\nThe matrix normal parameterisation introduces two additional variables Ur,Uc without providing additional expressiveness. Hence it is desirable to integrate them out, leading to a joint multivariate normal with Khatri-Rao product structure for the covariance:\np(vec(W), vec(U)) = N ( 0, ( σ2cI ⊗ σ2rI σcZ>c ⊗ σrZ>r σcZc ⊗ σrZr Ψc ⊗Ψr )) . (6)\nAs the dominating memory complexity here isO(doutMout+dinMin) which comes from storingZr and Zc, we see that the matrix normal parameterisation of the augmented prior is memory efficient.\nPosterior approximation in the joint space We construct a factorised posterior approximation across the layers: q(W1:L,U1:L) = ∏ l q(Wl|Ul)q(Ul). Below we discuss options for q(W|U).\nThe simplest option is q(W|U) = p(vec(W)| vec(U)) = N (µW|U,ΣW|U), similar to sparse GPs. A slightly more flexible variant adds a rescaling term λ2 to the covariance matrix, which allows efficient KL computation (Appendix E):\nq(W|U) = q(vec(W)| vec(U)) = N (µW|U, λ2ΣW|U), (7)\nR(λ) := KL [q(W|U)||p(W|U)] = dindout(0.5λ2 − log λ− 0.5). (8)\nPlugging θ = W1:L, a = U1:L and Eq. (8) into Eq. (4) returns the following variational lower-bound L(q(W1:L,U1:L)) = Eq(W1:L)[log p(D|W1:L)] − ∑L\nl=1 (R(λl) + KL[q(Ul)||p(Ul)]), (9)\nwith λl the associated scaling parameter for q(Wl|Ul). Again as the choices of Zc,Zr,Dc,Dr do not change the marginal prior p(W), we are safe to optimise them as well. Therefore the variational parameters are now φ = {Zc,Zr,Dc,Dr, λ, dist. params. of q(U)} for each layer.\nTwo choices of q(U) A simple choice is FFG q(vec(U)) = N (mu, diag(vu)), which performs mean-field inference in U space (c.f. Blundell et al., 2015), and here KL[q(U)||p(U)] has a closedform solution. Another choice is a “mixture of delta measures” q(U) = 1K ∑K k=1 δ(U = U\n(k)), i.e. 
we keep K distinct sets of parameters {U (k)1:L}Kk=1 in inducing space that are projected back into the original parameter space via the shared conditionals q(Wl|Ul) to obtain the weights. This approach can be viewed as constructing “deep ensembles” in U space, and we follow ensemble methods (e.g. Lakshminarayanan et al., 2017) to drop KL[q(U)||p(U)] in Eq. (9). Often U is chosen to have significantly lower dimensions than W, i.e. Min << din and Mout << dout. As q(W|U) and p(W|U) only differ in the covariance scaling constant, U can be regarded as a sparse representation of uncertainty for the network layer, as the major updates in (approximate) posterior belief is quantified by q(U)." }, { "heading": "3.2 Efficient sampling with extended Matheron’s rule", "text": "Computing the variational lower-bound Eq. (9) requires samples from q(W), which requires an efficient sampling procedure for q(W|U). Unfortunately, q(W|U) derived from Eq. (6) & Eq. (7) is not a matrix normal, so direct sampling is prohibitive. To address this challenge, we extend Matheron’s rule (Journel & Huijbregts, 1978; Hoffman & Ribak, 1991; Doucet, 2010) to efficiently sample from q(W|U), with derivations provided in Appendix F. The original Matheron’s rule applies to multivariate Gaussian distributions. As a running example, consider two vector-valued random variablesw, u with joint distribution p(w,u) = N (0,Σ). Then the conditional distribution p(w|u) = N (µw|u,Σw|u) is also Gaussian, and direct sampling from it requires decomposing the conditional covariance matrix Σw|u which can be costly. The main idea of Matheron’s rule is that we can transform a sample from the joint Gaussian to obtain a sample from the conditional distribution p(w|u) as follows:\nw = w̄ + ΣwuΣ −1 uu(u− ū), w̄, ū ∼ N (0,Σ), Σ = ( Σww Σwu Σuw Σuu ) . (10)\nOne can check the validity of Matheron’s rule by computing the mean and variance of w above:\nEw̄,ū[w] = ΣwuΣ−1uuu = µw|u, Vw̄,ū[w] = Σww −ΣwuΣ−1uuΣuw = Σw|u. It might seem counter-intuitive at first sight in that this rule requires samples from a higher dimensional space. However, in the case where decomposition/inversion of Σ and Σuu can be done efficiently, sampling from the joint Gaussian p(w,u) can be significantly cheaper than directly sampling from the conditional Gaussian p(w|u). This happens e.g. when Σ is directly parameterised by its Cholesky decomposition and dim(u) << dim(w), so that sampling w̄, ū ∼ N (0,Σ) is straight-forward, and computing Σ−1uu is significantly cheaper than decomposing Σw|u.\nUnfortunately, the original Matheron’s rule cannot be applied directly to sample from q(W|U). This is because q(W|U) = q(vec(W)| vec(U)) differs from p(vec(W)| vec(U)) only in the variance scaling λ, and for p(vec(W)| vec(U)), its joint distribution counter-part Eq. (6) does not have an efficient representation for the covariance matrix. Therefore a naive application of Matheron’s rule requires decomposing the covariance matrix of p(vec(W), vec(U)) which is even more expensive than direct conditional sampling. However, notice that for the joint distribution p(W,Uc,Ur,U) in an even higher dimensional space, the row and column covariance matrices Σr and Σc are parameterised by their Cholesky decompositions, so that sampling from this joint distribution can be done efficiently. 
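To make the running example concrete, here is a small NumPy check (toy dimensions and names, not code from the paper) that draws joint samples via the Cholesky factor of Σ and verifies that Matheron's transformation reproduces the analytic conditional moments of p(w|u):

import numpy as np

# Empirical check of the original Matheron's rule (Eq. 10) on a toy Gaussian.
rng = np.random.default_rng(1)
d_w, d_u, n = 5, 2, 200_000
A = rng.normal(size=(d_w + d_u, d_w + d_u))
Sigma = A @ A.T + np.eye(d_w + d_u)               # a valid joint covariance
S_ww, S_wu = Sigma[:d_w, :d_w], Sigma[:d_w, d_w:]
S_uw, S_uu = Sigma[d_w:, :d_w], Sigma[d_w:, d_w:]

u = rng.normal(size=d_u)                          # the value we condition on
L = np.linalg.cholesky(Sigma)
joint = (L @ rng.normal(size=(d_w + d_u, n))).T   # samples from N(0, Sigma)
w_bar, u_bar = joint[:, :d_w], joint[:, d_w:]

K = S_wu @ np.linalg.inv(S_uu)
w_cond = w_bar + (u - u_bar) @ K.T                # Matheron-transformed samples

print(np.abs(w_cond.mean(0) - K @ u).max())                 # ~0 up to MC error
print(np.abs(np.cov(w_cond.T) - (S_ww - K @ S_uw)).max())   # ~0 up to MC error

The only expensive objects here are the Cholesky factor of Σ and the inverse of the small Σuu, which is exactly the property that the extension for the matrix-normal joint p(W,Uc,Ur,U) exploits.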
This inspire us to extend the original Matheron’s rule for efficient sampling from q(W|U) (details in Appendix F, when λ = 1 it also applies to sampling from p(W|U)):\nW = λW̄ + σZ>r Ψ −1 r (U − λŪ)Ψ−1c Zc; W̄, Ū ∼ p(W̄, Ūc, Ūr, Ū) =MN (0,Σr,Σc).\n(11) Here W̄, Ū ∼ p(W̄, Ūc, Ūr, Ū) means we first sample W̄, Ūc, Ūr, Ū from the joint then drop Ūc, Ūr; in fact Ūc, Ūr are never computed, and the other samples W̄, Ū can be obtained by:\nW̄ = σE1, Ū = ZrE1Z > c + L̂rẼ2Dc +DrẼ3L̂ > c +DrE4Dc,\nE1 ∼MN (0, Idout , Idin); Ẽ2, Ẽ3,E4 ∼MN (0, IMout , IMin), (12) L̂r = chol(ZrZ>r ), L̂c = chol(ZcZ > c ).\nThe run-time cost isO(2M3out+2M3in+doutMoutMin+Mindoutdin) required by inverting Ψr,Ψc, computing L̂r, L̂c, and the matrix products. The extended Matheron’s rule is visualised in Fig. 1\nwith a comparison to the original Matheron’s rule for sampling from q(vec(W)| vec(U)). Note that the original rule requires joint sampling from Eq. (6) (i.e. sampling the white blocks in Fig. 1(b)) which has O((doutdin +MoutMin)3) cost. Therefore our recipe avoids inverting and multiplying big matrices, resulting in a significant speed-up for conditional sampling." }, { "heading": "3.3 Computational complexities", "text": "In Table 1 we report the complexity figures for two types of inducing weight approaches: FFG q(U) (FFG-U) and Delta mixture q(U) (Ensemble-U). Baseline approaches include: DeterministicW, variational inference with FFG q(W) (FFG-W, Blundell et al., 2015), deep ensemble in W (Ensemble-W, Lakshminarayanan et al., 2017), as well as parameter efficient approaches such as matrix-normal q(W) (Matrix-normal-W, Louizos & Welling (2017)), variational inference with k-tied FFG q(W) (k-tied FFG-W, Świątkowski et al. (2020)), and rank-1 BNN (Dusenberry et al., 2020). The gain in memory is significant for the inducing weight approaches, in fact with Min < din and Mout < dout the parameter storage requirement is smaller than a single deterministic neural network. The major overhead in run-time comes from the extended Matheron’s rule for sampling q(W|U). Some of the computations there are performed only once, and in our experiments we show that by using a relatively low-dimensional U and large batch-sizes, the overhead is acceptable." }, { "heading": "4 Experiments", "text": "We evaluate the inducing weight approaches on regression, classification and related uncertainty estimation tasks. The goal is to demonstrate competitive performance to popular W-space uncertainty estimation methods while using significantly fewer parameters. We acknowledge that existing parameter efficient approaches for uncertainty estimation (e.g. k-tied or rank-1 BNNs) have achieved close performance to deep ensembles. However, none of them reduces the parameter count to be smaller than that of a single network. Therefore we decide not to include these baselines and instead focus on comparing: (1) variational inference with FFG q(W) (FFG-W, Blundell et al., 2015) v.s. FFG q(U) (FFG-U, ours); (2) deep ensemble in W space (Ensemble-W, Lakshminarayanan et al., 2017) v.s. ensemble in U space (Ensemble-U, ours). Another baseline is training a deterministic neural network with maximum likelihood. Details and additional results are in Appendices J and K." }, { "heading": "4.1 Synthetic 1-D regression", "text": "The regression task follows Foong et al. (2019), which has two input clusters x1 ∼ U [−1,−0.7], x2 ∼ U [0.5, 1], and targets y ∼ N (cos(4x+ 0.8), 0.01). 
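A possible way to generate this synthetic dataset (sample counts and seed are our choices; the paper only fixes the input clusters and the noise model) is:

import numpy as np

rng = np.random.default_rng(0)
n_per_cluster = 50
x = np.concatenate([rng.uniform(-1.0, -0.7, n_per_cluster),   # first input cluster
                    rng.uniform(0.5, 1.0, n_per_cluster)])    # second input cluster
# Gaussian observation noise with variance 0.01 (standard deviation 0.1).
y = np.cos(4.0 * x + 0.8) + rng.normal(scale=0.1, size=x.shape)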
For reference we show the exact posterior results using the NUTS sampler (Hoffman & Gelman, 2014). The results are visualised in Fig. 2 with predictive mean in blue, and up to three standard deviations as shaded area. Similar to historical results, FFG-W fails to represent the increased uncertainty away from the data and in between clusters. While underestimating predictive uncertainty overall, FFG-U shows a small increase in predictive uncertainty away from the data. In contrast, a per-layer Full-covariance Gaussian (FCG) in both weight (FCG-W) and inducing space (FCG-U) as well as Ensemble-U better capture the increased predictive variance, although the mean function is more similar to that of FFG-W.\nTable 2: CIFAR in-distribution metrics (in %).\nCIFAR10 CIFAR100 Method Acc. ↑ ECE ↓ Acc. ↑ ECE ↓ Deterministic 94.72 4.46 75.73 19.69 Ensemble-W 95.90 1.08 79.33 6.51 FFG-W 94.13 0.50 74.44 4.24 FFG-U 94.40 0.64 75.37 2.29 Ensemble-U 94.94 0.45 75.97 1.12\n12 4 8 16 32 Number of samples\n0\n5\n10\nre la\ntiv e\nru n-\ntim e\nSpeed FFG-U (M=128, R18) FFG-U (M=128, R50) FFG-U (M=64, R18) FFG-U (M=64, R50)\nFFG-W (R18) FFG-W (R50)\n3264 128 256 M\n0\n0.5\n1\n2\nre la\ntiv e\npa ra\nm . c\nou nt\ndeterministic\nFFG-W\nacc=94.4% 94.7%\nFFG-U\nModel size (R50)\nFigure 3: Resnet run-times & model sizes." }, { "heading": "4.2 Classification and in-distribution calibration", "text": "As the core empirical evaluation, we train Resnet-50 models (He et al., 2016b) on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). To avoid underfitting issues with FFG-W, a useful trick is to set an upper limit σ2max on the variance of q(W) (Louizos & Welling, 2017). This trick is similarly applied to the U-space methods, where we cap λ ≤ λmax for q(W|U), and for FFG-U we also set σ2max for the variance of q(U). In convolution layers, we treat the 4D weight tensor W of shape (cout, cin, h, w) as a cout × cinhw matrix. We use U matrices of shape 64× 64 for all layers (i.e. M = Min = Mout = 64), except that for CIFAR-10 we set Mout = 10 for the last layer.\nIn Table 2 we report test accuracy and test expected calibration error (ECE) (Guo et al., 2017) as a first evaluation of the uncertainty estimates. Overall, Ensemble-W achieves the highest accuracy, but is not as well-calibrated as variational methods. For the inducing weight approaches, Ensemble-U outperforms FFG-U on both datasets; overall it performs the best on the more challenging CIFAR-100 dataset (close-to-Ensemble-W accuracy and lowest ECE). Tables 5 and 6 in Appendix K show that increasing the U dimensions to M = 128 improves accuracy but leads to slightly worse calibration.\nIn Fig. 3 we show prediction run-times for batch-size = 500 on an NVIDIA Tesla V100 GPU, relative to those of an ensemble of deterministic nets, as well as relative parameter sizes to a single ResNet-50. The extra run-times for the inducing methods come from computing the extended Matheron’s rule. However, as they can be calculated once and cached for drawing multiple samples, the overhead reduces to a small factor when using larger number of samples K, especially for the bigger Resnet-50. More importantly, when compared to a deterministic ResNet-50, the inducing weight models reduce the parameter count by over 75% (5, 710, 902 vs. 23, 520, 842) for M = 64.\nHyper-parameter choices We visualise in Fig. 
4 the accuracy and ECE results for computationally lighter inducing weight ResNet-18 models with different hyper-parameters (see Appendix J).\nPerformance in both metrics improves as the U matrix size M is increased (right-most panels), and the results for M = 64 and M = 128 are fairly similar. Also setting proper values for λmax, σmax is key to the improved results. The left-most panels show that with fixed σmax values (or Ensemble-U), the preferred conditional variance cap values λmax are fairly small (but still larger than 0 which corresponds to a point estimate for W given U). For σmax which controls variance in U space, we see from the top middle panel that the accuracy metric is fairly robust to σmax as long as λmax is not too large. But for ECE, a careful selection of σmax is required (bottom middle panel)." }, { "heading": "4.3 Robustness, out-of-distribution detection and pruning", "text": "To investigate the models’ robustness to distribution shift, we compute predictions on corrupted CIFAR datasets (Hendrycks & Dietterich, 2019) after training on clean data. Fig. 5 shows accuracy and ECE results for the ResNet-50 models. Ensemble-W is the most accurate model across skew intensities, while FFG-W, though performing well on clean data, returns the worst accuracy under perturbation. The inducing weight methods perform competitively to Ensemble-W with Ensemble-U being slightly more accurate than FFG-U as on the clean data. For ECE, FFG-U outperforms Ensemble-U and Ensemble-W, which are similarly calibrated. Interestingly, while the accuracy of FFG-W decays quickly as the data is perturbed more strongly, its ECE remains roughly constant.\nTable 3 further presents the utility of the maximum predicted probability for out-of-distribution (OOD) detection. The metrics are the area under the receiver operator characteristic (AUROC) and the precision-recall curve (AUPR). The inducing-weight methods perform similarly to Ensemble-W; all three outperform FFG-W and deterministic networks across the board.\nParameter pruning We further investigate pruning as a pragmatic alternative for more parameter-efficient inference. For FFG-U, we prune entries of the Z matrices, which contribute the largest number of parameters to the inducing methods, with the smallest magnitude. For FFG-W we follow Graves (2011) in setting different fractions of W to 0 depending on their variational mean-to-variance ratio and repeat the previous experiments after fine-tuning the distributions on the remaining variables. We stress that, unlike FFG-U, the FFG-W pruning corresponds to a post-hoc change of the probabilistic model and no longer performs inference in the original weight-space.\nFor FFG-W, pruning 90% of the parameters (leaving 20% of parameters as compared to its deterministic counterpart) worsens the ECE, in particular on CIFAR100, see Fig. 6. Further pruning to 1%\nworsens the accuracy and the OOD detection results as well. On the other hand, pruning 50% of the Z matrices for FFG-U reduces the parameter count to 13.2% of a deterministic net, at the cost of only slightly worse calibration. See Appendix K for the full results." }, { "heading": "5 Discussions", "text": "" }, { "heading": "5.1 A function-space perspective on inducing weights", "text": "Although the inducing weight approach performs approximate inference in weight space, we present in Appendix G a function-space inference perspective of the proposed method, showing its deep connections to sparse GPs. 
Our analysis considers the function-space behaviour of each network layer’s output and discusses the corresponding interpretations of the U variables and Z parameters.\nThe interpretations are visualised in Fig. 7. Similar to sparse GPs, in each layer, the Zc parameters can be viewed as the (transposed) inducing input locations which lie in the same space as the layer’s input. The Uc variables can also be viewed as the corresponding (noisy) inducing outputs that lie in the pre-activation space. Given that the output dimension dout can still be high (e.g. > 1000 in a fully connected layer), our approach performs further dimension reduction in a similar spirit as probabilistic PCA (Tipping & Bishop, 1999), which projects the column vectors of Uc to a lower-dimensional space. This returns the inducing weight variables U, and the projection parameters are {Zr,Dr}. Combining the two steps, it means the column vectors of U can be viewed as collecting the “noisy projected inducing outputs” whose corresponding “inducing inputs” are row vectors of Zc (see the red bars in Fig. 7).\nIn Appendix G we further derive the resulting variational objective from the function-space view, which is almost identical to Eq. (9), except for scaling coefficients on the R(λl) terms to account for the change in dimensionality from weight space to function space. This result nicely connects posterior inference in weight- and function-space." }, { "heading": "5.2 Related work", "text": "Parameter-efficient uncertainty quantification methods Recent research has proposed Gaussian posterior approximations for BNNs with efficient covariance structure (Ritter et al., 2018; Zhang et al., 2018b; Mishkin et al., 2018). The inducing weight approach differs from these in introducing structure via a hierarchical posterior with low-dimensional auxiliary variables. Another line of work reduces the memory overhead via efficient parameter sharing (Louizos & Welling, 2017; Wen et al., 2020; Świątkowski et al., 2020; Dusenberry et al., 2020). The third category of work considers a hybrid approach, where only a selective part of the neural network receives Bayesian treatments, and the other weights remain deterministic (Bradshaw et al., 2017; Daxberger et al., 2021). However, both types of approaches maintain a “mean parameter” for the weights, making the memory footprint at least that of storing a deterministic neural network. Instead, our approach shares parameters via the augmented prior with efficient low-rank structure, reducing the memory use compared to a deterministic network. In a similar spirit, Izmailov et al. (2019) perform inference in a d-dimensional sub-space obtained from PCA on weights collected from an SGD trajectory. But this approach does not leverage the layer-structure of neural networks and requires d× memory of a single network.\nNetwork pruning in uncertainty estimation context There is a large amount of existing research advocating network pruning approaches for parameter-efficient deep learning, e.g. see Han et al. (2016); Frankle & Carbin (2018); Lee et al. (2019). In this regard, mean-field VI approaches have also shown success in network pruning, but only in terms of maintaining a minimum accuracy level (Graves, 2011; Louizos et al., 2017; Havasi et al., 2019). To the best of our knowledge, our empirical study presents the first evaluation for VI-based pruning methods in maintaining uncertainty estimation quality. Deng et al. 
(2019) considers pruning BNNs with stochastic gradient Langevin dynamics (Welling & Teh, 2011) as the inference engine. The inducing weight approach is orthogonal to these BNN pruning approaches, as it leaves the prior on the network parameters intact, while the pruning\napproaches correspond to a post-hoc change of the probabilistic model to using a sparse weight prior. Indeed our parameter pruning experiments showed that our approach can be combined with network pruning to achieve further parameter efficiency improvements.\nSparse GP and function-space inference As BNNs and GPs are closely related (Neal, 1995; Matthews et al., 2018; Lee et al., 2018), recent efforts have introduced GP-inspired techniques to BNNs (Ma et al., 2019; Sun et al., 2019; Khan et al., 2019; Ober & Aitchison, 2021). Compared to weight-space inference, function-space inference is appealing as its uncertainty is more directly relevant for predictive uncertainty estimation. While the inducing weight approach performs computations in weight-space, Section 5.1 establishes the connection to function-space posteriors. Our approach is related to sparse deep GP methods with Uc having similar interpretations as inducing outputs in e.g. Salimbeni & Deisenroth (2017). The major difference is that U lies in a low-dimensional space, projected from the pre-activation output space of a network layer.\nThe original Matheron’s rule (Journel & Huijbregts, 1978; Hoffman & Ribak, 1991; Doucet, 2010) for sampling from conditional multivariate Gaussian distributions has recently been applied to speed-up sparse GP inference (Wilson et al., 2020, 2021). As explained in Section 3.2, direct application of the original rule to sampling W conditioned on U still incurs prohibitive cost as p(vec(W), vec(U)) does not have a convenient factorisation form. Our extended Matheron’s rule addresses this issue by exploiting the efficient factorisation structure of the joint matrix normal distribution p(W,Uc,Ur,U), reducing the dominating factor of computation cost from cubic (O(d3outd3in)) to linear (O(doutdin)). We expect this new rule to be useful for a wide range of models/applications beyond BNNs, such as matrix-variate Gaussian processes (Stegle et al., 2011).\nPriors on neural network weights Hierarchical priors for weights has also been explored (Louizos et al., 2017; Krueger et al., 2017; Atanov et al., 2019; Ghosh et al., 2019; Karaletsos & Bui, 2020). However, we emphasise that p̃(W,U) is a pseudo prior that is constructed to assist posterior inference rather than to improve model design. Indeed, parameters associated with the inducing weights are optimisable for improving posterior approximations. Our approach can be adapted to other priors, e.g. for a Horseshoe prior p(θ, ν) = p(θ|ν)p(ν) = N (θ; 0, ν2)C+(ν; 0, 1), the pseudo prior can be defined as p̃(θ, ν, a) = p̃(θ|ν, a)p̃(a)p(ν) such that ∫ p̃(θ|ν, a)p̃(a)da = p(θ|ν). In general, pseudo priors have found broader success in Bayesian computation (Carlin & Chib, 1995)." }, { "heading": "6 Conclusion", "text": "We have proposed a parameter-efficient uncertainty quantification framework for neural networks. It augments each of the network layer weights with a small matrix of inducing weights, and by extending Matheron’s rule to matrix-normal related distributions, maintains a relatively small run-time overhead compared with ensemble methods. 
Critically, experiments on prediction and uncertainty estimation tasks show the competence of the inducing weight methods to the state-of-the-art, while reducing the parameter count to under a quarter of a deterministic ResNet-50 before pruning. This represents a significant improvement over prior Bayesian and deep ensemble techniques, which so far have not managed to go below this threshold despite various attempts of matching it closely.\nSeveral directions are to be explored in the future. First, modelling correlations across layers might further improve the inference quality. We outline an initial approach leveraging inducing variables in Appendix H. Second, based on the function-space interpretation of inducing weights, better initialisation techniques can be inspired from the sparse GP and dimension reduction literature. Similarly, this interpretation might suggest other innovative pruning approaches for the inducing weight method, thereby achieving further memory savings. Lastly, the run-time overhead of our approach can be mitigated by a better design of the inducing weight structure as well as vectorisation techniques amenable to parallelised computation. Designing hardware-specific implementations of the inducing weight approach is also a viable alternative for such purposes." } ]
2022
Sparse Uncertainty Representation in Deep Learning with Inducing Weights
SP:4b4f70092c9fceabdc76c6ed5c5cf83c7791e119
[ "This paper proposes a hybrid-regressive machine translation (HRT) approach—combining autoregressive (AT) and non-autoregressive (NAT) translation paradigms: it first uses an AT model to generate a “gappy” sketch (every other token in a sentence), and then applies a NAT model to fill in the gaps with a single pass. As a result the AT part latency is roughly reduced by half compared to a full AT baseline. The AT and NAT models share a majority part of the parameters and can be trained jointly with a carefully designed curriculum learning procedure. Experiments on several MT benchmarks show that the proposed approach achieves speedup over the full AT baseline with comparable translation quality." ]
Although non-autoregressive translation models based on iterative refinement achieve performance comparable to their autoregressive counterparts with faster decoding, we empirically find that this acceleration relies heavily on a small batch size (e.g., 1) and on the computing device (e.g., GPU). By designing synthetic experiments, we highlight that the number of iterations can be significantly reduced when a good (partial) target context is provided. Inspired by this, we propose a two-stage translation prototype, Hybrid-Regressive Translation (HRT). HRT first generates a discontinuous sequence autoregressively by skipping ahead (e.g., making a prediction every k tokens, k > 1). Then, with the help of this partially deterministic target context, HRT fills in all the previously skipped tokens with one non-autoregressive iteration. Experimental results on WMT’16 En↔Ro and WMT’14 En↔De show that our model outperforms state-of-the-art non-autoregressive models that use multiple iterations, as well as the original autoregressive models. Moreover, compared with autoregressive models, HRT achieves a consistent 1.5 times speedup regardless of batch size and device.
[]
[ { "authors": [ "Nader Akoury", "Kalpesh Krishna", "Mohit Iyyer" ], "title": "Syntactically supervised transformers for faster neural machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Aishwarya Bhandare", "Vamsi Sripathi", "Deepthi Karkada", "Vivek Menon", "Sun Choi", "Kushal Datta", "Vikram Saletore" ], "title": "Efficient 8-bit quantization of transformer neural machine language translation model", "venue": null, "year": 1906 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Marjan Ghazvininejad", "Omer Levy", "Yinhan Liu", "Luke Zettlemoyer" ], "title": "Mask-predict: Parallel decoding of conditional masked language models", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Jiatao Gu", "James Bradbury", "Caiming Xiong", "Victor OK Li", "Richard Socher" ], "title": "Non-autoregressive neural machine translation", "venue": "arXiv preprint arXiv:1711.02281,", "year": 2017 }, { "authors": [ "Junliang Guo", "Xu Tan", "Di He", "Tao Qin", "Linli Xu", "Tie-Yan Liu" ], "title": "Non-autoregressive neural machine translation with enhanced decoder input", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Junliang Guo", "Xu Tan", "Linli Xu", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Fine-tuning by curriculum learning for non-autoregressive neural machine translation", "venue": "arXiv preprint arXiv:1911.08717,", "year": 2019 }, { "authors": [ "Junliang Guo", "Linli Xu", "Enhong Chen" ], "title": "Jointly masked sequence-to-sequence model for nonautoregressive neural machine translation", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Lukasz Kaiser", "Samy Bengio", "Aurko Roy", "Ashish Vaswani", "Niki Parmar", "Jakob Uszkoreit", "Noam Shazeer" ], "title": "Fast decoding in sequence models using discrete latent variables", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jungo Kasai", "James Cross", "Marjan Ghazvininejad", "Jiatao Gu" ], "title": "Parallel machine translation with disentangled context transformer", "venue": "In ICML 2020: 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jungo Kasai", "Nikolaos Pappas", "Hao Peng", "James Cross", "Noah A. 
Smith" ], "title": "Deep encoder, shallow decoder: Reevaluating the speed-quality tradeoff in machine translation", "venue": "arXiv preprint arXiv:2006.10369,", "year": 2020 }, { "authors": [ "Yoon Kim", "Alexander M Rush" ], "title": "Sequence-level knowledge distillation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Ye Lin", "Yanyang Li", "Tengbo Liu", "Tong Xiao", "Tongran Liu", "Jingbo Zhu" ], "title": "Towards fully 8-bit integer inference for the transformer model", "venue": "In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Xuezhe Ma", "Chunting Zhou", "Xian Li", "Graham Neubig", "Eduard Hovy" ], "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Qiu Ran", "Yankai Lin", "Peng Li", "Jie Zhou" ], "title": "Guiding non-autoregressive neural machine translation decoding with reordering information", "venue": null, "year": 1911 }, { "authors": [ "Chenze Shao", "Yang Feng", "Jinchao Zhang", "Fandong Meng", "Xilin Chen", "Jie Zhou" ], "title": "Retrieving sequential information for non-autoregressive neural machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", "venue": null, "year": 2018 }, { "authors": [ "Raphael Shu", "Jason Lee", "Hideki Nakayama", "Kyunghyun Cho" ], "title": "Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior", "venue": null, "year": 1908 }, { "authors": [ "Zhiqing Sun", "Zhuohan Li", "Haoqing Wang", "Di He", "Zi Lin", "Zhihong Deng" ], "title": "Fast structured decoding for sequence models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chunqi Wang", "Ji Zhang", "Haiqing Chen" ], "title": "Semi-autoregressive neural machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Qiang Wang", "Fuxue Li", "Tong Xiao", "Yanyang Li", "Yinqiao Li", "Jingbo Zhu" ], "title": "Multi-layer representation fusion for neural machine translation", "venue": "In Proceedings of the 27th International Conference on Computational Linguistics,", "year": 2018 }, { "authors": [ "Qiang Wang", "Bei Li", "Tong Xiao", "Jingbo Zhu", "Changliang Li", "Derek F. Wong", "Lidia S. Chao" ], "title": "Learning deep transformer models for machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Yiren Wang", "Fei Tian", "Di He", "Tao Qin", "ChengXiang Zhai", "Tie-Yan Liu" ], "title": "Non-autoregressive machine translation with auxiliary regularization", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Tong Xiao", "Yinqiao Li", "Jingbo Zhu", "Zhengtao Yu", "Tongran Liu" ], "title": "Sharing attention weights for fast transformer", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Biao Zhang", "Deyi Xiong", "Jinsong Su" ], "title": "Accelerating neural transformer via an average attention network", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Wen Zhang", "Liang Huang", "Yang Feng", "Lei Shen", "Qun Liu" ], "title": "Speeding up neural machine translation decoding by cube pruning", "venue": "Proceedings of EMNLP 2018,", "year": 2018 }, { "authors": [ "Zhisong Zhang", "Rui Wang", "Masao Utiyama", "Eiichiro Sumita", "Hai Zhao" ], "title": "Exploring recombination for efficient decoding of neural machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Chunting Zhou", "Jiatao Gu", "Graham Neubig" ], "title": "Understanding knowledge distillation in nonautoregressive machine translation", "venue": "In ICLR 2020 : Eighth International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ghazvininejad" ], "title": "In the synthetic experiment, we trained all AT models with the standard Transformer-Base configuration: layer=6, dim=512, ffn=2048, 
head=8. The difference from Ghazvininejad et al. (2019) is that they trained the AT models for 300k steps, but we updated 50k/100k steps on En→Ro and En→De", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Although autoregressive translation (AT) has become the de facto standard for Neural Machine Translation (Bahdanau et al., 2015), its nature of generating target sentences sequentially (e.g., from left to right) makes it challenging to respond quickly in a production environment. One straightforward solution is the non-autoregressive translation (NAT) (Gu et al., 2017), which predicts the entire target sequence in one shot. However, such one-pass NAT models lack dependencies between target words and still struggles to produce smooth translations, despite many efforts developed (Ma et al., 2019; Guo et al., 2019a; Wang et al., 2019b; Shao et al., 2019; Sun et al., 2019).\nRecent studies show that extending one-pass NAT to multi-pass NAT, so-called iterative refinement (IR-NAT), is expected to break the performance bottleneck (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Guo et al., 2020; Kasai et al., 2020a). Unlike onepass NAT, which outputs the prediction immediately, IR-NAT takes the translation hypothesis from the previous iteration as a reference and regularly polishes the new translation until achieving the predefined iteration count I or no changes appear in the translation. Compared with AT, IR-NAT with I=10 runs 2-5 times faster with a considerable translation accuracy, as reported by Guo et al. (2020).\nHowever, we highlight that the fast decoding of IR-NAT heavily relies on small batch size and GPU, which is rarely mentioned in prior studies 1. Without loss of generality, we take Mask-Predict (MP)\n1Unfortunately, such a decoding setting is not common in practice. NMT systems deployed on GPUs tend to use larger batches to increase translation throughput, while the batch size of 1 is used more frequently in offline systems running on CPUs. e.g., smartphones.\n(Ghazvininejad et al., 2019) as an example, a typical IR-NAT paradigm based on the conditional masked language model. Figure 1 illustrates that when the batch exceeds 8, MP(I=10) is already running slower than AT, and the situation is even worse on CPU. Further analysis shows that the increase in batch size leads to the efficiency degradation of parallel computing in NAT models 2.\nTo tackle this problem, we first design a synthetic experiment to understand the relationship between target context and iteration times. We mask some proportion tokens on the translation generated by a pretrained AT and take it as the decoder input of the pretrained MP. Then we surprisingly found that even masking 70% AT hypothesis, and the remaining target context can help MP(I=1) to compete with the standard MP(I=10) (Figure 2). This result confirms that decoding with multiple iterations in NAT is unnecessary when providing a good (partial) reference hypothesis.\nInspired by this, we propose a two-stage translation prototype——Hybrid-Regressive Translation (HRT). After encoding, HRT first uses an autoregressive decoder (called Skip-AT) to produce a discontinuous translation hypothesis. Concretely, at decoding step i, the SKip-AT decoder immediately predicts the (i + k)-th token yi+k without generating yi+1, . . . , yi+k−1, where k is a hyperparameter and k > 1. Then, a non-autoregressive decoder like MP (called Skip-MP) predicts previously skipped tokens with one iteration according to the deterministic context provided by Skip-AT. Since both Skip-AT and Skip-MP share the same model parameters, HRT does not increase parameters significantly. 
To train HRT effectively and efficiently, we further propose joint training guided by curriculum learning and mixed distillation. Experimental results on WMT En↔Ro and En↔De show that HRT is far superior to existing IR-NATs and achieves comparable or even better accuracy than the original AT 3 with a consistent 50% decoding speedup on varying batch sizes and devices (GPU, CPU)." }, { "heading": "2 BACKGROUND", "text": "Given a source sentence x = {x1, x2, . . . , xM} and a target sentence y = {y1, y2, . . . , yN}, there are several ways to model P (y|x):\nAutoregressive translation (AT) is the dominant approach in NMT, which decomposes P (y|x) by chain rules:\nP (y|x) = N∏ t=1 P (yt|x, y<t) (1)\nwhere y<t denotes the generated prefix translation before time step t. However, the existence of y<t requires the model must wait for yt−1 to be produced before predicting yt, which hinders the possibility of parallel computation along with time step.\nNon-autoregressive translation (NAT) is first proposed by Gu et al. (2017), allowing the model to generate all target tokens simultaneously. NAT replaces y<t with target-independent input z and rewrites Eq. 1 as:\nP (y|x) = P (N |x) N∏ t=1 P (yt|x, z) (2)\nIn Gu et al. (2017), they monotonically copy the source embedding as z according to a fertility model. Subsequently, the researchers developed more advanced methods to enhance z, such as adversarial source embedding (Guo et al., 2019a), reordered source sentence (Ran et al., 2019), latent variables (Ma et al., 2019; Shu et al., 2019) etc, but there still is a huge performance gap between AT and NAT.\nIterative refinement based non-autoregressive translation (IR-NAT) extends the traditional onepass NAT by introducing the multi-pass decoding mechanism (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Guo et al., 2020; Kasai et al., 2020a). IR-NAT applies a conversion function\n2Early experiment shows that when the batch size increases from 1 to 32, the latency of AT is reduced by 22 times, while MP(I=10) only reduces by four times. Latency is measured by the average time of translating a sentence on a constant test set. See Appendix A for details.\n3Thanks to the proposed training algorithm, a single HRT model can support both hybrid-regressive decoding and autoregressive decoding at inference. Here, the AT model refers to the autoregressive teacher model that generates the distillation data.\nF on the deterministic hypothesis of previous iteration y′ as the alternative to z. Common implementations of F include identity (Lee et al., 2018), random masking (Ghazvininejad et al., 2019) or random deletion (Gu et al., 2019) etc. Thus, we can predict y by:\nP (y|x) = N ′∏ t=1 P (y′m(t)|x,F(y′)) (3)\nwhere N ′ is the number of refined tokens in F(y′), m(t) is the real position of t-th refined token in y′. In this way, the generation process of IR-NAT is simple: first, the NAT model produces an inaccurate translation as the initial hypothesis, and then iteratively refines it until converge or reaching the maximum number of iterations.\nMask-Predict (MP) is a typical instance of IR-NAT, trained by a conditional masked language model objective like BERT (Devlin et al., 2019). In this work, we use MP as the representation of IR-NAT due to its excellent performance and simplification. In MP, F randomly masks some tokens over the sequence in training but selects those predicted tokens with low confidences at inference." 
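For readers unfamiliar with Mask-Predict, a rough sketch of its inference loop is given below; nat_predict is a hypothetical stand-in for the conditional masked language model, and the linear mask-decay schedule follows Ghazvininejad et al. (2019):

def mask_predict(nat_predict, x, num_iters=10):
    y, scores = nat_predict(x, None)          # initial fully non-autoregressive pass
    n = len(y)
    for i in range(1, num_iters):
        # Re-mask fewer tokens as iterations progress (linear decay).
        n_mask = int(n * (num_iters - i) / num_iters)
        # Pick the n_mask tokens predicted with the lowest confidence ...
        worst = set(sorted(range(n), key=lambda t: scores[t])[:n_mask])
        y_in = ["<mask>" if t in worst else tok for t, tok in enumerate(y)]
        # ... and re-predict them conditioned on the remaining tokens.
        y, scores = nat_predict(x, y_in)
    return y

HRT keeps the same conditional masked language model component but, as described in Section 4, replaces the repeated re-masking with a single pass guided by an autoregressively generated skeleton.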
}, { "heading": "3 IS ITERATIVE REFINEMENT ALL YOU NEED?", "text": "As mentioned earlier, IR-NAT with multiple iterations slows down severely in some cases. It is natural to think of reducing iterations to alleviate it. This section starts from synthetic experiments on WMT’16 En→Ro and WMT’14 En→De to verify the assumption that a sufficiently good decoder input can help reduce iterations. Here we construct the “good ” decoder input from the translation hypothesis produced by an AT model.\nModels We use the official MP models released by Ghazvininejad et al. (2019) 4. Since the authors did not publish their AT baselines, we use the same data to retrain AT models with the standard Transformer-Base configuration (Vaswani et al., 2017) and obtain comparable performance with theirs (see Appendix B for more details).\nDecoding AT models decode with beam sizes of 5 on both tasks. Then, we replace a certain percentage of AT translation tokens with <mask> and use it as input to the MP model (see below for replacement strategy). Unlike the standard MP model that uses a large beam size (e.g., 5) and iterates several times (e.g., 10), the MP model used here only iterates once with beam size 1. We substitute all input <mask> with MP’s predictions to obtain the final translation. We report casesensitive tokenized BLEU score by multi-bleu.perl.\nMask Strategy We tested 4 strategies to mask AT translations: Head, Tail, Random and Chunk. Given the masking rate pmask and the translation length N , the number of masked tokens is Nmask=max(1, bN×pmaskc). Then Head/Tail always masks the first/last Nmask tokens, while Random masks the translation randomly. Chunk is slightly different from the above strategies. It first divides the target sentence into C chunks, where C = Ceil(N/k) and k is the chunk size. Then in each chunk, we retain the first token, but mask other k-1 tokens. Thus, the actual masking rate in Chunk is 1-1/k instead of pmask. To exclude randomness, we ran Random three times with different seeds and report the average results." }, { "heading": "3.1 RESULTS", "text": "The experimental results are illustrated in Figure 2, where we can see that:\nA balanced bidirectional context is critical. Compared with Tail and Head, it is obvious that Rand and Chunk both have better performance. We attribute it to the benefit of the bidirectional context in Rand and Chunk (Devlin et al., 2019), because Tail and Head can only provide unidirectional context (i.e., prefix or suffix). In addition, compare Chunk with Random, we find that Chunk is moderately but consistently superior to Random, even if more tokens are masked. For instance, on the WMT En-De task, when the chunk size is 4 (the masking rate is 75%), the BLEU score of Chunk is 27.03, which is +0.3 BLEU higher than that of Random with the masking rate of\n4https://github.com/facebookresearch/Mask-Predict\n70%. Because the difference between Chunk and Random lies only in the distribution of <mask>, this experiment indicates that making <mask> uniformly on sequence is better than random 5. Small beams and one iteration are sufficient. Compared with the standard MP with the beam size of 5 and 10 iterations, it is interesting to find that even if only 30%-40% of the AT translations are exposed, our MP using greedy search and one iteration can achieve quite comparable performance." 
}, { "heading": "4 HYBRID-REGRESSIVE TRANSLATION", "text": "A limitation in the above synthetic experiment is that the MP decoder input comes from an AT hypothesis, which is impossible in practice 6. To solve this problem as well as inspired by the Chunk strategy’s success, we propose a two-stage translation paradigm called Hybrid-Regressive Translation (HRT). Briefly speaking, HRT can autoregressively generate a discontinuous sequence with chunk size k (stage I), and non-autoregressively fill the skipped tokens (stage II) in one model. Thus, the standard AT can be regarded as the special case of HRT when k=1 without stage II." }, { "heading": "4.1 ARCHITECTURE", "text": "Overview. Our HRT consists of three parts: encoder, Skip-AT decoder (for stage I), and Skip-MP decoder (for stage II). All components adopt the Transformer architecture (Vaswani et al., 2017): the encoder contains self-attention sublayers and feedforward sublayers, and additional cross-attention sublayers are added to the decoder. The two decoders have the same network structure and share model parameters, leading to the same parameter size compared to the standard MP. The only difference between the two decoders lies in the masking mode in the self-attention sublayer. The Skip-AT decoder masks future tokens to guarantee strict left-to-right generation, while the Skip-MP decoder eliminates this limitation to leverage the bi-directional context. Simplified relative position representation. Another difference from the standard MP architecture is that our decoder self-attention equips with relative position representation (RPR) (Shaw et al.,\n5Chunk guarantees that each unmasked token (except the first or last one in the sequence) can meet two deterministic tokens within the window size of k. However, in extreme cases, when all <mask> happen to concentrate on the left/right side of the sequence, Random will degrade into Head/Tail\n6We can directly return AT predictions as translation results without going through MP.\n2018) to enable the model to capture the positional relationship between words easily 7. Precisely, the decoder self-attention with RPR is calculated by:\nOi = Softmax (Qi(KT +Ri)√\ndk\n) V (4)\nwhere Ri is the relative position embedding 8. Note that Eq. 4 only injects the relative positional representation in Key (KT + Ri) without involving Value V . We found that this simplification has no negative impact on performance but significantly saves memory footprint. No target length predictor. Most previous NAT methods need to jointly train the translation model with an independent translation length predictor. However, such a length predictor is unnecessary for us because the translation length is a by-product of Skip-AT, e.g., Nnat=k×Nat, where Nat is the sequence length produced by Skip-AT 9. Another bonus is that we can avoid carefully tuning the weighting coefficient between the length loss and the token prediction loss." }, { "heading": "4.2 TRAINING", "text": "Training HRT models is not trivial because a single HRT model needs to learn to generate sequences by both autoregression and non-autoregression. This section will introduce three details of training HRT models, including chunk-aware training samples, curriculum learning, and mixed distillation. We describe the entire training algorithm in Appendix C.\nChunk-aware training samples. As listed in Table 1, the training samples for Skip-AT and SkipMP are different from the standard AT and MP. 
Compared with AT, Skip-AT shrinks the sequence length from N to N/k. It should be noted that, although the sequence feeding to Skip-AT is shortened, the input position still follows the original sequence rather than the surface position. For example, in Table 1, the position of Skip-AT input (<s2>, y2, y4) is (0, 2, 4), instead of (0, 1, 2). Moreover, MP has the opportunity to mask any token over the target sequence without considering the position. However, the masking pattern in Skip-MP is deterministic, i.e., masking all non-first tokens in each chunk. Therefore, we can say that the training sample for Skip-AT and Skip-MP is in a chunk-aware manner. Curriculum learning. Unfortunately, direct joint training of Skip-AT and Skip-MP is problematic because the chunk-aware training samples cannot make full use of all the tokens in the sequence. For example, in Table 1, the target tokens y1 and y3 have no chance to be learned as the decoder input of either Skip-AT or Skip-MP. However, there is no such problem in AT and MP. Therefore, we propose to gradually transition from joint training {AT, MP} to {Skip-AT, Skip-MP} through curriculum learning (Bengio et al., 2009). In other words, the model is trained from chunk size 1 to chunk size k (k>1). More concretely, given a batch of original sentence pairs B = (xi, yi)|ni=1 and let the proportion of chunk size k in B be pk, we start with pk = 0 and construct the training samples of AT and MP for all pairs. And then we gradually increase pk to introduce more learning signals for Skip-AT and Skip-MP until pk=1. In implement, we schedule pk by:\npk = ( t T )λ (5)\nwhere t and T are the current and total training steps, respectively. λ is a hyperparameter and we use λ=1 to increase pk in a linear manner for all experiments. Mixed Distillation. NAT models generally use the distillation data generated by AT models due to more smoothing data distribution (Zhou et al., 2020). However, making full use of distillation data may miss the diversity in raw data. To combine the best of both worlds, we propose a simple and effective approach – Mixed Distillation (MixDistill). During training, MixDistill randomly samples a target sentence from the raw version y with probability praw or its distillation version y∗ with probability 1-praw, where praw is a hyperparameter 10. By learning from raw target sentences, we empirically found that HRT is less prone to over-fit in some simple tasks (e.g., WMT’16 En→Ro).\n7We keep the sinusoidal absolute position embedding unchanged. 8Ri = Embed(clip(i, 1), . . . , clip(i,N)), where clip(i, j) = max(w,min(w, j−i)), w is the window size. 9More precisely, Nnat here is the maximum target length rather than the realistic length because multiple </s> may be predicted in the last k tokens. 10Training with full raw data or full distillation data can be regarded as the special case of MixDistill when praw=1 or praw=0." }, { "heading": "4.3 INFERENCE", "text": "After encoding, the Skip-AT decoder starts from<sk> to autoregressively generate a discontinuous target sequence yat = (z1, z2, . . . , zm) with chunk size k until meeting</s>. Then we construct the input of Skip-MP decoder xmp by appending k − 1 <mask> before every zi. The final translation is generated by replacing all <mask> with the predicted tokens by Skip-MP decoder with one iteration. If there are multiple </s> existing, we truncate to the first </s>. Note that the beam size bat in Skip-AT can be different from the beam size bmp in Skip-MP as long as st. bat ≥ bmp. 
If bat > bmp, then we only feed the Skip-MP with the top bmp Skip-AT hypothesis. Finally, we choose the hypothesis with the highest score:\nscore(ŷ) = m∑ i=1\n(zi|x, z<i)︸ ︷︷ ︸ Skip-AT score +\nm−1∑ i=0 k−1∑ j=1\n(ŷi×k+j |x,xmp)︸ ︷︷ ︸ Skip-MP score\n(6)\nwhere zi=ŷi×k. In Appendix D, we summarized the comparison with the existing three methods from the aspects of decoding step and calculation cost, including AT, MP, and semi-autoregressive translation (SAT) (Wang et al., 2018a). Besides, thanks to the joint training of chunk size 1 and k simultaneously, the HRT model can also behave like a standard AT model by forcing decoding by chunk size one (denoted as Cd=1). In this way, we can only use the Skip-AT decoder to generate the entire sequence without the help of Skip-MP. Thus, Cd=1 can be regarded as the performance upper bound when the decoding chunk size is k (denoted as Cd=k)." }, { "heading": "5 EXPERIMENTS", "text": "Datasets. We conducted experiments on four widely used tasks: WMT’16 English↔Romanian (En↔Ro, 610k) and WMT’14 English↔German (En↔De, 4.5M). We replicated the same data processing as Ghazvininejad et al. (2019) for fair comparisons.\nAT teachers for distillation. Since Ghazvininejad et al. (2019) only release the distillation data of En↔Ro, not En↔De, we retrained the AT teacher models of En↔De to produce the distillation data. Specifically, Ghazvininejad et al. (2019) use Transformer-Large as the teacher, but we use the deep PreNorm Transformer-Base with a 20-layer encoder, which is faster to train and infer with comparable performance (Wang et al., 2019a).\nModels and hyperparameters. We ran all experiments on 8 TITAN X (Pascal) GPUs. Unless noted otherwise, we use the chunk size k=2 and λ=1. praw=0.5 for En↔Ro and praw=0.8 for\nEn↔Ro according to validation sets. The windows size of RPR is 16 11. HRT models are finetuned on pretrained AT models for the same training steps 12. Other training hyperparameters are the same as Vaswani et al. (2017) or Wang et al. (2019a) (using deep-encoder). Please refer to them for more details. Translation quality. Table 2 reports the BLEU scores on four tasks. First of all, we can see that IR-NAT models significantly outperform those one-pass NAT models (e.g., SAT, FCL-NAT, FlowSeq). However, our small beam model (bat=bmp=1) can defeat the existing multiple-iteration models. Furthermore, when the beam sizes increase to 5, HRT equipped with a standard 6-layer encoder achieves +1.0 BLEU point improvement in En↔Ro compared to the previous best results (Guo et al., 2020). Even on harder En↔De tasks, we also outperform them with a 0.4 BLEU score. We can easily trade-off between performance and speed by using bat=5 but bmp=1. Not only NAT models, but also we are surprised to find that HRT can even surpass the AT models trained from scratch. We attribute it to two reasons: (1) HRT is fine-tuned on a well-trained AT model, making training easier; (2) Mixing up AT and NAT has a better regularization effect than training alone. Besides, in line with Guo et al. (2020), which demonstrate that the encoder is critical for NAT, we can obtain a further improvement of about +0.8 BLEU when using a deeper encoder.\nModel MT04 MT05 MT08 AT 43.86 52.91 33.94 MP(I=10) 42.47 52.16 33.09 HRT1-1 43.96 53.16 33.99 HRT5-1 44.28 53.44 34.63 HRT5-5 44.31 53.77 34.74\nTranslation speed. 
Unlike the previous works that only run the model on GPU with batch size of 1, we systematically test the decoding speed using varying batch sizes and devices on WMT’14 En→De test set (see Figure 4). By default, the beam size is 5. It can be seen that although HRT is slower than MP10 when running on GPU with a batch size of 1, MP10 dramatically slows down as the batch size increases. In contrast, HRT5-1 is consistently more than 50% faster than AT without changing with the environment. These results show that our HRT can be an effective and efficient substitute for AT and IR-NAT." }, { "heading": "6 ANALYSIS", "text": "Effect of Mixed Distillation. In Table 3, we compared different data strategies, including raw data (Raw), sequence-level knowledge distillation (Dist.), and mixed distillation (Mix Dist.). Overall, Mix Dist. is superior to other methods across the board, which indicates that training with raw data and distillation data is complementary. In addition, we also find that the performance of the distillation data is lower than the raw data on En→Ro task, which is against the previous results. As interpreted by Zhou et al. (2020), we suspect that when the translation model is strong enough, training by distillation data completely may make the learning too easy and lead to over-fitting. Effect of chunk size. We tested the chunk size k from {2, 3, 4}, and the results are listed in Table 4. Obviously, we can see that: (1) A large k has more significant acceleration on GPU because fewer autoregressive steps are required; (2) As k increases, the performance of hybrid-regressive decoding\n11For autoregressive baselines, adding RPR in the Transformer decoder did not bring obvious improvement over the vanilla Transformer. For example, on WMT’14 En→De, Transformer=27.45 and Transformer+RPR=27.34.\n12Since HRT needs to train Skip-AT and Skip-MP jointly (please see Algorithm 1 in Appendix C), the wallclock time is about two times longer than AT in the same training epochs. One more thing to note is that the officially released MP models are trained for 300k steps from scratch.\nTable 3: Performance against different data strategies. Cd=1 represents decoding the HRT model in an autoregressive manner.\nLang. Cd Raw Dist. Mix Dist. En→Ro k 33.92 33.41 34.53 1 34.29 33.41 34.27\nEn→De k 26.37 28.00 28.10 1 27.60 28.42 28.51\nTable 4: The effect of training by different chunk sizes. Latency is tested in batch size of 16 using Cd=k and bat=bmp=1.\nChunk BLEU Latency (sec.) (k) Cd= k Cd= 1 GPU CPU 2 34.11 33.86 20.0 70.5 3 31.15 33.78 13.0 54.6 4 28.22 34.12 12.2 53.9\ndrops sharply (e.g., k=4 is 6 BLEU points lower than k=2.), but k has little effect on autoregressive modes. It indicates that the training difficulty of Skip-AT increases as k gets bigger. We think that skip-generation may require more fancy model architecture or training method, which is left for our future work.\nEffect of decoding mode. We tested the well-trained HRT model with two decoding modes: autoregressive (Cd=1) and hybrid-regressive (Cd=k). Concretely, We divided the test set of WMT’14 En→De into several groups according to the source sentence’s length and then compared the two decoding modes in terms of translation speed and accuracy in each group (see Figure 5). First of all, we can see that regardless of the source sentence length, the running speed of Cd=k is consistently faster than Cd=1 on both GPU and CPU, thanks to the shorter autoregressive length. 
This advantage is more evident on CPU: When the source length is less than 10, Cd=k runs 1.6 times faster than Cd=1, while the speedup ratio increases to 2.0 when the source length > 50. As for accuracy, Cd=k has closed performance to Cd=1 when the length is between 10 and 30, but shorter or longer sentences will hurt the performance. This result indicates that if we dynamically adjust the decoding chunk size Cd according to the source sentence’s length, the HRT model can be expected to improve the performance further at the expense of a certain speed.\nAblation study. We also did an ablation study on WMT’16 En→Ro test set. As shown in Table 5, we can see that all introduced techniques help to improve performance. In particular, using mixed distillation prevents the HRT model from over-fitting and leads to +1.1 BLEU points improvement compared to the standard distillation (-MixDistill). In addition, the other three methods, including training the HRT model from a pretrained AT model (FT), using a relative positional representation on decoder (RPR), and using curriculum learning (CL), can bring about 0.3-0.4 BLEU improvements each. It should be noted that removing curriculum learning makes the trained HRT model fail to decode by Cd=1, whose BLEU score is only 5.18. Since the BLEU score decreases slightly (0.3-0.4 except -MixDistill) when each component is excluded independently, it is difficult to say\nthat the difference of BLEU is not caused by random fluctuation. To verify it, we try to exclude them all from the standard HRT (-ALL). Interestingly, the obtained model drops by 1.96 BLEU points, which is very close to the cumulative BLEU loss (2.13) of excluding each component separately. It indicates that these newly introduced components are complementary. In addition, we also test these methods in MP training. Please see Appendix E for details." }, { "heading": "7 RELATED WORK", "text": "Iterative refinement. Lee et al. (2018) first extend NAT from the conventional one-pass manner to the multi-pass manner. They add an additional decoder to learn to recover from a collapsed target sequence to gold one. Mask-Predict (Ghazvininejad et al., 2019) simplifies the two-decoder structure by introducing the conditional masked language model objective. During each iteration, Mask-Predict retains partial inputted target tokens according to the prediction confidence, while LevTransformer (Gu et al., 2019) uses multiple discriminators to determine the edited tokens. Combination of AT and NAT. The idea of incorporating AT in the NAT model is not new (Kaiser et al., 2018; Ran et al., 2019; Akoury et al., 2019). The main difference from existing methods lies in the content of AT output, such as latent variables (Kaiser et al., 2018), reordered source tokens (Ran et al., 2019), syntactic labels (Akoury et al., 2019) etc. In contrast, our approach uses the deterministic target tokens, which has been proven effective in Ghazvininejad et al. (2019). Decoding acceleration. In addition to transforming the decoding paradigm from autoregressive to non-autoregressive, there are many works to explore how to achieve faster decoding from other aspects. Zhang et al. (2018c;b) propose to optimize the beam search progress by recombining or pruning the translation hypothesis. Considering the network architecture, Zhang et al. (2018a) use light AAN instead of the standard self-attention module; Xiao et al. (2019) share the self-attention weight matrix across decoder layers; Kasai et al. 
(2020b) suggest using deep-encoder and shallow-decoder network to keep high BLEU score and low delay. Moreover, some common model compression techniques, such as distillation (Kim & Rush, 2016) and quantization (Bhandare et al., 2019; Lin et al., 2020), have also helpful for acceleration. However, the above methods mainly focus on the traditional autoregressive translation, which is orthogonal to our work." }, { "heading": "8 CONCLUSION", "text": "We have pointed out that NAT, especially IR-NAT, cannot efficiently accelerate decoding when using a large batch or running on CPUs. Through a well-designed synthetic experiment, we highlighted that given a good decoder input, the number of iterations in IR-NAT could be dramatically reduced. Inspired by this, we proposed a two-stage translation paradigm HRT to combine AT and NAT’s advantages. The experimental results show that HRT owning equivalent or even higher accuracy and 50% acceleration ratio on varying batches and computing devices is a good substitute for AT." }, { "heading": "A SPEED DEGRADATION ANALYSIS OF NAT MODEL", "text": "For the NAT model, we assume that the computation cost of each iteration is proportional to the size of decoder input tensor (BH ×BM,L,H), where BH is the batch size, BM is the beam size, L is the predicted target length, andH is the network dimension. In this way, the total cost of I iterations (generally, I < L) is Cnat ∝ I × O(BH × BM × L × H). For convenience, we omit BM and H and simplify Cnat to I × O(BH × L). Likely, the computational cost of AT model is about Cat ∝ L×O(BH × 1) 13. Then, we can denote the speedup ratio r as r = CatCnat = L I × O(BH×1) O(BH×L) . Thus, fewer iterations (small I) and faster parallel computation (large O(BH×1)O(BH×L) ) are the keys to IR-NAT.\nHowever, in practice, we find it difficult to increase O(BH×1)O(BH×L) , especially in larger batches. As shown in Table 6, the direct evidence is that when decoding the test set of the WMT’14 En→De task, the time spent by AT decreases with the increase of batch size, while MP(I=10) cannot benefit from this. We test it on Titian X GPU and report the average of 3 runs. BM=5. Note that CPU has similar results. Specifically, we can see that when BH increases from 1 to 32, AT’s latency reduces by 962/43 (about 22) times, while MP (I=10) only reduces by 464/129 (about 4) times. It means that O(BH×1)O(BH×L) becomes lower as BH increases. Until O(BH×1) O(BH×L) < I L , NAT will start to be slower than AT." }, { "heading": "B AT TRANSFORMER IN SYNTHETIC EXPERIMENTS", "text": "In the synthetic experiment, we trained all AT models with the standard Transformer-Base configuration: layer=6, dim=512, ffn=2048, head=8. The difference from Ghazvininejad et al. (2019) is that they trained the AT models for 300k steps, but we updated 50k/100k steps on En→Ro and En→De, respectively. Although fewer updates, as shown in Table 7, our AT models have comparable performance with theirs." }, { "heading": "C TRAINING ALGORITHM", "text": "Algorithm 1 describes the process of training the HRT model. The HRT model is pre-initialized by a pre-trained AT model (Line 1). During training, the training batch Bi randomly select a raw target sentence Yi or its distilled version Y ′ (Line 4-6). Then according to Eq. 5, we can divide B into two parts: Bc=1 and Bc=k, where |Bc=k|/|B| = pk (Line 7-8). Next, we construct four kinds of training samples based on corresponding batches: Batc=k, B at c=1, B mp c=k and B mp c=1. 
Finally, we collect all training samples together and accumulate their gradients to update the model parameters, which results in the batch size being twice that of standard training.\nAlgorithm 1 Training Algorithm for Hybrid-Regressive Translation Input: Training dataD including distillation targets, pretrained AT model Mat, chunk size k, mixed\ndistillation rate praw Output: Hybrid-Regressive Translation model Mhrt\n1: Mhrt ← Mat . finetune on pre-trained AT 2: for t in 1, 2, . . . , T do 3: X = {x1, . . . ,xn}, Y = {y1, . . . ,yn}, Y ′ = {y′1, . . . ,y′n} ← fetch a batch from D 4: for i in 1, 2, . . . , n do 5: Bi = (Xi,Y ∗i )← sampling Y ∗i ∼ {Yi, Y ′i } with P (Yi) = praw . mixed distillation 6: end for 7: pk ← get the chunk-aware proportion by Eq. 5 . curriculum learning 8: Bc=k,Bc=1 ← B:bn×pkc,Bbn×pkc: . split batch 9: Batc=k,B mp c=k ← construct {Skip-AT, Skip-MP} training samples based on Bc=k\n10: Batc=1,B mp c=1 ← construct {AT, MP} training samples based on Bc=1 11: Optimize Mhrt using Batc=k ∪Batc=1 ∪B mp c=k ∪B mp c=1 . joint training 12: end for" }, { "heading": "D COMPUTATION COMPLEXITY", "text": "In Table 8, we summarized the comparison with autoregressive translation (AT), iterative refinement based non-autoregressive translation (IR-NAT) and semi-autoregressive translation (SAT) (Wang et al., 2018a).\nAT. Although both HRT and AT contain a slow autoregressive generation process, HRT’s length is k times shorter than AT. Considering that the computational complexity of self-attention is quadratic with its length, HRT can save more time in autoregressive decoding. IR-NAT. Since Skip-AT provides a high-quality target context, HRT does not need to use large beam size and multiple iterations like IR-NAT. The experimental results also show that our light NAT can make up for the increased cost in Skip-AT and can achieve stable acceleration regardless of the decoding batch size and running device. SAT. SAT generates segments locally by non-autoregression, but it is still autoregressive between segments. We claim that the SAT reduces the decoding steps by k, but each token’s calculation remains unchanged. In other words, in the time step i, there are i− 1 tokens used for self-attention. By contrast, only i/k tokens are involved in our Skip-AT." }, { "heading": "E APPLY THE OPTIMIZATION METHODS TO MASK-PREDICT", "text": "We conducted experiments on the WMT En-De task to verify whether the optimization methods used in HRT training are complementary to MP, including fine-tuning from the pre-trained autoregressive\n13While the decoder self-attention module considers the previous i tokens, we omit it here for the sake of clarity.\nmodel (FT), relative positional representation (RPR), and mixed distillation (MD). The joint training of Skip-AT and Skip-MP through curriculum learning is not involved, because it is incompatible with MP training. Table 9 shows that the optimization methods used in HRT training are complementary to MP training. With the help of FT+RPR+MD, our MP model with 100k steps can achieve almost the same BLEU score as the officially released model with 300k steps. What’s more, when we train more steps, our MP is improved by +0.61 BLEU points compared with the official model, but still falls behind our HRT model." } ]
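To complement the sections above, here is a minimal sketch of the two-stage hybrid-regressive decoding described in Section 4.3: Skip-AT greedily emits every k-th target token, Skip-MP fills the skipped positions in one parallel pass, and the output is truncated at the first </s>. The `toy_step`/`toy_fill` callables, the special-token ids, and the greedy beam-size-1 setting are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

BOS, EOS, MASK = 1, 2, 0      # hypothetical special-token ids


def skip_at(step_fn, src, k, max_len=32):
    """Stage I: greedily emit every k-th target token (the chunk heads)."""
    z = [BOS]                  # stands in for the <s_k> start symbol
    while len(z) < max_len:
        nxt = int(step_fn(src, z))      # next chunk head, k positions ahead
        z.append(nxt)
        if nxt == EOS:
            break
    return z[1:]


def hybrid_regressive_decode(step_fn, fill_fn, src, k):
    """Stage II: put k-1 <mask> before every Skip-AT token, let the Skip-MP
    decoder fill all masks in one parallel pass, truncate at the first </s>."""
    z = skip_at(step_fn, src, k)
    x_mp = []
    for tok in z:
        x_mp += [MASK] * (k - 1) + [tok]
    x_mp = np.array(x_mp)
    filled = fill_fn(src, x_mp)                  # one non-autoregressive pass
    out = np.where(x_mp == MASK, filled, x_mp)
    eos = np.flatnonzero(out == EOS)
    return out[: eos[0] + 1] if len(eos) else out


def toy_step(src, prefix):                        # stand-in Skip-AT decoder
    return EOS if len(prefix) > 4 else 10 + len(prefix)


def toy_fill(src, x_mp):                          # stand-in Skip-MP decoder
    return np.full(len(x_mp), 7)


if __name__ == "__main__":
    print(hybrid_regressive_decode(toy_step, toy_fill, src=[3, 4, 5], k=2))
```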
2020
null
SP:41b23082a1439aa8601439e27c9abaa33e06959c
[ "This paper proposes a (decentralized) method for online adjustment of agent incentives in multi-agent learning scenarios, as a means to obtain higher outcomes for each agent and for the group as a whole. The paper uses the “price of anarchy” (the worst value of an equilibrium divided by the best value in the game) as a proxy for the efficiency of the game outcome, and derive an upper bound on a local price of anarchy that agents can differentiate. In several experiments (a traffic network, the coin game, Cleanup), their method leads to improved individual agent and group outcomes relative to baselines, while avoiding cases of stark division of labor that sometimes emerges when agents directly optimize the sum of all agent rewards. " ]
Even in simple multi-agent systems, fixed incentives can lead to outcomes that are poor for the group and each individual agent. We propose a method, D3C, for online adjustment of agent incentives that reduces the loss incurred at a Nash equilibrium. Agents adjust their incentives by learning to mix their incentive with that of other agents, until a compromise is reached in a distributed fashion. We show that D3C improves outcomes for each agent and the group as a whole on several social dilemmas including a traffic network with Braess’s paradox, a prisoner’s dilemma, and several reinforcement learning domains.
[]
[ { "authors": [ "Blaise Aguera y Arcas" ], "title": "Social intelligence", "venue": "In Talk presented at the 33rd Conference on Neural Information Processing Systems Conference,", "year": 2020 }, { "authors": [ "Richard D. Alexander", "Gerald Bargia" ], "title": "Group selection, altruism, and the levels of organization of life", "venue": "Annual Review of Ecology and Systematics,", "year": 1978 }, { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in ai safety", "venue": "arXiv preprint arXiv:1606.06565,", "year": 2016 }, { "authors": [ "Kenneth J Arrow" ], "title": "Social choice and individual values, volume 12", "venue": "Yale university press,", "year": 1970 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Operations Research Letters,", "year": 2003 }, { "authors": [ "Martin Beckmann", "Charles B McGuire", "Christopher B Winsten" ], "title": "Studies in the economics of transportation", "venue": "Technical report,", "year": 1956 }, { "authors": [ "Dimitris Bertsimas", "Vivek F Farias", "Nikolaos Trichakis" ], "title": "The price of fairness", "venue": "Operations research,", "year": 2011 }, { "authors": [ "Dimitris Bertsimas", "Vivek F Farias", "Nikolaos Trichakis" ], "title": "On the efficiency-fairness trade-off", "venue": "Management Science,", "year": 2012 }, { "authors": [ "Dietrich Braess" ], "title": "Über ein paradoxon aus der verkehrsplanung", "venue": "Unternehmensforschung,", "year": 1968 }, { "authors": [ "Ennio Cavazzuti", "Massimo Pappalardo", "Mauro Passacantando" ], "title": "Nash equilibria, variational inequalities, and dynamical systems", "venue": "Journal of optimization theory and applications,", "year": 2002 }, { "authors": [ "Edward H Clarke" ], "title": "Multipart pricing of public goods", "venue": "Public choice,", "year": 1971 }, { "authors": [ "Tom Eccles", "Edward Hughes", "János Kramár", "Steven Wheelwright", "Joel Z Leibo" ], "title": "The imitation game: Learned reciprocity in markov games", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Tom Eccles", "Edward Hughes", "János Kramár", "Steven Wheelwright", "Joel Z Leibo" ], "title": "Learning reciprocity in complex sequential social dilemmas", "venue": "arXiv preprint arXiv:1903.08082,", "year": 2019 }, { "authors": [ "Gerald M Edelman", "Joseph A Gally" ], "title": "Degeneracy and complexity in biological systems", "venue": "Proceedings of the National Academy of Sciences,", "year": 2001 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Francisco Facchinei", "Jong-Shi Pang" ], "title": "Finite-dimensional variational inequalities and complementarity problems", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "Jakob Foerster", "Richard Y Chen", "Maruan Al-Shedivat", "Shimon Whiteson", "Pieter Abbeel", "Igor Mordatch" ], "title": "Learning with opponent-learning awareness", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 
2018 }, { "authors": [ "Hirokata Fukushima", "Kazuo Hiraki" ], "title": "Whose loss is it? human electrophysiological correlates of non-self reward processing", "venue": "Social Neuroscience,", "year": 2009 }, { "authors": [ "Kurtuluş Gemici", "Elias Koutsoupias", "Barnabé Monnot", "Christos Papadimitriou", "Georgios Piliouras" ], "title": "Wealth inequality and the price of anarchy", "venue": "arXiv preprint arXiv:1802.09269,", "year": 2018 }, { "authors": [ "Ana Grande-Pérez", "Ester Lázaro", "Pedro Lowenstein", "Esteban Domingo", "Susanna C. Manrubia" ], "title": "Suppression of viral infectivity through lethal defection", "venue": "Proceedings of the National Academy of Sciences,", "year": 2005 }, { "authors": [ "Jerry Green", "Jean-Jacques Laffont" ], "title": "Characterization of satisfactory mechanisms for the revelation of preferences for public goods", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1977 }, { "authors": [ "Jerry R Green", "Jean-Jacques Laffont" ], "title": "Incentives in public decision making", "venue": null, "year": 1979 }, { "authors": [ "Garrett Hardin" ], "title": "The tragedy of the commons", "venue": "Science, 162(3859):1243–1248,", "year": 1968 }, { "authors": [ "Jason D Hartline", "Tim Roughgarden" ], "title": "Optimal mechanism design and money burning", "venue": "In Proceedings of the fortieth annual ACM symposium on Theory of computing,", "year": 2008 }, { "authors": [ "David Earl Hostallero", "Daewoo Kim", "Sangwoo Moon", "Kyunghwan Son", "Wan Ju Kang", "Yung Yi" ], "title": "Inducing cooperation through reward reshaping based on peer evaluations in deep multi-agent reinforcement learning", "venue": "In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2020 }, { "authors": [ "Edward Hughes", "Joel Z Leibo", "Matthew Phillips", "Karl Tuyls", "Edgar Dueñez-Guzman", "Antonio García Castañeda", "Iain Dunning", "Tina Zhu", "Kevin McKee", "Raphael Koster" ], "title": "Inequity aversion improves cooperation in intertemporal social dilemmas", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Lorens A Imhof", "Drew Fudenberg", "Martin A Nowak" ], "title": "Tit-for-tat or win-stay, lose-shift", "venue": "Journal of theoretical biology,", "year": 2007 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castaneda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman" ], "title": "Humanlevel performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Sonia K. Kang", "Jacob B. Hirsh", "Alison L. Chasteen" ], "title": "Your mistakes are mine: Self-other overlap predicts neural response to observed errors", "venue": "Journal of Experimental Social Psychology,", "year": 2010 }, { "authors": [ "Harold H. Kelley", "John W. 
Thibaut" ], "title": "Interpersonal Relations: A Theory of Interdependence", "venue": null, "year": 1978 }, { "authors": [ "Raghavendra V Kulkarni", "Anna Förster", "Ganesh Kumar Venayagamoorthy" ], "title": "Computational intelligence in wireless sensor networks: A survey", "venue": "IEEE communications surveys & tutorials,", "year": 2010 }, { "authors": [ "Anastasios Kyrillidis", "Stephen Becker", "Volkan Cevher", "Christoph Koch" ], "title": "Sparse projections onto the simplex", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Joel Z Leibo", "Vinicius Zambaldi", "Marc Lanctot", "Janusz Marecki", "Thore Graepel" ], "title": "Multi-agent reinforcement learning in sequential social dilemmas", "venue": "arXiv preprint arXiv:1702.03037,", "year": 2017 }, { "authors": [ "Adam Lerer", "Alexander Peysakhovich" ], "title": "Maintaining cooperation in complex social dilemmas using deep reinforcement learning", "venue": "arXiv preprint arXiv:1707.01068,", "year": 2017 }, { "authors": [ "Alistair Letcher", "Jakob Foerster", "David Balduzzi", "Tim Rocktäschel", "Shimon Whiteson" ], "title": "Stable opponent shaping in differentiable games", "venue": "arXiv preprint arXiv:1811.08469,", "year": 2018 }, { "authors": [ "Ping Li", "Syama Sundar Rangapuram", "Martin Slawski" ], "title": "Methods for sparse and low-rank recovery under simplex constraints", "venue": "arXiv preprint arXiv:1605.00507,", "year": 2016 }, { "authors": [ "Siqi Liu", "Guy Lever", "Josh Merel", "Saran Tunyasuvunakool", "Nicolas Heess", "Thore Graepel" ], "title": "Emergent coordination through competition", "venue": "arXiv preprint arXiv:1902.07151,", "year": 2019 }, { "authors": [ "Andrei Lupu", "Doina Precup" ], "title": "Gifting in multi-agent reinforcement learning", "venue": "In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2020 }, { "authors": [ "Kevin R. McKee", "Ian Gemp", "Brian McWilliams", "Edgar A. Duéñez-Guzmán", "Edward Hughes", "Joel Z. Leibo" ], "title": "Social diversity and social preferences in mixed-motive reinforcement learning", "venue": null, "year": 2002 }, { "authors": [ "Michael P Murray" ], "title": "A drunk and her dog: An illustration of cointegration and error correction", "venue": "The American Statistician,", "year": 1994 }, { "authors": [ "Roger B Myerson", "Mark A Satterthwaite" ], "title": "Efficient mechanisms for bilateral trading", "venue": "Journal of economic theory,", "year": 1983 }, { "authors": [ "Anna Nagurney", "Ding Zhang" ], "title": "Projected dynamical systems and variational inequalities with applications, volume 2", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "William Neuman", "Michael Barbaro" ], "title": "Mayor plans to close parts of broadway to traffic. NYTimes.com", "venue": null, "year": 2009 }, { "authors": [ "Noam Nisan", "Tim Roughgarden", "Eva Tardos", "Vijay V Vazirani" ], "title": "Algorithmic game theory", "venue": "Cambridge university press,", "year": 2007 }, { "authors": [ "Martin Nowak", "Karl Sigmund" ], "title": "A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoner’s dilemma", "venue": "game. 
Nature,", "year": 1993 }, { "authors": [ "Travis E Oliphant" ], "title": "A guide to NumPy, volume 1", "venue": "Trelgol Publishing USA,", "year": 2006 }, { "authors": [ "Mert Pilanci", "Laurent E Ghaoui", "Venkat Chandrasekaran" ], "title": "Recovery of sparse probability measures via convex programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Anatol Rapoport", "Albert M Chammah", "Carol J Orwant" ], "title": "Prisoner’s dilemma: A study in conflict and cooperation, volume 165", "venue": "University of Michigan press,", "year": 1965 }, { "authors": [ "Herbert Robbins" ], "title": "Some aspects of the sequential design of experiments", "venue": "Bulletin of the American Mathematical Society,", "year": 1952 }, { "authors": [ "Michael H Rothkopf" ], "title": "Thirteen reasons why the vickrey-clarke-groves process is not practical", "venue": "Operations Research,", "year": 2007 }, { "authors": [ "Tim Roughgarden" ], "title": "Intrinsic robustness of the price of anarchy", "venue": "Journal of the ACM (JACM),", "year": 2015 }, { "authors": [ "Tim Roughgarden", "Florian Schoppmann" ], "title": "Local smoothness and the price of anarchy in splittable congestion games", "venue": "Journal of Economic Theory,", "year": 2015 }, { "authors": [ "Sagar Sahasrabudhe", "Adilson E Motter" ], "title": "Rescuing ecosystems from extinction cascades through compensatory perturbations", "venue": "Nature Communications,", "year": 2011 }, { "authors": [ "Mark Allen Satterthwaite" ], "title": "Strategy-proofness and arrow’s conditions: Existence and correspondence theorems for voting procedures and social welfare functions", "venue": "Journal of economic theory,", "year": 1975 }, { "authors": [ "Florian Schäfer", "Anima Anandkumar" ], "title": "Competitive gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jonathan Sorg", "Richard L Lewis", "Satinder P Singh" ], "title": "Reward design via online gradient ascent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Richard Steinberg", "Willard I Zangwill" ], "title": "The prevalence of braess", "venue": "paradox. 
Transportation Science,", "year": 1983 }, { "authors": [ "Robert J Tibshirani", "Bradley Efron" ], "title": "An introduction to the bootstrap", "venue": "Monographs on statistics and applied probability,", "year": 1993 }, { "authors": [ "John Glen Wardrop" ], "title": "Some theoretical aspects of road traffic research", "venue": "Proceedings of the institution of civil engineers,", "year": 1952 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Daniel J Wilson" ], "title": "The harmonic mean p-value for combining dependent tests", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Dirk Witthaut", "Marc Timme" ], "title": "Braess’s paradox in oscillator networks, desynchronization and power outage", "venue": "New journal of physics,", "year": 2012 }, { "authors": [ "Jiachen Yang", "Ang Li", "Mehrdad Farajtabar", "Peter Sunehag", "Edward Hughes", "Hongyuan Zha" ], "title": "Learning to incentivize other learning agents", "venue": "arXiv preprint arXiv:2006.06051,", "year": 2020 }, { "authors": [ "Hyejin Youn", "Michael T Gastner", "Hawoong Jeong" ], "title": "Price of anarchy in transportation networks: efficiency and optimality control", "venue": "Physical review letters,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "We consider a setting composed of multiple interacting artificially intelligent agents. These agents will be instantiated by humans, corporations, or machines with specific individual incentives. However, it is well known that the interactions between individual agent goals can lead to inefficiencies at the group level, for example, in environments exhibiting social dilemmas (Braess, 1968; Hardin, 1968; Leibo et al., 2017). In order to resolve these inefficiencies, agents must reach a compromise.\nAny arbitration mechanism that leverages a central coordinator1 faces challenges when attempting to scale to large populations. The coordinator’s task becomes intractable as it must both query preferences from a larger population and make a decision accounting for the exponential growth of agent interactions. If agents or their designers are permitted to modify their incentives over time, the principal must collect all this information again, exacerbating the computational burden. A central coordinator represents a single point of failure for the system whereas one motivation for multi-agent systems research inspired by nature (e.g., humans, ants, the body, etc.) is robustness to node failures (Edelman and Gally, 2001). Therefore, we focus on decentralized approaches.\nA trivial form of decentralized compromise is to require every agent to minimize group loss (maximize welfare). Leaving the optimization problem aside, this removes inefficiency, but similar to a mechanism with a central coordinator, requires communicating all goals between all agents, an expensive step and one with real consequences for existing distributed systems like wireless sensor networks (Kulkarni et al., 2010) where transmitting a signal saps a node’s energy budget. There is also the obvious issue that this compromise may not appeal to an individual agent, especially one that is expected to trade its low-loss state for a higher average group loss. One additional, more subtle consequence of optimizing group loss is that it cannot distinguish between behaviors in environments with a group loss that is constant sum, for instance, in zero-sum games. But zero-sum games have rich structure to which we would like agents to respond. Electing a team leader (or voting on a decision) implies one candidate (decision) wins while another loses. Imagine two agents differ on their binary preference with each trying to minimize their probability of losing. A group loss is indifferent; we prefer the agents play the game (and in this, case argue their points).\nDesign Criteria: We seek an approach to compromise in multi-agent systems that applies to the setting just described. The celebrated Myerson-Satterthwaite theorem (Arrow, 1970; Satterthwaite, 1975; Green and Laffont, 1977; Myerson and Satterthwaite, 1983) states that no mechanism exists that simultaneously achieves optimal efficiency (welfare-maximizing behavior), budget-balance (no taxing agents and burning side-payments), appeals to rational individuals (individuals want to opt-in to the mechanism), and is incentive compatible (resulting behavior is a Nash equilibrium). Given\n1For example, the VCG mechanism (Clarke, 1971).\nthis impossibility result, we aim to design a mechanism that approximates weaker notions of these criteria. 
In addition, the mechanism should be decentralized, extensible to large populations, and adapt to learning agents with evolving incentives in possibly non-stationary environments.\nDesign: We formulate compromise as agents mixing their incentives with others. In other words, an agent may become incentivized to minimize a mixture of their loss and other agents’ losses. We design a decentralized meta-algorithm to search over the space of these possible mixtures.\nWe model the problem of efficiency using price of anarchy. The price of anarchy, ⇢ 2 [1,1), is a measure of inefficiency from algorithmic game theory with lower values indicating more efficient games (Nisan et al., 2007). Forcing agents to minimize a group (average) loss with a single local minimum results in a “game” with ⇢ = 1. Note that any optimal group loss solution is also Paretoefficient. Computing the price of anarchy of a game is intractable in general. Instead, we derive a differentiable upper bound on the price of anarchy that agents can optimize incrementally over time. Differentiability of the bound makes it easy to pair the proposed mechanism with, for example, deep learning agents that optimize via gradient descent (Lerer and Peysakhovich, 2017; OpenAI et al., 2019). Budget balance is achieved exactly by placing constraints on the allowable mixtures of losses. We appeal to individual rationality in three ways. One, we initialize all agents to optimize only their own losses. Two, we include penalties for agents that deviate from this state and mix their losses with others. Three, we show empirically on several domains that opting into the proposed mechanism results in better individual outcomes. We also provide specific, albeit narrow, conditions under which agents may achieve a Nash equilibrium, i.e. the mechanism is incentive compatible, and demonstrate the agents achieving a Nash equilibrium under our proposed mechanism in a traffic network problem.\nThe approach we propose divides the loss mixture coefficients among the agents to be learned individually; critically, the agents do not need to observe or directly differentiate with respect to the other agent strategies. In this work, we do not tackle the challenge of scaling communication of incentives to very large populations; we leave this to future work. Under our approach, scale can be achieved through randomly sharing incentives according to the learned mixture weights or sparse optimization over the simplex (Pilanci et al., 2012; Kyrillidis et al., 2013; Li et al., 2016).\nOur Contribution: We propose a differentiable, local estimator of game inefficiency, as measured by price of anarchy. We then present two instantiations of a single decentralized meta-algorithm, one 1st order (gradient-feedback) and one 0th order (bandit-feedback), that reduce this inefficiency. This meta-algorithm is general and can be applied to any group of individual agent learning algorithms.\nThis paper focuses on how to enable a group of agents to respond to an unknown environment and minimize overall inefficiency. Agents with distinct losses may find their incentives well aligned to the given task, however, they may instead encounter a social dilemma (Sec. 3). We also show that our approach leads to interesting behavior in scenarios where agents may need to sacrifice team reward to save an individual (Sec. F.4) or need to form parties and vote on a new team direction (Sec. 3.4). 
Ideally, one meta-algorithm would allow a multi-agent system to perform sufficiently well in all these scenarios. The approach we propose, D3C (Sec. 2), is not that meta-algorithm, but it represents a holistic effort to combine critical ingredients that we hope takes a step in the right direction.2" }, { "heading": "2 DYNAMICALLY CHANGING THE GAME", "text": "In our approach, agents may consider slight re-definitions of their original losses, thereby changing the definition of the original game. Critically, this is done in a way that conserves the original sum of losses (budget-balanced) so that the original group loss can still be measured. In this section, we derive our approach to minimizing the price of anarchy in several steps. First we formulate minimizing the price of anarchy via compromise as an optimization problem. Second we specifically consider compromise as the linear mixing of agent incentives. Next, we define a local price of anarchy and derive an upper bound that agents can differentiate. Then, we decompose this bound into a set of differentiable objectives, one for each agent. Finally, we develop a gradient estimator to minimize the agent objectives in settings with bandit feedback (e.g., RL) that enables scalable decentralization.\n2D3C is agnostic to any action or strategy semantics. We are interested in rich environments where high level actions with semantics such as “cooperation” and “defection” are not easily extracted or do not exist." }, { "heading": "2.1 NOTATION AND TRANSFORMED LOSSES", "text": "Let agent i’s loss be fi(x) : x 2 X ! R where x is the joint strategy of all agents. We denote the joint strategy at iteration t by xt when considering discrete updates and x(t) when considering continuous time dynamics. Let fA\ni (x) denote agent i’s transformed loss which mixes losses among\nagents. Let f(x) = [f1(x), . . . , fn(x)]> and fA(x) = [fA1 (x), . . . , fAn (x)]> where n 2 Z denotes the number of agents. In general, we require fA\ni (x) > 0 and P i f A i (x) = P i fi(x) so that total\nloss is conserved3; note that the agents are simply exploring the space of possible non-negative group loss decompositions. We consider transformations of the form fA(x) = A>f(x) where each agent i controls row i of A with each row constrained to the simplex, i.e. Ai 2 n 1. X ⇤ denotes the set of Nash equilibria. [a; b] = [a>, b>]> signifies row stacking of vectors." }, { "heading": "2.2 PRICE OF ANARCHY", "text": "Nisan et al. (2007) define price of anarchy as the worst value of an equilibrium divided by the best value in the game. Here, value means sum of player losses, best means lowest, and Nash is the equilibrium. It is well known that Nash can be arbitrarily bad from both an individual agent and group perspective; Appendix B presents a simple example and demonstrates how opponent shaping is not a balm for these issues (Foerster et al., 2018; Letcher et al., 2018). With the above notation, the price of anarchy, ⇢, is defined as\n⇢X (f A) def =\nmaxX⇤ P i f A i (x⇤)\nminX P i f A i (x)\n1. (1)\nNote that computing the price of anarchy precisely requires solving for both the optimal welfare and the worst case Nash equilibrium. We explain how we circumvent this issue with a local approximation in §2.4." 
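A tiny worked example may help make Eq. (1) and the loss-mixing transformation f^A = A^T f concrete. The prisoner's-dilemma loss values below are illustrative (they are not taken from the paper), and only pure-strategy equilibria are enumerated; the point is that mixing losses with a row-stochastic A leaves the total loss unchanged while collapsing the price of anarchy to 1.

```python
import itertools
import numpy as np

# A prisoner's dilemma written as positive losses (rows/cols index C, D).
L1 = np.array([[2.0, 5.0], [1.0, 4.0]])   # loss of player 1 (row chooser)
L2 = np.array([[2.0, 1.0], [5.0, 4.0]])   # loss of player 2 (column chooser)


def mix(L1, L2, A):
    """f^A = A^T f: player i's mixed loss is weighted by column i of A."""
    return A[0, 0] * L1 + A[1, 0] * L2, A[0, 1] * L1 + A[1, 1] * L2


def pure_nash(L1, L2):
    return [(i, j) for i, j in itertools.product(range(2), range(2))
            if L1[i, j] <= L1[:, j].min() and L2[i, j] <= L2[i, :].min()]


def price_of_anarchy(L1, L2):
    total = L1 + L2
    return max(total[i, j] for i, j in pure_nash(L1, L2)) / total.min()


print(price_of_anarchy(L1, L2))            # 2.0: worst Nash (D, D) vs best (C, C)
A = np.full((2, 2), 0.5)                   # fully shared, row-stochastic mixing
print(price_of_anarchy(*mix(L1, L2, A)))   # 1.0: Nash of the mixed game is optimal
```

Because the rows of A lie on the simplex, the mixed losses always sum to the original total, which is exactly the budget-balance constraint used throughout this section.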
}, { "heading": "2.3 COMPROMISE AS AN OPTIMIZATION PROBLEM", "text": "Given a game, we want to minimize the price of anarchy by perturbing the original agent losses:\nmin f 0= A(f) 1>f 0=1>f\n⇢X (f 0) + ⌫D(f ,f 0) (2)\nwhere f and f 0 = A(f) denote the vectors of original and perturbed losses respectively, A : Rn ! Rn is parameterized by weights A, ⌫ is a regularization hyperparameter, and D penalizes deviation of the perturbed losses from the originals or represents constraints through an indicator function. To ensure minimizing the price of anarchy of the perturbed game improves on the original, we incorporate the constraint that the sum of perturbed losses equals the sum of original losses. We refer to this approach as ⇢-minimization.\nOur agents reconstruct their losses using the losses of all agents as a basis. For simplicity, we consider linear transformations of their loss functions, although the theoretical bounds hereafter are independent of this simplification. We also restrict ourselves to convex combinations so that agents do not learn incentives that are directly adverse to other agents. The problem can now be reformulated. Let A(f) = A>f and D(f, f 0) = P i DKL(ei || Ai) where A 2 Rn⇥n is a right stochastic matrix (rows are non-negative and sum to 1), ei 2 Rn is a unit vector with a 1 at index i, and DKL denotes the Kullback-Liebler divergence. Note OpenAI Five (OpenAI et al., 2019) also used a linear mixing approach where the “team spirit\" mixture parameter (⌧ ) is manually annealed throughout training from 0.3 to 1.0 (i.e., Aii = 1 0.8⌧, Aij = 0.2⌧, j 6= i).\nThe A matrix is interpretable and reveals the structure of \"teams\" that evolve and develop over training. In experiments we measure relative reward attention for each agent i as ln((n 1)Aii) ln( P j 6=i Aji) to reveal how much agent i attends to their own loss versus the other agents on average (e.g., Figure 4b). This number is 0 when Aij = 1n for all i, j. Positive values indicate agent i mostly attends to its own loss. Negative values indicate agent i attends to others’ losses more than its own. We also discuss the final A in the election example in §3.4.\n3The price of anarchy assumes positive losses. This is accounted for in §2.5 to allow for losses in R." }, { "heading": "2.4 A LOCAL PRICE OF ANARCHY", "text": "The price of anarchy, ⇢ 1, is defined over the joint strategy space of all players. Computing it is intractable for general games. However, many agents learn via gradient-based training, and so only observe the portion of the strategy space explored by their learning trajectory. Hence, we imbue our agents with the ability to locally estimate the price of anarchy along this trajectory. Definition 1 (Local Price of Anarchy). Define\n⇢x(f A , t) =\nmaxX⇤ ⌧ P i f A i (x⇤)\nmin⌧2[0, t] P i f A i (x ⌧F (x)) 1 (3)\nwhere F (x) = [rx1fA1 (x); . . . ;rxnfAn (x)], t is a small step size, fAi is assumed positive 8 i, and X\n⇤ ⌧ denotes the set of equilibria of the game when constrained to the line.\nTo obtain bounds, we leverage theoretical results on smooth games, summarized as a class of games where “the externality imposed on any one player by the others is bounded” (Roughgarden, 2015). We assume a Lipschitz property on all fA\ni (x) (details in Theorem 1), which allows us to appeal to this\nclass of games. The bound in Eqn (4) is tight for some games. Proofs can be found in appendix D. Theorem 1 (Local Utilitarian Price of Anarchy). 
Assuming each agent’s loss is positive and its loss gradient is Lipschitz, there exists a learning rate t > 0 sufficiently small such that, to O( t2), the local utilitarian price of anarchy of the game, ⇢x(fA, t), is upper bounded by\nmax i\n{1 + tReLU ⇣ d\ndt log(fA i (x(t))) +\n||rxif A i (x)||2\nf A i (x)µ̄\n⌘ } (4)\nwhere i indexes each agent, µ̄ is a user defined positive scalar, ReLU(z) def = max(z, 0), and Lipschitz implies there exists a i such that ||rxif A i (x) ryif A i (y)|| i||x y|| 8x,y, A.4\nRecall that this work focuses on price of anarchy defined using total loss as the value of the game. This is a utilitarian objective. We also derive an upper bound on the local egalitarian price of anarchy where value is defined as the max loss over all agents (replace P i with maxi in Eqn (3); see §D.2)." }, { "heading": "2.5 DECENTRALIZED LEARNING OF THE LOSS MIXTURE MATRIX A", "text": "Minimizing Eqn (2) w.r.t. A can become intractable if n is large. Moreover, if solving for A at each step is the responsibility of a central authority, the system is vulnerable to this authority failing. A distributed solution is therefore appealing, and the local price of anarchy bound admits a natural decomposition over agents. Equation 2 becomes\nmin Ai2 n 1 ⇢i + ⌫DKL(ei || Ai) (5)\nwhere ⇢i = 1 + tReLU ⇣ d\ndt log(fA i (x(t))) +\n||rxif A i (x)||2\nfA i (x)µ̄\n⌘ . This objective is differentiable w.r.t.\neach Ai with gradient rAi⇢i / rAiReLU ⇣ d dt log(fA i (x(t))) + ||rxif A i (x)||2\nfA i (x)µ̄\n⌘ . The log appears\ndue to price of anarchy being defined as the worst case Nash total loss divided by the minimal total loss. We propose the following modified learning rule for a hypothetical price of anarchy which is\n4Larger i (less smooth loss) requires smaller t.\ndefined as a difference and accepts negative loss: Ai Ai ⌘Ar̃Ai⇢i where ⌘A is a learning rate and\nr̃Ai⇢i = rAiReLU ⇣ d\ndt f A i (x) + ✏\n⌘ . [✏ is a hyperparameter.] (6)\nThe update direction in (6) is proportional to rAi⇢i asymptotically for large fAi ; see §D.1.1 for further discussion. Each agent i updates xi and Ai simultaneously using rxifAi (x) and r̃Ai⇢i.\nImprove-Stay, Suffer-Shift—rAi⇢i encodes the rule: if the loss is decreasing, maintain the mixing weights, otherwise, change them. This strategy applies Win-Stay, Lose-Shift (WSLS) (Robbins, 1952) to learning (derivatives) rather than outcomes (losses). WSLS was shown to outperform Tit-for-Tat in an iterated prisoner’s dilemma (Nowak and Sigmund, 1993; Imhof et al., 2007).\nNote that the trival solution of minimizing average group loss coincides with Aij = 1n for all i, j. If the agent strategies converge to a social optimum, this is a fixed point in the augmented strategy space (x,A). This can be seen by noting that 1) convergence to an optimum impliesrxifAi (x) = 0 and 2) convergence alone implies dfi\ndt = 0 for all agents so rAi = 0 by Eqn (6) assuming ✏ = 0." }, { "heading": "2.6 DECENTRALIZED LEARNING & EXTENDING TO REINFORCEMENT LEARNING", "text": "The time derivative of each agent’s loss, d dt f A i (x(t)), in Eqn (6) requires differentiating through potentially all other agent loss functions, which precludes scaling to large populations. In addition, this derivative is not always available as a differentiable function. In order to estimate r̃Ai⇢i when only scalar estimates of ⇢i are available as in, e.g., reinforcement learning (RL), each agent perturbs their loss mixture and commits to this perturbation for a random number of training steps. 
If the loss increases over the trial, the agent updates their mixture in a direction opposite the perturbation. Otherwise, no update is performed.\nAlgorithm 1 D3C Update for RL Agent i Input: ⌘A, , ⌫, ⌧min, ⌧max, A0i , ✏, l, h, L, iterations T Ai A 0 i\n{Initialize Mixing Weights} {Draw Initial Random Mixing Trial} Ãi, ã, ⌧, tb, Gb = trial( , ⌧min, ⌧max, Ai, 0, G) G = 0 {Initialize Mean Return of Trial} for t = 0 : T do g = Li(Ãi 8 i) {Update Policy With Mixed Rewards} tb = t tb {Elapsed Trial Steps} G = (G( tb 1) + g)/ tb {Update Mean Return} if tb == ⌧ {Trial Complete} then ⇢̃i = ReLU( Gb G ⌧\n+ ✏) {Approximate ⇢} rAi = ⇢̃iã ⌫ei ◆ Ai {Estimate Gradient —(6)} Ai = softmax lblog(Ai) ⌘ArAie\nh {Update} {Draw New Random Mixing Trial} Ãi, ã, ⌧, tb, Gb = trial( , ⌧min, ⌧max, Ai, t, G)\nend if end for\nAlgorithm 2 Li—example learner Input: Ã = [Ã1; . . . ; Ãn] while episode not terminal do\ndraw action from agent policy play action and observe reward ri broadcast ri to all agents update policy with r̃i = P j Ãjirj\nend while Output: return over episode g\nAlgorithm 3 trial—helper function Input: , ⌧min, ⌧max, Ai, t, G {Sample Perturbation Direction} ãi ⇠ Usp(n) {Perturb Mixture} Ãi = softmax(log(Ai) + ãi) {Draw Random Trial Length} ⌧ ⇠ Uniform{⌧min, ⌧max} Output: Ãi, ã, ⌧, t, G\nThis is formally accomplished with approximate one-shot gradient estimates (Shalev-Shwartz et al., 2012). A one-shot gradient estimate of ⇢i(Ai) is performed by first evaluating ⇢i(log(Ai) + ãi) where is a scalar and ãi ⇠ Usp(n) is drawn uniformly from the unit sphere in Rn. Then, an unbiased gradient is given by n ⇢i(log(Ai) + ãi)ãi where Ai 2 n 1. In practice, we cannot\nevaluate in one shot the d dt f A i (x(t)) term that appears in the definition of ⇢i. Instead, Algorithm 1 uses finite differences and we assume the evaluation remains accurate enough across training steps.\nAlgorithm 1 requires arguments: ⌘A is a global learning rate for each Ai, is a perturbation scalar for the one-shot gradient estimate, ⌧min and ⌧max specify the lower and upper bounds for the duration of the mixing trial for estimating a finite difference of d\ndt f A i (x(t)), l and h specify lower and upper\nbounds for clipping A in logit space (lb·eh), and Li is a learning algorithm that takes A as input (in order to mix rewards) and outputs discounted return. ◆ indicates elementwise division." }, { "heading": "2.7 ASSESSMENT", "text": "We assess Algorithm 1 with respect to our original design criteria. As described, agents perform gradient descent on a decentralized and local upper bound on the price of anarchy. Recall that a minimal global price of anarchy (⇢ = 1) implies that even the worst case Nash equilibrium of the game is socially optimal; similarly, Algorithm 1 searches for a locally socially optimal equilibrium. By design, Ai 2 n 1 ensures the approach is budget-balancing. We justify the agents learning weight vectors Ai by initializing them to attend primarily to their own losses as in the original game. If they can minimize their original loss, then they never shift attention according to Eqn (6) because dfi\ndt 0 for all t. They only shift Ai if their loss increases. We also include a KL term to\nencourage the weights to return to their initial values. In addition, in our experiments with symmetric games, learning A helps the agents’ outcomes in the long run. We also consider experiments in Appendix E.2.2 where only a subset of agents opt into the mechanism. 
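Algorithm 1 above is easiest to follow as running code. The sketch below gives a simplified, single-agent view of the trial-based mixing-weight update (perturb the logits of A_i, run a trial of random length, and shift the weights only if the mean return fell); the class name, the placeholder returns in the demo, and the hyperparameter values are assumptions, not the authors' implementation.

```python
import numpy as np


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


class D3CMixer:
    """Single-agent sketch of the trial-based update in Algorithm 1."""

    def __init__(self, n, i, lr=0.1, delta=0.5, nu=0.0, eps=0.0, seed=0):
        self.n, self.i = n, i
        self.lr, self.delta, self.nu, self.eps = lr, delta, nu, eps
        self.rng = np.random.default_rng(seed)
        self.A = np.full(n, 0.01 / (n - 1))   # attend mostly to own loss
        self.A[i] = 0.99
        self._new_trial(baseline=0.0)

    def _new_trial(self, baseline):
        a = self.rng.normal(size=self.n)
        self.a_tilde = a / np.linalg.norm(a)            # direction on the unit sphere
        self.A_tilde = softmax(np.log(self.A) + self.delta * self.a_tilde)
        self.tau = int(self.rng.integers(10, 20))       # random trial length
        self.baseline = baseline
        self.trial_returns = []

    def mixed_weights(self):
        return self.A_tilde        # weights used to mix rewards during the trial

    def observe(self, step_return):
        self.trial_returns.append(step_return)
        if len(self.trial_returns) < self.tau:
            return
        mean_return = float(np.mean(self.trial_returns))
        # df/dt is approximated by -(mean_return - baseline) / tau: shift the
        # mixture only if the loss went up ("improve-stay, suffer-shift").
        rho = max(0.0, (self.baseline - mean_return) / self.tau + self.eps)
        e_i = np.zeros(self.n)
        e_i[self.i] = 1.0
        grad = rho * self.a_tilde - self.nu * e_i / self.A
        self.A = softmax(np.clip(np.log(self.A) - self.lr * grad, -5.0, 5.0))
        self._new_trial(baseline=mean_return)


if __name__ == "__main__":
    mixer = D3CMixer(n=3, i=0)
    for t in range(200):
        mixer.observe(step_return=float(np.sin(0.1 * t)))  # placeholder returns
    print(mixer.A.round(3))
```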
If each agent’s original loss is convex with diagonally dominant Hessian and the strategy space is unconstrained, the unique, globally stable fixed point of the game defined with mixed losses is a Nash (see Appendix H.4). Exact gradients ∇_{A_i} ρ_i require that each agent differentiate through all other agents’ losses, precluding a fully decentralized and scalable algorithm. We circumvent this issue with noisy one-shot gradients. All that is needed in terms of centralization is to share the mixed scalar rewards; this is cheap compared to sharing x_i ∈ R^d. As mentioned in the introduction, the cost of communicating rewards can be mitigated by learning A_i via sparse optimization or sampling, but this is outside the scope of this paper." }, { "heading": "3 EXPERIMENTS", "text": "Here, we show that agents minimizing local estimates of price of anarchy achieve lower loss on average than selfish, rational agents in three domains. Due to space, we leave two other domains to the appendix. In the first domain, a traffic network (4 players), players optimize using exact gradients (see Eqn (6)). Then in two RL domains, Coins and Cleanup, players optimize with approximate gradients as handled by Algorithm 1. Agents train with deep networks and A2C (Espeholt et al., 2018). We refer to both algorithms as D3C (decentralized, differentiable, dynamic compromise).\nFor D3C, we initialize A_ii = 0.99 and A_ij = 0.01/(n − 1), j ≠ i. We initialize away from a one-hot because we use entropic mirror descent (Beck and Teboulle, 2003) to update A_i, and this method requires iterates to be initialized to the interior of the simplex. In the RL domains, updates to A_i are clipped in logit space to be within l = −5 and h = 5 (see Algorithm 1). We set the D_KL coefficient to 0 except for in Coins, where ν = 10^−5. Additional hyperparameters are specified in §G. In experiments, reported price of anarchy refers to the ratio of the sum of losses of the strategy that learning converged to over that of the strategy learned by fully cooperative agents (A_ij = 1/n)." }, { "heading": "3.1 TRAFFIC NETWORKS AND BRAESS’S PARADOX", "text": "In 2009, New York City’s mayor closed Broadway near Times Square to alleviate traffic congestion (Neuman and Barbaro, 2009). This counter-intuitive phenomenon, where restricting commuter choices improves outcomes, is called Braess’s paradox (Wardrop, 1952; Beckmann et al., 1956; Braess, 1968), and has been observed in real traffic networks (Youn et al., 2008; Steinberg and Zangwill, 1983). Braess’s paradox is also found in physics (Youn et al., 2008), decentralized energy grids (Witthaut and Timme, 2012), and can cause extinction cascades in ecosystems (Sahasrabudhe and Motter, 2011). Knowing when a network may exhibit this paradox is difficult, which means knowing when network dynamics may result in poor outcomes is difficult.\nFigure 2a presents a theoretical traffic network. Without edge AB, drivers commute according to the Nash equilibrium, either learned by gradient descent or D3C. Figure 3a shows the price of anarchy approaching 1 for both algorithms. If edge AB is added, the network now exhibits Braess’s paradox. Figure 3b shows that while gradient descent converges to Nash (ρ = 80/65), D3C achieves a price of anarchy near 1. Figure 2b shows that when faced with a randomly drawn network, D3C agents achieve shorter commutes on average than agents without the ability to compromise." 
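As a concrete illustration of the kind of network in Figure 2a, the textbook Braess construction reproduces the ρ = 80/65 gap quoted above. The latency functions and the 4,000-driver population below are the standard example from the literature, not necessarily the exact instance used in the experiments.

```python
# Classic Braess network: Start->A with latency x/100, A->End fixed 45,
# Start->B fixed 45, B->End latency x/100, plus a zero-cost shortcut A->B.
# With 4000 drivers, the Nash routing with the shortcut costs 80 per driver,
# while the welfare-optimal routing (ignore the shortcut, split evenly) costs 65.
N = 4000

def cost_without_shortcut(n_top):
    """Average commute when n_top drivers take Start->A->End and the rest Start->B->End."""
    n_bot = N - n_top
    top = n_top / 100 + 45
    bot = 45 + n_bot / 100
    return (n_top * top + n_bot * bot) / N

def cost_all_take_shortcut():
    """Nash outcome with the shortcut: every driver routes Start->A->B->End."""
    return N / 100 + 0 + N / 100

optimal = cost_without_shortcut(N // 2)   # 65.0
nash = cost_all_take_shortcut()           # 80.0; deviating to either 2-edge route costs 85
print(nash, optimal, nash / optimal)      # 80.0 65.0 ~1.23 = 80/65
```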
}, { "heading": "3.2 COIN DILEMMA", "text": "In the Coins game (Eccles et al., 2019a; Lerer and Peysakhovich, 2017), two agents move on a fully-observed 5⇥ 5 gridworld, on which coins of two types corresponding to each agent randomly spawn at each time step with probability 0.005. When an agent moves into a square with a coin of either type, they get a reward of 1. When an agent picks up a coin of the other player’s type, the other agent gets a reward of 2. The episode lasts 500 steps. Total reward is maximized when each agent picks up only coins of their own type, but players are tempted to pick up all coins.\nD3C agents approach optimal cooperative returns (see Figure 4a). We compare against Metric Matching Imitation (Eccles et al., 2019b), which was previously tested on Coins and designed to exhibit reciprocal behavior towards co-players.\nFigure 4b shows D3C agents learning to cooperate, then temporarily defecting before rediscovering cooperation. Note that the relative reward attention of both players spikes towards selfish during this small defection window; agents collect more of their opponent’s coins during this time. Oscillating between cooperation and defection occurred across various hyperparameter settings. Relative reward attention trajectories between agents appear to be reciprocal, i.e., move in relative synchrony (see §H.2 for analysis)." }, { "heading": "3.3 CLEANUP", "text": "We provide additional results on Cleanup, a five-player gridworld game (Hughes et al., 2018). Agents are rewarded for eating apples, but must keep a river clean to ensure the apples receive sufficient nutrients. The option to be a freeloader and only eat apples presents a social dilemma. D3C is able to increase both welfare and individual reward over A2C (no loss sharing). We also observe that direct welfare maximization (Cooperation) always results in three agents collecting rewards from apples while two agents sacrifice themselves and clean the river. In contrast, D3C avoids this stark division of labor. Agents take turns on each task and all achieve some positive cumulative return over training." }, { "heading": "3.4 A ZERO-SUM ELECTION", "text": "Consider a hierarchical election in which two parties compete in a zero-sum game—for example, only one candidate becomes president. If, at the primary stage, candidates within one party engage in negative advertising, they hurt their chances of winning the presidential election because these ads are now out in the open. This presents a prisoner’s dilemma within each party. The goal then is for each party to solve their respective prisoner’s dilemma and come together as one team, but certainly not maximize welfare—the zero-sum game between the two parties should be retained. A simple simulation with two parties consisting of two candidates each initially participating in negative advertising converges to the desired result after running D3C.\nThe final 4⇥ 4 loss mixing matrix, A, after training 1000 steps is an approximate block matrix with 0.46 on the 2⇥ 2 block diagonal and 0.04 elsewhere. We make a duck-typing argument that when\nmultiple agents are optimizing the same loss, they are functioning as multiple components of a single agent because mathematically, there is no difference between this multi-agent system and a single agent optimization problem. This matrix then indicates that two approximate teams have formed: the first two agents captured by the upper left block and vice versa. 
Furthermore, the final eigenvalues of the game Jacobian are (1.84 ± 0.21i)×2; perfect team formation gives (2 ± 0.25i)×2. The existence of imaginary eigenvalues indicates that the zero-sum component of the game is retained. In contrast, minimizing total loss gives eigenvalues with zero imaginary part because Hessians (Jac(∇)) are symmetric." }, { "heading": "4 RELATED WORK", "text": "Our work is most similar to that of Hostallero et al. (2020). This work also provides a decentralized approach that transforms the game by modifying rewards; however, it does not guarantee “budget-balance”, nor does it derive its proposed algorithm from any principle (e.g., price of anarchy); the proposed algorithm is a heuristic supported by experiments. In other work, Lupu and Precup (2020) explore gifting rewards to agents as well, but they do so by simply expanding the action space of agents to include a gifting action. It is also not budget balanced." }, { "heading": "4.1 LEARNING LOSS FUNCTIONS", "text": "Choosing the right loss function for a given task is a historically unsolved problem. Even in single-agent settings, the designated reward function can often be suboptimal for learning (Sorg et al., 2010) or result in “reward hacking” (Amodei et al., 2016). In the multiagent setting, OpenAI Five trains on an objective that mixes single-agent and group-agent rewards (OpenAI et al., 2019). The “team spirit” mixture parameter (τ) is manually annealed throughout training from 0.3 to 1.0 (i.e., A_ii = 1 − 0.8τ, A_ij = 0.2τ, j ≠ i). Liu et al. (2019) find a team of soccer agents is better trained with agent-centric shaping rewards evolved via population-based training, a technique that also led to human-level performance in Capture the Flag (Jaderberg et al., 2019). Aguera y Arcas (2020) trains populations of simulated bacteria to maximize randomly drawn reward functions and discovers that a significant portion of the surviving populations are actually ones rewarded for dying. In contrast to the work just described, we provide a decentralized approach, devoid of a central authority, that automates the design of incentives for a multi-agent system." }, { "heading": "4.2 SOCIAL PSYCHOLOGY, NEUROSCIENCE, AND EVOLUTIONARY BIOLOGY", "text": "Loss transformation is also found in human behavior. Within social psychology, interdependence theory (Kelley and Thibaut, 1978) holds that humans make decisions based on a combination of self-interest and social preferences. In game-theoretic terms, humans deviate from rational play because they consider a transformed game rather than the original. Although rational play in the transformed game may result in lower payoff in a single round of play, groups with diverse transformations are oftentimes able to avoid poor Nash equilibria. McKee et al. (2020) mirrored this result empirically with RL agents. Neuroscience research also supports this interpretation, showing that neural processing responds to others’ losses, even if one’s own outcomes are not affected (Fukushima and Hiraki, 2009; Kang et al., 2010). The most fundamental account within evolutionary biology predicts that nature selects for individuals who care only for their own fitness. Absent other mechanisms, local selection for selfishness can drive a population to extinction (GrandePérez et al., 2005). The emergence of other-regarding preferences seems particularly important for humans. Empathy results in altruistic choices, raising the fitness of the group as a whole (Alexander and Bargia, 1978)."
}, { "heading": "5 CONCLUSION", "text": "Directly maximizing welfare can solve many social dilemmas, but it fails to draw out the rich behavior we would expect from agents in other interesting scenarios. We formulate learning incentives as a price of anarchy minimization problem and propose a decentralized, gradient-based approach, namely D3C, that incrementally adapts agent incentives to the environment at hand. We demonstrate its effectiveness on achieving near-optimal agent outcomes in socially adversarial environments. Importantly, it also generates reasonable responses where welfare maximization is indifferent." } ]
2020
null
SP:87bda29654ffe25cda14e3b27a6e4b53e2a40164
[ "The paper investigates whether languages are equally hard to Conditional-Language-Model (CLM). To do this, the authors perform controlled experiments by modeling text from parallel data from 6 typologically diverse languages. They pair the languages and perform experiments in 30 directions with Transformers, and compare 3 different unit representations: characters, bytes, and word-level (BPE). " ]
Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to “conditional-language-model”. Our goal is to improve our understanding and expectation of the relationship between language, data representation, size, and performance. We study one-to-one, bilingual conditional language modeling through a series of systematically controlled experiments with the Transformer and the 6 languages from the United Nations Parallel Corpus. We examine character, byte, and word models in 30 language directions and 5 data sizes, and observe indications suggesting a script bias on the character level, a length bias on the byte level, and a word bias that gives rise to a hierarchy in performance across languages. We also identify two types of sample-wise non-monotonicity — while word-based representations are prone to exhibit Double Descent, length can induce unstable performance across the size range studied in a novel meta phenomenon which we term erraticity. By eliminating statistically significant performance disparity on the character and byte levels by normalizing length and vocabulary in the data, we show that, in the context of computing with the Transformer, there is no complexity intrinsic to languages other than that related to their statistical attributes and that performance disparity is not a necessary condition but a byproduct of word segmentation. Our application of statistical comparisons as a fairness measure also serves as a novel rigorous method for the intrinsic evaluation of languages, resolving a decades-long debate on language complexity. While all these quantitative biases leading to disparity are mitigable through a shallower network, we find room for a human bias to be reflected upon. We hope our work helps open up new directions in the area of language and computing that would be fairer and more flexible and foster a new transdisciplinary perspective for DL-inspired scientific progress.
[]
[ { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "E.M. Bender" ], "title": "Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax", "venue": null, "year": 2013 }, { "authors": [ "Emily M. Bender" ], "title": "Linguistically naïve != language independent: Why NLP needs linguistic typology", "venue": "In Proceedings of the EACL", "year": 2009 }, { "authors": [ "Yoav Benjamini", "Ruth Heller" ], "title": "Screening for partial conjunction hypotheses", "venue": "ISSN 0006341X,", "year": 2008 }, { "authors": [ "Christian Bentz", "Tatyana Ruzsics", "Alexander Koplenig", "Tanja" ], "title": "Samardžić. A comparison between morphological complexity measures: Typological data vs. language corpora", "venue": "In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity", "year": 2016 }, { "authors": [ "Emanuele Bugliarello", "Sabrina J. Mielke", "Antonios Anastasopoulos", "Ryan Cotterell", "Naoaki Okazaki" ], "title": "It’s easier to translate out of English than into it: Measuring neural translation difficulty by cross-mutual information", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Christian Girardi", "Marcello Federico" ], "title": "Wit3: Web inventory of transcribed and translated talks", "venue": "In Proceedings of the 16 Conference of the European Association for Machine Translation (EAMT),", "year": 2012 }, { "authors": [ "Lin Chen", "Yifei Min", "Mikhail Belkin", "Amin Karbasi" ], "title": "Multiple descent: Design your own generalization curve, 2020", "venue": null, "year": 2020 }, { "authors": [ "Stanley F. Chen", "Joshua Goodman" ], "title": "An empirical study of smoothing techniques for language modeling", "venue": "Computer Speech Language,", "year": 1999 }, { "authors": [ "Tianqi Chen", "Mu Li", "Yutian Li", "Min Lin", "Naiyan Wang", "Minjie Wang", "Tianjun Xiao", "Bing Xu", "Chiyuan Zhang", "Zheng Zhang" ], "title": "Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1512.01274,", "year": 2015 }, { "authors": [ "Colin Cherry", "George Foster", "Ankur Bapna", "Orhan Firat", "Wolfgang Macherey" ], "title": "Revisiting character-based neural machine translation with capacity and compression", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Dzmitry Bahdanau", "Yoshua Bengio" ], "title": "On the properties of neural machine translation: Encoder–decoder approaches", "venue": "In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation,", "year": 2014 }, { "authors": [ "Ryan Cotterell", "Sabrina J. Mielke", "Jason Eisner", "Brian Roark" ], "title": "Are all languages equally hard to language-model? 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", "venue": null, "year": 2018 }, { "authors": [ "Rotem Dror", "Gili Baumer", "Marina Bogomolov", "Roi Reichart" ], "title": "Replicability analysis for natural language processing: Testing significance with multiple datasets", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "San Duanmu" ], "title": "Word and wordhood, modern", "venue": "In Rint Sybesma (ed.), Encyclopedia of Chinese Language and Linguistics,", "year": 2017 }, { "authors": [ "Nadir Durrani", "Fahim Dalvi", "Hassan Sajjad", "Yonatan Belinkov", "Preslav Nakov" ], "title": "One size does not fit all: Comparing NMT representations of different granularities", "venue": null, "year": 2019 }, { "authors": [ "Adam Fisch", "Jiang Guo", "Regina Barzilay" ], "title": "Working hard or hardly working: Challenges of integrating typology into neural dependency parsers", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Philip Gage" ], "title": "A new algorithm for data compression", "venue": "C Users J.,", "year": 1994 }, { "authors": [ "Yingqiang Gao", "Nikola I. Nikolov", "Yuhuang Hu", "Richard H.R. Hahnloser" ], "title": "Character-level translation with self-attention", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1591–1604,", "year": 2020 }, { "authors": [ "Martin Gellerstam" ], "title": "Translationese in Swedish novels translated from English", "venue": "Translation Studies in Scandinavia,", "year": 1986 }, { "authors": [ "Daniela Gerz", "Ivan Vulić", "Edoardo Ponti", "Jason Naradowsky", "Roi Reichart", "Anna Korhonen" ], "title": "Language modeling for morphologically rich languages: Character-aware modeling for word-level prediction", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Daniela Gerz", "Ivan Vulić", "Edoardo Maria Ponti", "Roi Reichart", "Anna Korhonen" ], "title": "On the relation between linguistic typology and (limitations of) multilingual language modeling", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Martin Haspelmath" ], "title": "The indeterminacy of word segmentation and the nature of morphology and syntax", "venue": "Folia Linguistica,", "year": 2011 }, { "authors": [ "Kenneth Heafield" ], "title": "KenLM: Faster and smaller language model queries", "venue": "In Proceedings of the Sixth Workshop on Statistical Machine Translation,", "year": 2011 }, { "authors": [ "Kenneth Heafield", "Ivan Pouzyrevsky", "Jonathan H. Clark", "Philipp Koehn" ], "title": "Scalable modified Kneser-Ney language model estimation", "venue": "In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "year": 2013 }, { "authors": [ "Felix Hieber", "Tobias Domhan", "Michael Denkowski", "David Vilar", "Artem Sokolov", "Ann Clifton", "Matt Post. The Sockeye neural machine translation toolkit at AMTA" ], "title": "In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pp", "venue": "200–207. 
Association for Machine Translation in the Americas, 2018. URL http://aclweb.org/anthology/W18-1820.", "year": 2018 }, { "authors": [ "Sture Holm" ], "title": "A simple sequentially rejective multiple test procedure", "venue": "Scandinavian Journal of Statistics,", "year": 1979 }, { "authors": [ "Mark Johnson", "Peter Anderson", "Mark Dras", "Mark Steedman" ], "title": "Predicting accuracy on large datasets from smaller pilot data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 450–455", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Melvin Johnson", "Mike Schuster", "Quoc V. Le", "Maxim Krikun", "Yonghui Wu", "Zhifeng Chen", "Nikhil Thorat", "Fernanda Viégas", "Martin Wattenberg", "Greg Corrado", "Macduff Hughes", "Jeffrey Dean" ], "title": "Google’s multilingual neural machine translation system: Enabling zero-shot translation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Pratik Joshi", "Sebastin Santy", "Amar Budhiraja", "Kalika Bali", "Monojit Choudhury" ], "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6282–6293,", "year": 2020 }, { "authors": [ "Marcin Junczys-Dowmunt", "Tomasz Dwojak", "Hieu Hoang" ], "title": "Is neural machine translation ready for deployment? A case study on 30 translation directions", "venue": "In IWSLT 2016,", "year": 2016 }, { "authors": [ "Daniel Jurafsky", "James H. Martin" ], "title": "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", "venue": null, "year": 2009 }, { "authors": [ "Reinhard Kneser", "Hermann Ney" ], "title": "Improved backing-off for m-gram language modeling", "venue": "International Conference on Acoustics, Speech, and Signal Processing,", "year": 1995 }, { "authors": [ "Philipp Koehn" ], "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "venue": "In Conference Proceedings: the tenth Machine Translation Summit,", "year": 2005 }, { "authors": [ "Philipp Koehn", "Rebecca Knowles" ], "title": "Six challenges for neural machine translation", "venue": "In Proceedings of the First Workshop on Neural Machine Translation,", "year": 2017 }, { "authors": [ "Jason Lee", "Kyunghyun Cho", "Thomas Hofmann" ], "title": "Fully character-level neural machine translation without explicit segmentation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Bo Li", "Yu Zhang", "Tara Sainath", "Yonghui Wu", "William Chan" ], "title": "Bytes are all you need: Endto-end multilingual speech recognition and synthesis with bytes", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Xiaoya Li", "Yuxian Meng", "Xiaofei Sun", "Qinghong Han", "Arianna Yuan", "Jiwei Li" ], "title": "Is word segmentation necessary for deep learning of Chinese representations", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Thomas Mayer", "Michael Cysouw" ], "title": "Creating a massively parallel Bible corpus", "venue": "In Proceedings of the Ninth International Conference on Language Resources and Evaluation", "year": 2014 }, { "authors": [ "Sabrina J. 
Mielke", "Ryan Cotterell", "Kyle Gorman", "Brian Roark", "Jason Eisner" ], "title": "What kind of language is hard to language-model", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Einat Minkov", "Kristina Toutanova", "Hisami Suzuki" ], "title": "Generating complex morphology for machine translation", "venue": "In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics,", "year": 2007 }, { "authors": [ "Kenton Murray", "David Chiang" ], "title": "Correcting length bias in neural machine translation", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Preetum Nakkiran" ], "title": "More data can hurt for linear regression: Sample-wise double descent, 2019", "venue": null, "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nikola Nikolov", "Yuhuang Hu", "Mi Xue Tan", "Richard H.R. Hahnloser" ], "title": "Character-level ChineseEnglish translation through ASCII encoding", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Franz Josef Och", "Hermann Ney" ], "title": "A systematic comparison of various statistical alignment models", "venue": "Computational Linguistics,", "year": 2003 }, { "authors": [ "M Opper", "W Kinzel", "J Kleinz", "R Nehl" ], "title": "On the ability of the optimal perceptron to generalise", "venue": "Journal of Physics A: Mathematical and General, 23(11):L581–L586,", "year": 1990 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Edoardo Maria Ponti", "Ivan Vulić", "Ryan Cotterell", "Roi Reichart", "Anna Korhonen" ], "title": "Towards zero-shot language modeling", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Edoardo Maria Ponti", "Helen O’Horan", "Yevgeni Berzak", "Ivan Vulić", "Roi Reichart", "Thierry Poibeau", "Ekaterina Shutova", "Anna Korhonen" ], "title": "Modeling language variation and universals: A survey on typological linguistics for natural language processing, 2020", "venue": null, "year": 2020 }, { "authors": [ "R R Core Team" ], "title": "A Language and Environment for Statistical Computing", "venue": "R Foundation for Statistical Computing, Vienna,", "year": 2014 }, { "authors": [ "Patrick Royston" ], "title": "A remark on algorithm as 181: The w-test for normality", "venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics),", "year": 1995 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. 
doi: 10.18653/v1/P16-1162. URL https://www.aclweb. org/anthology/P16-1162", "year": 2016 }, { "authors": [ "S.S. Shapiro", "M.B. Wilk" ], "title": "An analysis of variance test for normality (complete samples)", "venue": "Biometrika, 52(3-4):591–611,", "year": 1965 }, { "authors": [ "Pavel Sountsov", "Sunita Sarawagi" ], "title": "Length bias in encoder decoder models and a case for global conditioning", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "C. Spearman" ], "title": "The proof and measurement of association between two things", "venue": "The American Journal of Psychology,", "year": 1904 }, { "authors": [ "Felix Stahlberg", "Bill Byrne" ], "title": "On NMT search errors and model errors: Cat got your tongue", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Reut Tsarfaty", "Djamé Seddah", "Yoav Goldberg", "Sandra Kuebler", "Yannick Versley", "Marie Candito", "Jennifer Foster", "Ines Rehbein", "Lamia Tounsi" ], "title": "Statistical parsing of morphologically rich languages (SPMRL) what, how and whither", "venue": "In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages,", "year": 2010 }, { "authors": [ "Reut Tsarfaty", "Djamé Seddah", "Sandra Kübler", "Joakim Nivre" ], "title": "Parsing morphologically rich languages: Introduction to the special issue", "venue": "Computational Linguistics,", "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "B.L. Welch" ], "title": "The Generalization of ‘Student’s’ Problem when Several Different Population Variances are Involved", "venue": "Biometrika, 34(1-2):28–35,", "year": 1947 }, { "authors": [ "Frank Wilcoxon" ], "title": "Individual comparisons by ranking methods", "venue": "Biometrics Bulletin,", "year": 1945 }, { "authors": [ "Yilun Xu", "Shengjia Zhao", "Jiaming Song", "Russell Stewart", "Stefano Ermon" ], "title": "A theory of usable information under computational constraints", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Longtu Zhang", "Mamoru Komachi" ], "title": "Neural machine translation of logographic language using sub-character level information", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Wei Zhang", "Feifei Lin", "Xiaodong Wang", "Zhenshuang Liang", "Zhen Huang" ], "title": "Subcharacter ChineseEnglish neural machine translation with Wubi encoding, 2019", "venue": null, "year": 2019 }, { "authors": [ "Michał Ziemski", "Marcin Junczys-Dowmunt", "Bruno Pouliquen" ], "title": "The United Nations Parallel Corpus v1.0", "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016),", "year": 2016 }, { "authors": [ "Junczys-Dowmunt" ], "title": "2016)) for all various training sizes. 
As the implementation we used (SOCKEYE (Hieber et al., 2018)) only reports PP, we transform it back to entropy as defined above by noting that H(t, s) = log2 PP (t|s)×N", "venue": null, "year": 2018 }, { "authors": [ "e.g. Minkov" ], "title": "AR and RU are traditionally considered morphologically complex", "venue": "As pointed out by Zhang & Komachi (2018),", "year": 2007 } ]
[ { "heading": null, "text": "Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to “conditional-language-model”. Our goal is to improve our understanding and expectation of the relationship between language, data representation, size, and performance. We study one-to-one, bilingual conditional language modeling through a series of systematically controlled experiments with the Transformer and the 6 languages from the United Nations Parallel Corpus. We examine character, byte, and word models in 30 language directions and 5 data sizes, and observe indications suggesting a script bias on the character level, a length bias on the byte level, and a word bias that gives rise to a hierarchy in performance across languages. We also identify two types of sample-wise non-monotonicity — while word-based representations are prone to exhibit Double Descent, length can induce unstable performance across the size range studied in a novel meta phenomenon which we term erraticity. By eliminating statistically significant performance disparity on the character and byte levels by normalizing length and vocabulary in the data, we show that, in the context of computing with the Transformer, there is no complexity intrinsic to languages other than that related to their statistical attributes and that performance disparity is not a necessary condition but a byproduct of word segmentation. Our application of statistical comparisons as a fairness measure also serves as a novel rigorous method for the intrinsic evaluation of languages, resolving a decades-long debate on language complexity. While all these quantitative biases leading to disparity are mitigable through a shallower network, we find room for a human bias to be reflected upon. We hope our work helps open up new directions in the area of language and computing that would be fairer and more flexible and foster a new transdisciplinary perspective for DL-inspired scientific progress." }, { "heading": "1 INTRODUCTION", "text": "With a transdisciplinary approach to explore a space at the intersection of Deep Learning (DL) / Neural Networks (NNs), language sciences, and language engineering, we report our undertaking in use-inspired basic research — with an application-related phenomenon as inspiration, we seek fundamental scientific understanding through empirical experimentation. This is not an application or machine translation (MT) paper, but one that strives to evaluate and seek new insights on language in the context of DL with a consideration to contribute to our evaluation, segmentation, and model interpretation practice in multilingual Natural Language Processing (NLP).\nOur inspiration: performance disparity in MT The use case that inspired our investigation is the disparity of MT results reported in Junczys-Dowmunt et al. (2016). Of the 6 official languages of the United Nations (UN) — Arabic (AR), English (EN), Spanish (ES), French (FR), Russian (RU), and Chinese (ZH), results with target languages AR, RU, and ZH seem to be worse than those with EN/ES/FR, regardless of the algorithm, may it be from phrased-based Statistical MT (SMT/Moses\n(Koehn et al., 2007)) or Neural MT (NMT).1 The languages have the same amount of line-aligned, high-quality parallel data available for training, evaluation, and testing. 
This prompts the question: are some languages indeed harder to translate from or to?\nProblem statement: are all languages equally hard to Conditional-Language-Model (CLM)? A similar question concerning (monolingual) language modeling (LMing) was posed in Cotterell et al. (2018) and Mielke et al. (2019) along with the introduction of a method to evaluate LMs with multiway parallel corpora (multitexts) in information-theoretic terms. To explicitly focus on modeling the complexities that may or may not be intrinsic to the languages, we study the more fundamental process of CLMing without performing any translation. This allows us to eliminate confounds associated with generation and other evaluation metrics. One could think of our effort as estimating conditional probabilities with the Transformer, with a bilingual setup where perplexity of one target language (ltrg) is estimated given the parallel data in one source language (lsrc), where lsrc 6= ltrg. We focus on the very basics and examine the first step in our pipeline — input representation, holding everything else constant. Instead of measuring absolute cross-entropy scores, we evaluate the relative differences between languages from across 5 magnitudes of data sizes in 3 different representation types/levels. We consider bias to be present when performance disparity in our Transformer models is statistically significant." }, { "heading": "1.1 SUMMARY OF FINDINGS AND CONTRIBUTIONS", "text": "In investigating performance disparity as a function of size and data with respect to language and representation on the Transformer in the context of CLMing, we find:\n1. in a bilingual (one-to-one) CLMing setup, there is neutralization of source language instances, i.e. there are no statistically significant differences between source language pairs. Only pairs of target languages differ significantly (see Table 1). 2. We identify 2 types of sample-wise non-monotonicity on each of the primary representation levels we studied:\n(a) Double Descent (Belkin et al., 2019; Nakkiran et al., 2020): on the word level, for all languages, performance at 102 lines is typically better than at 103 before it improves again at 104 and beyond. This phenomenon can also be observed in character models with ZH as a target language as well as on the word level with non-neural n-gram LMs; (b) erraticity: performance is irregular and exhibits great variance across runs. We find sequence length to be predictive of this phenomenon. We show that this can be rectified by data transformation or hyperparameter tuning. In our study, erraticity affects AR and RU on the byte level where the sequences are too long with UTF-8 encoding and ZH when decomposed into strokes on the character level. 3. In eliminating performance disparity through lossless data transformation on the character and byte levels, we resolve language complexity (§ 4 and App. J). We show that, in the context of computing with the Transformer, unless word-based methods are used, there is no linguistic/morphological complexity applicable or necessary. There is no complexity that is intrinsic to a language aside from its statistical properties. Hardness in modeling is relative to and bounded by its representation level (representation relativity). On the character and byte levels, hardness is correlated with statistical properties concerning sequence length and vocabulary of a language, irrespective of its linguistic typological, phylogenetic, historical, or geographical profile, and can be eliminated. 
On the word level, hardness is correlated with vocabulary, and a complexity hierarchy arises through the manual preprocessing step of word tokenization. This complexity/disparity effected by word segmentation cannot be eliminated due to the fundamental qualitative differences in the definition of a “word” being one that neither holds universally nor is suitable/consistent for fair crosslinguistic comparisons. We find clarification of this expectation of disparity necessary because more diligent error analyses need to be afforded instead of simply accepting massively disparate results or inappropriately attributing under-performance to linguistic reasons. 4. Representational units of finer granularity can help close the gap in performance disparity. 5. Bigger/overparameterized models can magnify/exacerbate the effects of differences in data\nstatistics. Quantitative biases that lead to disparity are mitigable through numerical methods.\n1We provide a re-visualization of these grouped in 6 facets by target language in Figure 4 in Appendix A.\nOutline of the paper In § 2, we define our method and experimental setup. We present our results and analyses on the primary representations in § 3 and those from secondary set of controls in § 4 in a progressive manner to ease understanding. Meta analyses on fairness evaluation, non-monotonic behavior, and discussion on biases are in § 5. Additional related work is in § 6. We refer our readers to the Appendices for more detailed descriptions/discussions and reports on supplementary experiments." }, { "heading": "2 METHOD AND DEFINITIONS", "text": "Controlled experiments as basic research for scientific understanding Using the United Nations Parallel Corpus (Ziemski et al., 2016), the data from which the MT results in Junczys-Dowmunt et al. (2016) stem, we perform a series of controlled experiments on the Transformer, holding the hyperparameter settings for all 30 one-to-one language directions from the 6 languages constant. We control for size (from 102 to 106 lines) and language with respect to representational granularity. We examine 3 primary representation types — character, byte (UTF-8), and word, and upon encountering some unusual phenomena, we perform a secondary set of controls with 5 alternate representations — on the character level: Pinyin and Wubi (ASCII representations for ZH phones and character strokes, respectively), on the byte level: code page 1256 (for AR) and code page 1251 (for RU), and on the word level: Byte Pair Encoding (BPE) (Sennrich et al., 2016), an adapted compression algorithm from Gage (1994). These symbolic variants allow us to manipulate the statistical properties of the representations, while staying as “faithful” to the language as possible. We adopt this symbolic data-centric approach because we would like to more directly interpret the confounds, if any, that make language data different from other data types. We operate on a smaller data size range as this is more common in traditional domain sciences and one of our higher goals is to bridge an understanding between language sciences and engineering (the latter being the dominant focus in NLP). We run statistical tests to identify the strongest correlates of performance and to assess whether the differences between the mean performance of different groups are indeed significant. 
We are concerned not with the absolute scores, but with the relations between scores from different languages and the generalizations derived therefrom.\nInformation-theoretic, fair evaluation with multitexts Most sequence-to-sequence models are optimized using a cross-entropy loss (see Appendix B for definition). Cotterell et al. (2018) propose to use “renormalized” perplexity (PP) to evaluate LMs fairly using the total number of bits divided by some constant. In our case, we choose instead a simpler method of using an “unnormalized” PP, directly using the total number of bits needed to encode the development (dev) set, which has a constant size of 3,077 lines per language.\nDisparity/Inequality In the context of our CLMing experiments, we consider there to be “disparity” or “inequality” between languages l1 and l2 if there are significant differences between the performance distributions of these two languages with respect to each representation. Here, by performance we mean the number of bits required to encode the held-out data using a trained CLM. With 30 directions, there are 15 pairs of source languages (lsrc1, lsrc2) and 15 pairs of target languages (ltrg1, ltrg2) possible. To assess whether the differences are significant, we perform unpaired two-sided significance tests with the null hypothesis that the score distributions for the two languages are not different. Upon testing for normality with the Shapiro-Wilk test (Shapiro & Wilk, 1965; Royston, 1995), we use the parametric unpaired two-sample Welch’s t-test (Welch, 1947) (when normal) or the non-parametric unpaired Wilcoxon test (Wilcoxon, 1945) (when not normal) for the comparisons. We use the implementation in R (R Core Team, 2014) for these 3 tests. To account for the multiple comparisons we are performing, we correct all p-values using Bonferroni’s correction (Benjamini & Heller, 2008; Dror et al., 2017) and follow Holm’s procedure2 (Holm, 1979; Dror et al., 2017) to identify the pairs of l1 and l2 with significant differences after correction. We report all 3 levels of significance (α ≤ 0.05, 0.01, 0.001) for a more comprehensive evaluation.\nExperimental setup The systematic, identical treatment we give to our data is described as follows with further preprocessing and hyperparameter details in Appendices B and C, respectively. The distinctive point of our experiment is that the training regime is the same for all (intuition in App. O.1).\n2using implementation from https://github.com/rtmdrr/replicability-analysis-NLP\nAfter filtering length to 300 characters maximum per line in parallel for the 6 languages, we made 3 subsets of the data with 1 million lines each — one having lines in the order of the original corpus (dataset A) and two other randomly sampled (without replacement) from the full corpus (datasets B & C). Lines in all datasets are extracted in parallel and remain fully aligned for the 6 languages. For each run and each representation, there are 30 pairwise directions (i.e. one lsrc to one ltrg) that result from the 6 languages. We trained all 150 (for 5 sizes) 6-layer Transformer models for each run using the SOCKEYE Toolkit (Hieber et al., 2018). We optimize using PP and use early stopping if no PP improvement occurs after 3 checkpoints up to 50 epochs maximum, taking the best checkpoint. Characters and bytes are supposed to mitigate the out-of-vocabulary (OOV) problem on the word level. 
In order to assess the effect of modeling with finer granularity more precisely, all vocabulary items appearing once in the train set are accounted for (i.e. full vocabulary on train, as in Gerz et al. (2018a;b)). But we allow our system to categorize all unknown items in the dev set to be unknown (UNK) so to measure OOVs (open vocabulary on dev (Jurafsky & Martin, 2009)). To identify correlates of performance, we perform Spearman’s correlation (Spearman, 1904) with some basic statistical properties of the data (e.g. length, vocabulary size (|V |), type-token-ratio, OOV rate) as metrics — a complete list thereof is provided in Appendix F. For each of the 3 primary representations — character, byte, and word, we performed 5 runs total in 5 sizes (102-106 lines) (runs A0, B0, C0, A1, & A2) and 7 more runs in 4 sizes (102-105 lines) (A3-7, B1, & C1), also controlling for seeds. For the alternate/secondary representations, we ran 3 runs each in 5 sizes (102-106 lines) (A0, B0, & C0)." }, { "heading": "3 EXPERIMENTAL RESULTS OF PRIMARY REPRESENTATIONS", "text": "Subfigures 1a, 1b, and 1c present the mean results across 12 runs of the 3 primary representations — character, byte, and word, respectively. The x-axis represents data size in number of lines and y-axis the total conditional cross-entropy, measured in bits (Eq. 1 in Appendix B). Each line connects 5 data points corresponding to the number of bits the CLMs (trained with training data of 102, 103, 104, 105, and 106 lines) need to encode the target language dev set given the corresponding text in the source language. These are the same data in the same 30 language directions and 5 sizes with the same training regime, just preprocessed/segmented differently. This confirms representation relativity — languages (or any objects being modeled) need to be evaluated relative to their representation. “One size does not fit all” (Durrani et al., 2019), our conventional way of referring to “language” (as a socio-cultural product or with traditional word-based approaches, or even for most multilingual tasks and competitions) is too coarse-grained (see also Fisch et al. (2019) and Ponti et al. (2020)).\nSubfigures 1d, 1e, and 1f display the corresponding information sorted into facets by target language, source languages represented as line types. Through these we see more clearly that results can be grouped rather neatly by target language (cf. figures sorted by source language in Appendix H) — as implicit in the Transformer’s architecture, the decoder is unaware of the source language in the encoder. As shown in Table 1 in § 5 summarizing the number of source and target language pairs with significant differences, there are no significant differences across any source language pairs. The Transformer neutralizes source language instances. This could explain why transfer learning or multilingual/zero-shot translation (Johnson et al., 2017) is possible at all on a conceptual level.\nIn general, for character and byte models, most language directions do seem to converge at 104 lines to similar values across all target languages, with few notable exceptions. There are some fluctuations past 104, indicating further tuning of hyperparameters would be beneficial due to our present setting possibly working most favorably at 104. On the character level, target language ZH (ZHtrg) shows a different learning pattern throughout. And on the byte level, ARtrg and RUtrg display non-monotonic and unstable behavior, which we refer to as erratic. 
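To make the disparity test from § 2 concrete, the following is a minimal sketch (not the authors' code) of the pairwise comparison: given per-run total-bit scores for two target languages, it checks normality with Shapiro–Wilk, applies Welch's unpaired t-test or an unpaired two-sided rank test accordingly, and corrects the p-values following Holm's procedure. The score arrays are placeholders, and the scipy/statsmodels calls (including mannwhitneyu standing in for the unpaired Wilcoxon rank-sum test in R) are assumptions for illustration.

```python
# Minimal sketch of the pairwise fairness/disparity test described in Section 2.
# `scores` maps a target language to an array of total-bit results (one per run);
# the values below are placeholders, not experimental results.
from itertools import combinations

import numpy as np
from scipy.stats import shapiro, ttest_ind, mannwhitneyu
from statsmodels.stats.multitest import multipletests

scores = {
    "AR": np.array([2.10e6, 2.08e6, 2.12e6]),   # hypothetical total bits per run
    "EN": np.array([1.65e6, 1.66e6, 1.64e6]),
    "ZH": np.array([1.70e6, 1.69e6, 1.71e6]),
}

pairs, pvals = [], []
for l1, l2 in combinations(sorted(scores), 2):
    a, b = scores[l1], scores[l2]
    # Welch's t-test if both samples look normal, otherwise a non-parametric
    # unpaired two-sided rank test (Wilcoxon rank-sum / Mann-Whitney U).
    normal = shapiro(a).pvalue > 0.05 and shapiro(b).pvalue > 0.05
    if normal:
        p = ttest_ind(a, b, equal_var=False).pvalue
    else:
        p = mannwhitneyu(a, b, alternative="two-sided").pvalue
    pairs.append((l1, l2))
    pvals.append(p)

# Holm-Bonferroni correction over all pairwise comparisons, at each alpha level.
for alpha in (0.05, 0.01, 0.001):
    reject, corrected, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    for (l1, l2), r, p in zip(pairs, reject, corrected):
        if r:
            print(f"alpha={alpha}: {l1} vs {l2} differ (corrected p={p:.3g})")
```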
Word models exhibit Double Descent across the board (note the spike at 10^3), but overall, difficult/easy languages stay consistent, with AR and RU being the hardest, followed by ES and FR, then EN and ZH. A practical takeaway from this set of experiments: in order to obtain more robust training results, use bytes for ZH (as suggested in Li et al. (2019a)) and characters for AR and RU (e.g. Lee et al. (2017)), also if one wanted to avoid any “class” problems in performance disparity with words. Performance disparity for these representations is reported in Table 1 under “CHAR”, “BYTE”, and “WORD”. Do note, however, that the intrinsic performance of ZH with word segmentation is not particularly subpar. But this often does not correlate with its poorer downstream task results (recall results from Junczys-Dowmunt et al. (2016)). Since the notion of word in ZH is highly contested and ambiguous, in that 1) it is often aimed to align with that in other languages so as to accommodate manual feature engineering and academic theories, 2) there is great variation among different conventions, and 3) native ZH speakers identify characters as words, there are reasons to rethink this procedure now that fairer and language-independent processing in finer granularity is possible (cf. Li et al. (2019b) as well as Duanmu (2017) for a summary on the contested nature of wordhood in ZH). A more native analysis of ZH, despite it being considered a high-resource language, has not yet been recognized in NLP." }, { "heading": "4 UNDERSTANDING THE PHENOMENA WITH ALTERNATE REPRESENTATIONS", "text": "To understand why some languages show different results than others, we carried out a secondary set of control experiments with representations targeting the problematic statistical properties of the corresponding target languages. (An extended version of this section is provided in Appendix P.)\n\nCharacter level We reduced the high |V| in ZH with representations in ASCII characters, Pinyin and Wubi. The former is a romanization of ZH characters based on their pronunciations and the latter an input algorithm that decomposes character-internal information into stroke shape and ordering and matches these to 5 classes of radicals (Lunde, 2008). We replaced the ZH data in these formats only on the target side and reran the experiments involving ZH_trg on the character level. Results in Figure 2 and Table 1 show that the elimination of disparity on the character level is possible if ZH is represented through Pinyin (transliteration), as in Subfigure 2c.\n[Figure 2: character-level remedies for target ZH; panels (a) Wubi, (b) Wubi by target, (c) Pinyin, (d) Pinyin by target; axes: number of lines vs. number of bits, grouped by source (SRC) and target (TRG) language.]\n[Figure 3: Byte-level (Subfigures 3a & 3b) remedies with code page 1256 for target AR and 1251 for target RU, and word-level (Subfigures 3c & 3d) remedy with BPE for all languages; panels (a) Code page 1256 & 1251, (b) Code page by target, (c) BPE, (d) BPE by target.]
But models with ZH logographic scripts display a behavioral tendency unlike those with the other (phonetic) alphabetic scripts (Subfigure 2a). Work published thus far using Wubi with the Transformer seems to have needed some form of architectural modification (Gao et al., 2020) or a different architecture altogether (Nikolov et al., 2018; Zhang et al., 2019), suggesting a possible script bias (to be further discussed in § 5 under “Bases for biases”).\nByte level Length is the most salient statistical attribute that makes AR and RU outliers. To shorten their sequence lengths, we tested alternate encodings on AR_trg and RU_trg: code page 1256 and code page 1251, which provide 1-byte encodings specific to AR and RU, respectively. Results are shown in Subfigures 3a and 3b. Not only is erraticity resolved; the number of the 15 possible target language pairs with significant differences also reduces from 8 with the UTF-8 byte representation to 0 (Table 1 under “ARRU_t”), indicating that we eliminated disparity with this optimization heuristic. Since our heuristic is a lossless and reversible transform, it shows that a complexity that is intrinsic and necessary in language3 does not exist in computing, however diverse the languages may be, as our 6 are, from the conventional linguistic typological, phylogenetic, historical, or geographical perspectives. Please refer to Appendix J for our discussion on language complexity.\n3Aside from its statistical properties related to length and vocabulary. “Language” here refers to language represented through all representations.\nWord level The main difference between word and character/byte models is that length is not a top contributing factor correlating with performance; instead, |V| is. This is understandable, as word segmentation neutralizes sequence lengths. To remedy the OOV problem, we use BPE, which learns a fixed vocabulary of variable-length character sequences (on the word level, as it presupposes word segmentation) from the training data. It is more fine-grained than word segmentation and is known for its capability to model subword units for morphologically complex languages (e.g. AR and RU). We use the same vocabulary of 30,000 as specified in Junczys-Dowmunt et al. (2016). This reduced our averaged OOV token rate by 89-100% across the 5 sizes. The number of language pairs with significant differences reduced to 7 from 8 for word models, showing how finer-grained modeling has a positive effect on closing the disparity gap." }, { "heading": "5 META-RESULTS, ANALYSIS, AND DISCUSSION", "text": "Performance disparity Table 1 lists the number of language pairs with significant differences under the representations studied. Considering how it is possible for our character and byte models to effect no performance disparity for the same languages on the same data, this indicates that disparity is not a necessary condition. In fact, the customary expectation that languages ought to perform differently stems from our word segmentation practice. Furthermore, the order of AR/RU > ES/FR > EN/ZH (Figure 1c) resembles the idea of morphological complexity. Considering there are character-internal meaningful units in languages with logographic script such as ZH (cf. Zhang & Komachi (2018)) that are rarely captured, studied, or referred to as “morphemes”, this goes to show that linguistic morphology, along with its complexity, as it is practiced today4 and as it has occurred in the NLP discourse thus far, has only been relevant on and is bounded to the “word” level. 
The definition of word, however, has been recognized as problematic for a very long time in the language sciences (see Haspelmath (2011) and references therein from the past century). Since the conventional notion of word, which has been centered on English and languages with alphabetic scripts, has a negative impact on languages both morphologically rich (see Minkov et al. (2007), Seddah et al. (2010), inter alia), AR and RU in our case, as well as morphologically “frugal” (Koehn, 2005), as in ZH, finer-grained modeling with characters and bytes (or n-gram variants/pieces thereof) is indeed a more sensible option and enables a greater variety of languages to be handled with more simplicity, fairness, independence, and flexibility.\nWhile the lack of significant differences between pairs of source languages would signify neutralization of source language instances, it does not mean that source languages have no effect on the target. For our byte solutions with code pages, we also experimented with source-side optimization in the directions that involve AR/RU as source. This affected the distribution of the disparity results for that representation, with 2 pairs being significantly different (see Table 1 under “ARRU_s,t”). We defer further investigation on the nature of source language neutralization to future work.\n4But there are no reasons why linguistics or linguistic typology cannot encompass a statistical science of language beyond/without “words”, or with continuous representations of characters and bytes. In fact, that could complement the needs of language engineering and the NNs/DL/ML communities better.\nSample-wise Double Descent (DD) Sample-wise non-monotonicity/DD (Nakkiran et al., 2020) denotes a degradation followed by an improvement in performance with increasing data size. We notice that word models and character models with ZH_trg, i.e. models with a high target |V|, are prone to exhibit a spike at 10^3. A common pattern for these is that the ratio of target training token count to number of parameters falls into O(10^−4) for 10^2 lines, O(10^−3) at 10^3, O(10^−2) at 10^4, and O(10^−1) for 10^5 lines and so on. But for more atomic units such as alphabetic (not logographic) characters (be it Latin, Cyrillic, or Abjad) and for bytes, this progression instead begins at O(10^−3) at 10^2 lines. Instead of thinking of this spike at 10^3 as irregular, we may instead want to think of this learning curve as shifted by 1 order of magnitude to the right for characters and bytes, and/or of the performance at 10^2 lines for words and ZH characters as abnormal due to being overparameterized. This would also fit in with the findings by Belkin et al. (2019) and Nakkiran et al. (2020) attributing DD to overparameterization. If we could use this ratio and the logic of higher |V| to automatically detect “non-atomic” units, ones that can be further decomposed, this observation could potentially be beneficial for advancing other sciences, e.g. biology. From a cognitive modeling perspective, the similarity in behavior of ZH characters and words of other languages can affirm the interpretation of wordhood for those ZH speakers who identify ZH characters as words (see also the last paragraph in § 3 and Appendix J). While almost all work attributes DD to algorithmic reasons, concurrent work by Chen et al. (2020) corroborates our observation and confirms that DD arises due to “the interaction between the properties of the data and the inductive biases of learning algorithms”. 
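To make the order-of-magnitude pattern above easier to reproduce, here is a tiny sketch. The parameter count and tokens-per-line figures are hypothetical placeholders (the actual values are reported in Appendices L and D), chosen only to show how the ratio shifts with data size and unit granularity.

```python
# Order-of-magnitude check of the (target training tokens) / (model parameters) ratio
# that the text associates with the Double Descent spike. All numbers are illustrative.
from math import floor, log10

params = 5e6                                   # hypothetical parameter count
tokens_per_line = {"word": 25, "char": 120}    # assumed average target tokens per line

for unit, per_line in tokens_per_line.items():
    for lines in (1e2, 1e3, 1e4, 1e5):
        ratio = lines * per_line / params
        print(f"{unit:>4} {int(lines):>7} lines -> ratio ~ 1e{floor(log10(ratio))}")
```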
Other related work on DD and its more recent development can also be found in their work.\nWe performed additional experiments testing our setting on the datasets used by Nakkiran et al. (2020) and testing our data on a non-neural LM. Results support our findings and are provided in Appendix K. The number of model parameters can be found in Appendix L.\nErraticity We observe another type of sample-wise non-monotonicity, one that signals irregular and unstable performance across data sizes and runs. Within one run, erraticity can be observed directly as changes in direction on the y-axis. Across runs, large variance can be observed, even with the same dataset (see Figure 18 in Appendix M). Erraticity can also be observed indirectly through a negative correlation between data size and performance. Much work on length bias in NMT has focused on solutions related to search, e.g. Murray & Chiang (2018). Our experiments show that a kind of length bias can surface already with CLMing, without generation taking place. If the connection between erraticity and length bias can indeed be drawn, it could strengthen the case for global conditioning (Sountsov & Sarawagi, 2016). (See Appendix M for more discussion and results.)\nScript bias, erraticity, word bias — are these necessary conditions? To assess whether the observed phenomena are particular to this one setting, we performed one run with dataset A in 4 sizes with the primary representations on 1-layer Transformers (see Appendix N). We observed no significant disparity across the board. This shows that larger/overparameterized models can magnify/exacerbate the differences in the data statistics. That hyperparameter tuning — in this case, through the reduction of the number of layers — can mitigate effects from data statistics is, to the best of our knowledge, a novel insight, suggesting also that a general expectation of monotonic development as data size increases can indeed be held. Our other findings remain consistent (representational relativity, source language neutralization, and DD on the word level).\nBases for biases Recall that in § 1 we “consider bias to be present when performance disparity in our Transformer models is statistically significant”. As shown in our data statistics and analysis (Appendices D and P, respectively), script bias, length bias with respect to erraticity in CLMing, and word bias are all evident in the vocabulary and length information in the data statistics. Hence these disparities in performance are really a result of the Transformer being able to model these differences in the data at such a magnitude that the differences are statistically significant. The meta phenomenon of erraticity, however, warrants an additional consideration indicative of the empirical limits of our compute (cf. Xu et al. (2020)), even when the non-monotonicity is not observed during the training of each model.\nIn eliminating performance disparity in character and byte models by normalizing vocabulary and length statistics in the data, we demonstrated that the performance disparity expected from the morphological complexity hierarchy is due to word tokenization, not intrinsic or necessary in language. This is the word bias. Qualitative issues in the concept of word will persist and make crosslinguistic comparison involving “words” unfair even if one were able to find a quantitative solution to mitigate the OOV issue, the bottleneck in word-based processing. We humans have a choice in how we see/process languages.
That some might still prefer to continue with a crosslinguistic comparison with “words” and assert the superiority of “word” tokenization speaks for a view that is centered on “privileged” languages — in that case, word bias is a human bias.\nAnd, in eliminating performance disparity across the board with our one-layer models, we show that all quantitative differences in data statistics between languages can also be modeled in a “zoomed-out”/“desensitized” mode, suggesting that while languages can be perceived as being fundamentally different in different ways and in different granularities, they can also be viewed as fundamentally similar." }, { "heading": "6 ADDITIONAL RELATED WORK", "text": "Similar to our work in testing for hardness are Cotterell et al. (2018), Mielke et al. (2019), and Bugliarello et al. (2020). The first two studied (monolingual) LMs — the former tested on the Europarl languages (Koehn, 2005) with n-gram and character models and concluded that morphological complexity was the culprit for hardness; the latter additionally studied 62 languages of the Bible corpus (Mayer & Cysouw, 2014) and refuted the relevance of linguistic features in hardness based on character and BPE models on both corpora in word-tokenized form. Bugliarello et al. (2020) compared translation results of the Europarl languages with BPEs at one data size and concluded that it is easier to translate out of EN than into it; statistical significance was, however, not assessed. In contrast, we ablated away the confound of generation and studied CLMing with controls with a broader range of languages with more diverse statistical profiles, in 3 granularities and up to 5 orders of magnitude in data size. That basic data statistics are the driver of success in performance in multilingual modeling has so far only been explicitly argued for in Mielke et al. (2019). We go beyond their work on monolingual LMs to study CLMs and evaluate also in relation to data size, representational granularity, and quantitative and qualitative fairness.\nBender (2009) advocated the relevance of linguistic typology for the design of language-independent NLP systems based on crosslinguistic differences in word-based structural notions, such as parts of speech. Ponti et al. (2019) found typological information to be beneficial in the few-shot setting on the character level for 77 languages with Latin scripts. But no multilingual work has thus far explicitly examined the relation between linguistic typology and the statistical properties of the data, involving languages with diverse statistical profiles in different granularities.\nAs obtaining training data is often the most difficult part of an NLP or Machine Learning (ML) project, Johnson et al. (2018) introduced an extrapolation methodology to directly model the relation between data size and performance. Our work can be viewed as one preliminary step towards this goal. To the best of our knowledge, there has been no prior work demonstrating the neutralization of source language instances through statistical comparisons, a numerical analysis of DD for sequence-to-sequence models, the meta phenomenon of a sample-wise non-monotonicity (erraticity) being related to length, or the connection between effects of data statistics and modification in architectural depth." }, { "heading": "7 CONCLUSION", "text": "Summary We performed a novel, rigorous relational assessment of performance disparity across different languages, representations, and data sizes in CLMing with the Transformer.
Different disparity patterns were observed on different representation types (character, byte, and word), which can be traced back to the data statistics. The disparity pattern reflected on the word level corresponds to the morphological complexity hierarchy, reminding us that the definition of morphology is predicated on the notion of word and indicating how morphological complexity can be modeled by the Transformer simply through word segmentation. As we were able to eliminate disparity on the same data on the character and byte levels by normalizing length and vocabulary, we showed that morphological complexity is not a necessary concept but one that results from word segmentation and is bounded to the word level, orthogonal to the performance of character or byte models. Representational units of finer granularity were shown to help eliminate performance disparity, though at the cost of longer sequence lengths, which can have a negative impact on robustness. In addition, we found all word models and character models with ZHtrg to behave similarly in their being prone to exhibit a peak (as sample-wise DD) around 10^3 lines in our setting. While bigger/overparameterized models can magnify the effect of data statistics, exacerbating the disparity, we found that a decrease in model depth can eliminate these quantitative biases, leaving only the qualitative aspect of “word” and the necessity of word segmentation in question.\nOutlook Machine learning has enabled greater diversity in NLP (Joshi et al., 2020). Fairness, in the elimination of disparity, does not require big data. This paper made a pioneering attempt to bridge research in DL/NNs, language sciences, and language engineering through a data-centric perspective.\nWe believe a statistical science for NLP as a data science can well complement algorithmic analyses with an empirical view contributing to a more generalizable pool of knowledge for NNs/DL/ML. A more comprehensive study not only can lead us to new scientific frontiers, but also to better design and evaluation, benefitting the development of a more general, diverse and inclusive Artificial Intelligence." }, { "heading": "B DATA SELECTION AND PREPROCESSING DETAILS", "text": "" }, { "heading": "C HYPERPARAMETER SETTING", "text": "" }, { "heading": "D DATA STATISTICS", "text": "" }, { "heading": "E SCORE TABLES", "text": "" }, { "heading": "F CORRELATION STATISTICS", "text": "" }, { "heading": "G ENLARGED FIGURES FOR ALL 30 LANGUAGE DIRECTIONS (AGGREGATE RESULTS FROM ALL RUNS)", "text": "" }, { "heading": "H SAMPLE FIGURES FROM RUN A0, ALSO SORTED BY SOURCE LANGUAGE FOR CONTRAST", "text": "" }, { "heading": "I LANGUAGE PAIRS WITH SIGNIFICANT DIFFERENCES", "text": "" }, { "heading": "J LANGUAGE COMPLEXITY", "text": "" }, { "heading": "K SAMPLE-WISE DOUBLE DESCENT (DD)", "text": "K.1 OUR EXPERIMENTAL FRAMEWORK ON DD DATASETS FROM (NAKKIRAN ET AL., 2020) K.2 TOKEN-TO-PARAMETER RATIO FOR NON-NEURAL MONOLINGUAL LMS" }, { "heading": "L NUMBER OF MODEL PARAMETERS", "text": "" }, { "heading": "M ERRATICITY", "text": "M.1 ERRATICITY AS LARGE VARIANCE: EVIDENCE FROM DIFFERENT RUNS OF THE SAME DATA M.2 ADDITIONAL EXPERIMENT WITH LENGTH FILTERING TO 300 BYTES" }, { "heading": "N EXPERIMENTS WITH ONE-LAYER TRANSFORMER", "text": "" }, { "heading": "O PAQS (PREVIOUSLY ASKED QUESTIONS)", "text": "O.1 ONE SETTING FOR ALL O.2 TRANSLATIONESE / WORD ORDER P UNDERSTANDING THE PHENOMENA WITH ALTERNATE REPRESENTATIONS (EXTENDED VERSION)" }
, { "heading": "A RE-VISUALIZATION OF FIGURE 1 IN JUNCZYS-DOWMUNT ET AL. (2016) IN 6 FACETS BY TARGET LANGUAGE", "text": "" }, { "heading": "B DATA SELECTION AND PREPROCESSING DETAILS", "text": "The UN Parallel Corpus v1.0 (Ziemski et al., 2016) consists of manually translated UN documents from 1990 to 2014 in the 6 official UN languages. Therein is a subcorpus that is fully aligned by line, comprising the 6-way parallel corpus we use. We tried to have as little preprocessing or filtering as necessary to eliminate possible confounds. But as the initial runs of our experiment failed due to insufficient memory on a single GPU with 12 GB VRAM5, we filtered out lines with more than 300 characters in any language, in lockstep with one another for all 6 languages such that the subcorpora would remain parallel, thereby keeping the material of each language semantically equivalent to one another. 8,944,859 lines for each language were retained as our training data, which cover up to the 75th percentile in line length for all 6 languages. In order to monitor the effect of data size, we made subcorpora of each language in 5 sizes by heading the first 10^2, 10^3, 10^4, 10^5, and 10^6 lines6. We refer to this as dataset A. In addition, to better understand and verify the consistency of the phenomena observed, we made 2 supplemental datasets by shuffling the 8,944,859 lines two different times randomly and heading the number of lines in our 5 sizes for each language, again in lockstep with one another (datasets B and C).\n5GPUs used for experiments in this paper range from an NVIDIA TITAN RTX (24 GB), an NVIDIA GeForce RTX 2080 Ti (11 GB), and a GTX Titan X (12 GB) to a GTX 1080 (8 GB). All jobs were run in a single-GPU setting. Some word-level experiments involving ARtrg or RUtrg at 10^6 had to be run on a CPU as 24 GB VRAM were not sufficient. Models with higher maximum sequence lengths (e.g. byte models) were trained with 24 GB VRAM. Difference in equipment does not necessarily lead to degradation/improvement in scores.\n6The terms “line” and “sentence” have been used interchangeably in the NLP literature. We use “line” to denote a sequence that ends with a newline character and “sentence” as one with an ending punctuation. Most parallel corpora, such as ours, are aligned by line, as a line may be part of a sentence or without an ending punctuation (e.g. a header/title). Using a standardized unit such as “line” would also be a fairer measure for linguae/scriptiones continuae (languages/scripts with no explicit punctuation).\nFor character modeling, we used a dummy symbol to denote each whitespace. For byte, we turned each UTF-8-encoded character into a byte string in decimal value, such that each token is a number between 0 and 255, inclusive. For word, we followed Junczys-Dowmunt et al. (2016) and used the Moses tokenizer (Koehn et al., 2007), as is standard in NMT practice when word tokenization is applied, and Jieba7 for segmentation in ZH.\nFor Pinyin, we used the implementation from https://github.com/lxyu/pinyin in the numerical format such that each character/syllable is followed by a single digit indicating its lexical tone in Mandarin.
For Wubi, we used the dictionary from the implementation from https://github.com/arcsecw/wubi.\nWe have implemented all representations such that they would be reversible even when the sequence contains code-mixing.\nWe used the official dev set as provided in Ziemski et al. (2016); 3,077 lines per language remained from 4,000 after filtering line length to 300 characters. Data statistics are provided in Appendix D for reference.\nThe systematic training regime that we give to our language directions is identical for all. For each primary representation type (character, byte, and word), we performed:\n• 5 runs in 5 sizes (10^2 − 10^6): A0 (seed=13), B0 (13), C0 (9948), A1 (9948), A2 (265), and\n• 7 more runs in 4 sizes (10^2 − 10^5): A3 (777), A4 (42), A5 (340589), A6 (1000), A7 (83146), B1 (9948), & C1 (13).\nFor each run and each size, there are 30 pairwise directions (i.e. 1 source language to 1 target language, e.g. AR-EN for Arabic to English) that result from the 6 languages. We trained all 150 jobs for each run and representation using the Transformer model (Vaswani et al., 2017) as supported by the SOCKEYE Toolkit (Hieber et al., 2018) (version 1.18.85), based on MXNet (Chen et al., 2015). A detailed description of the architecture of the Transformer can be found in Vaswani et al. (2017). The same set of hyperparameters applies to all, and its values are listed in Appendix C.\nNotes on training time Each run of 30 directions in 5 sizes took approximately 8-12 days for character and byte models. Byte models generally took longer — hence training time is positively correlated with length (concurring with observations by Cherry et al. (2018) as they compared character with BPE models). A maximum length of 300 characters entails a maximum length of at least 300 bytes in UTF-8. Each run of word models (30 directions, 5 sizes) took about 6 days (excluding the training of some 7-9 directions out of 30 per run involving ARtrg or RUtrg at 10^6 on the word level, which took about 12-18 hours per direction to train on a CPU, as these required more space and would otherwise run out of memory (OOM) on our GPUs). These figures do not include the additional probing experiments described in § 4.\nEvaluation metric Most sequence-to-sequence models are optimized using a cross-entropy loss, defined as:\nH(t, s) = -\sum_{i=1}^{N} \log_2 p(t_i | t_{<i}, s)   (1)\nwhere t is the sequence of tokens to be predicted, t_i refers to the i-th token in that sequence, s is the sequence of tokens conditioned on, and N = |t|. It is customary to report scores as PP, which is 2^{\frac{1}{N} H(t,s)}, i.e. 2 to the power of the cross-entropy averaged by the number of tokens (based on whichever granularity of unit is used for training) in the data. Cotterell et al. (2018) propose to use “renormalized” PP to evaluate LMs fairly through division by an arbitrary constant. In our case, we choose instead a simpler method of using an “unnormalized” PP, i.e. the total number of bits needed to encode the development (dev) set, which has a constant size of 3,077 lines per language (after length filtering of the same dev set used in Junczys-Dowmunt et al. (2016)) for all training sizes.
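To make this bit-count measure concrete, the following is a minimal sketch of two equivalent ways of obtaining it, either from per-token log-probabilities or from a reported per-token perplexity together with the dev-set token count (the conversion also used in the next paragraph); the function names are ours and not part of any toolkit, and the first function assumes natural-log probabilities.

import math

def total_bits_from_token_logprobs(token_logprobs):
    # token_logprobs: natural-log probabilities assigned by the model to each
    # dev-set token; the total cost in bits is the negative sum, converted
    # from nats to bits.
    return -sum(token_logprobs) / math.log(2)

def total_bits_from_perplexity(perplexity, num_tokens):
    # A per-token perplexity PP corresponds to log2(PP) bits per token, so the
    # whole dev set costs num_tokens * log2(PP) bits.
    return num_tokens * math.log2(perplexity)

Both return the same quantity for the same model and dev set, which is why a dev set of constant size makes the measure comparable across training sizes and representations.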
As the implementation we used (SOCKEYE (Hieber et al., 2018)) only reports PP, we transform it back to entropy as defined above by noting that H(t, s) = \log_2 PP(t|s) \times N.\n7https://github.com/fxsjy/jieba\nC HYPERPARAMETER SETTING\n• encoder transformer;\n• decoder transformer;\n• num-layers 6:6;\n• num-embed 512:512;\n• transformer-model-size 512;\n• transformer-attention-heads 8;\n• transformer-feed-forward-num-hidden 2048;\n• transformer-activation-type relu;\n• transformer-positional-embedding-type fixed;\n• transformer-preprocess d;\n• transformer-postprocess drn;\n• transformer-dropout-attention 0.1;\n• transformer-dropout-act 0.1;\n• transformer-dropout-prepost 0.1;\n• batch-size 15;\n• batch-type sentence;\n• max-num-checkpoint-not-improved 3;\n• max-num-epochs 50;\n• optimizer adam;\n• optimized-metric perplexity;\n• optimizer-params epsilon: 0.000000001, beta1: 0.9, beta2: 0.98;\n• label-smoothing 0.0;\n• learning-rate-reduce-num-not-improved 4;\n• learning-rate-reduce-factor 0.001;\n• loss-normalization-type valid;\n• max-seq-len 300 for character, word, and BPE, 672 for all bytes, 688 for Wubi, 680 for Pinyin;\n• checkpoint-frequency/interval 4000.\n(For smaller datasets, the end of 50 epochs is often reached before the first checkpoint. Since SOCKEYE only outputs scores at checkpoints, we adjusted the checkpoint frequency as follows to get a score outputted by the end of 50 epochs: 1000 for 100 lines for all character & byte instances, 400 for 100 lines for word and 500 for 100 lines for BPE, 3450 for 1000 lines for word & BPE. For the very few cases where this default does not suffice due to bucketing of similar-length sequences, we manually set the checkpoint frequency to the last batch.)\nD DATA STATISTICS\n• Number of types, i.e. vocabulary size (|V|). Note that Sockeye adds for its calculation 4 additional types: <pad>, <s>, </s>, <unk>.\n• Number of tokens. This excludes the 1 EOS/BOS (end-/beginning-of-sentence) marker added by Sockeye to each line.\n• Out-of-vocabulary (OOV) type rate (in %), i.e. the fraction of the types in the dev data that is not covered by the types in the training data.\n• OOV token rate (in %), i.e. the fraction of tokens in the dev data that is treated as UNKnowns.\n• Type-token-ratio (TTR, in %), i.e. the ratio between the number of types and tokens in the data. This is a rough proxy for lexical diversity in that a value of 1 would indicate that no type is ever seen twice, and a value very close to 0 would indicate that very few distinct types account for almost all of the data.\n• Line length (excl. EOS/BOS marker): mean ± standard deviation, and the 0/25/50/75/100-th percentile.
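To make these definitions concrete, here is a minimal sketch of how such statistics could be computed from already-tokenized training and dev lines; the helper name and the simple nearest-rank percentile are ours, and the sketch does not add Sockeye's 4 special types or the EOS/BOS marker.

from statistics import mean, pstdev

def corpus_stats(train_lines, dev_lines):
    # train_lines / dev_lines: lists of lines, each a list of tokens
    # (characters, bytes, words, or BPE pieces).
    train_tokens = [tok for line in train_lines for tok in line]
    dev_tokens = [tok for line in dev_lines for tok in line]
    train_types, dev_types = set(train_tokens), set(dev_tokens)
    oov_types = dev_types - train_types
    lengths = sorted(len(line) for line in train_lines)
    return {
        "types_|V|": len(train_types),
        "tokens": len(train_tokens),
        # share of dev types never seen in training
        "oov_type_rate_%": 100 * len(oov_types) / len(dev_types),
        # share of dev tokens that would be mapped to UNK
        "oov_token_rate_%": 100 * sum(t in oov_types for t in dev_tokens) / len(dev_tokens),
        # type-token ratio of the training data
        "ttr_%": 100 * len(train_types) / len(train_tokens),
        "len_mean": mean(lengths),
        "len_std": pstdev(lengths),
        # 0/25/50/75/100-th percentiles of line length
        "len_percentiles": [lengths[round(p / 100 * (len(lengths) - 1))] for p in (0, 25, 50, 75, 100)],
    }

The tables below report these quantities for every language, representation, and training size.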
Statistics for dataset A: number of types (|V|), number of tokens, OOV type rate (%), OOV token rate (%), TTR (%), and line length (mean ± std and 0/25/50/75/100-th percentiles) for AR, EN, ES, FR, RU, and ZH (plus ZH_pinyin, ZH_wubi, AR_cp1256, and RU_cp1251) under the CHAR, BYTE, WORD, and BPE representations at 100, 1,000, 10,000, 100,000, and 1,000,000 lines.
Statistics for dataset B: the same statistics as above, computed on dataset B.
Statistics for dataset C: the same statistics as above, computed on dataset C.
Statistics for development (dev) set: as a different set of vocabulary is learned from each training data set and data size, BPE has a distinct dev set for each.
9\n10 0, 29 4\n7 5, 0 51\n7 0, 20 0\n6 9, 9 9 1\n16 3, 8 0 6\n9 9, 97 8\n7 5, 0 96\n7 0, 19 6\n6 9, 9 8 2\nZ H\n1 07 ,9 90\n3 01 ,0 8 5\n6 0, 0 13\n9 6, 74 5\n80 ,2 31\n6 8, 12 9\n6 3, 6 1 4\n6 2, 81 0\n93 ,7 7 5\n7 5, 63 6\n6 5, 2 12\n6 1, 86 7\n6 1, 83 2\n94 ,1 2 7\n7 5, 71 8\n6 5, 2 03\n6 1, 91 6\n6 1, 8 2 3\nZ H\n_p in\nyi n\n37 6 ,9 7 9\nZ H\n_w ub\ni 33 0 ,7 3 4\nA R\n_c p1\n25 6\n3 3 4, 3 58\nR U\n_c p1\n25 1\n43 1 ,5 38\nT T\nR (% ) A\nR 0. 04\n0 .0 2\n22 .5 4\n0 .4 2\n2 .7 9\n10 .8 4\n1 5 .0 4\n17 .6 2\n0 .5 8\n4 .0 8\n1 4 .6 4\n1 8 .9 0\n1 9 .1 7\n0 .6 1\n4 .2 4\n1 4 .5 2\n1 8 .8 6\n19 .1 8\nE N\n0. 02\n0 .0 3\n10 .6 4\n0 .4 2\n2 .5 2\n7 .7 8\n1 0 .4 7\n1 0 .7 4\n0 .5 7\n3 .4 7\n8 .8 6\n1 0 .8 5\n1 0 .9 3\n0 .5 9\n3 .5 9\n8 .9 2\n10 .8 1\n1 0 .9 2\nE S\n0. 0 3\n0 .0 2\n10 .9 5\n0 .4 0\n2 .4 8\n7 .8 1\n10 .6 4\n1 0 .9 1\n0 .5 4\n3 .3 0\n8 .8 7\n1 1 .0 1\n11 .1 5\n0 .5 4\n3 .3 2\n8 .8 7\n1 1 .0 1\n1 1 .1 2\nFR 0. 0 3\n0 .0 3\n10 .5 6\n0 .4 5\n2 .4 6\n7 .8 0\n1 0 .3 4\n10 .6 3\n0 .5 3\n3 .2 7\n8 .8 0\n10 .7 2\n1 0 .7 9\n0 .5 7\n3 .3 3\n8 .7 3\n1 0 .7 3\n10 .8 0\nR U\n0. 0 3\n0 .0 2\n1 9 .9 7\n0 .4 9\n3 .3 2\n1 2 .6 4\n1 7 .2 5\n1 7 .9 9\n0 .6 3\n4 .4 2\n14 .7 3\n18 .3 5\n1 8 .5 1\n0 .6 3\n4 .5 1\n14 .6 6\n1 8 .3 5\n18 .5 2\nZ H\n1. 83\n0 .0 5\n1 2 .3 5\n3 .3 3\n5 .2 5\n9 .3 7\n1 1 .8 5\n1 2 .1 9\n3 .4 8\n5 .9 2\n1 0 .3 3\n12 .4 8\n1 2 .4 6\n3 .4 6\n5 .9 7\n1 0 .4 1\n12 .3 6\n1 2 .4 9\nZ H\n_p in\nyi n\n0. 03\nZ H\n_w ub\ni 0. 0 4\nA R\n_c p1\n25 6\n0. 0 4\nR U\n_c p1\n25 1\n0. 0 3\nM ea\nn lin\ne le\nng th ±\nst d\n0/ 25\n/5 0/\n75 /1\n00 -t\nh\nA R\n1 08 .6 6 ± 5 8 .0 1\n19 6 .7 9 ± 1 05 .8 5\n19 .9 5 ± 1 0 .5 8\n54 .4 6 ± 2 9 .5 1\n37 .6 0 ± 20 .5 1\n26 .9 7 ± 1 4 .8 0\n2 4 .7 9 ± 1 3 .5 9\n22 .9 2 ± 12 .5 6\n4 8 .6 5 ± 26 .3 5\n3 1 .8 0 ± 17 .7 6\n2 3 .8 3 ± 1 3 .1 5\n22 .2 7 ± 12 .1 2\n22 .1 9 ± 1 2 .0 7\n4 8 .6 3 ± 26 .2 1\n3 1 .6 0 ± 1 7 .5 3\n23 .9 0 ± 1 3 .2 0\n2 2 .2 9 ± 1 2 .1 4\n2 2 .1 9 ± 12 .0 6\n3/ 6 0 /1 1 0/ 1 5 3/ 27 7\n6 /1 07 /1 99 / 27 7 /5 0 3\n1 /1 2 /2 0 /2 7 /5 8\n1/ 30 / 5 4/ 7 6/ 1 52\n1/ 22 / 37 / 5 2/ 1 2 5\n1 /1 6 /2 7 /3 7 /9 5\n1 /1 4 /2 4 /3 4 /8 6\n1 /1 4 /2 2 /3 2 /7 5\n1 /2 7 /4 9 /6 8 /1 5 6\n1 /1 8 /3 2 /4 4 /1 10\n1/ 14 / 2 4/ 3 3/ 8 0\n1/ 1 3/ 22 / 3 1/ 71\n1/ 1 3 /2 2 /3 1 /7 1\n1 /2 8 /4 9 /6 7 /1 45\n1 /1 8 /3 1 /4 4 /1 10\n1/ 14 / 2 4/ 33 / 8 7\n1/ 1 3/ 22 / 3 1/ 73\n1/ 1 3 /2 2 /3 1 /7 2\nE N\n1 2 7. 1 4 ± 6 8 .6 4\n1 27 .1 6 ± 6 8 .6 5\n2 1 .9 8 ± 11 .8 1\n50 .6 4 ± 27 .4 4\n33 .0 8 ± 1 8 .0 7\n25 .0 5 ± 1 3 .5 6\n2 3 .1 6 ± 1 2. 4 2\n2 2 .8 6 ± 1 2 .2 7\n4 5 .5 8 ± 24 .8 6\n2 9 .5 3 ± 1 6 .3 0\n23 .9 0 ± 1 2 .9 5\n2 2 .6 3 ± 1 2 .1 3\n22 .5 4 ± 1 2. 0 6\n4 5 .6 2 ± 24 .6 7\n2 9 .3 7 ± 1 6 .1 3\n23 .8 8 ± 1 2 .9 1\n2 2 .6 3 ± 1 2 .1 2\n22 .5 4 ± 12 .0 5\n6/ 6 8 /1 3 0/ 1 8 1/ 29 9\n6/ 68 /1 30 / 1 81 /2 9 9\n1 / 1 3/ 2 2/ 31 / 6 1\n1/ 2 7 /5 1 /7 2 /1 3 6\n1 /1 9 /3 3 /4 6 /1 10\n1/ 1 5/ 2 5 /3 5 /7 8\n1/ 1 4/ 23 / 3 2/ 71\n1 /1 3 /2 3 /3 2 /7 1\n1/ 2 6/ 4 6/ 6 5/ 1 2 9\n1/ 1 7 /2 9 /4 1 /9 4\n1 /1 4 /2 4 /3 3 /7 4\n1 /1 3 /2 3 /3 1 /6 5\n1 /1 3 /2 3 /3 1 /6 5\n1/ 2 6/ 46 / 6 4/ 12 2\n1/ 17 /2 9 /4 1 /9 2\n1 /1 4 /2 4 /3 3 /7 3\n1 /1 3 /2 3 /3 1 /6 5\n1 /1 3 /2 3 /3 1 /6 6\nE S\n1 4 4. 2 8 ± 77 .7 8\n1 4 6 .9 6 ± 7 9 .2 1\n25 .3 8 ± 13 .5 6\n5 5 .3 9 ± 3 0 .1 3\n3 7 .0 4 ± 20 .1 3\n2 8 .8 1 ± 1 5. 53\n2 6 .7 4 ± 1 4 .2 9\n2 6 .4 4 ± 1 4. 14\n50 .6 0 ± 27 .6 6\n3 3 .7 2 ± 1 8 .4 8\n27 .6 0 ± 14 .8 8\n2 6 .1 8 ± 1 3 .9 8\n2 6 .1 1 ± 1 3. 94\n50 .2 9 ± 27 .2 7\n3 3 .5 3 ± 1 8. 
38\n27 .5 8 ± 14 .8 6\n2 6 .1 9 ± 1 3 .9 9\n2 6 .1 2 ± 13 .9 4\n5 /7 7 /1 4 5/ 2 0 7/ 3 00\n5/ 78 /1 4 8/ 2 1 1 /3 0 7\n1/ 1 5 /2 5 /3 6 /6 3\n1 /3 0 /5 6 /7 9 /1 41\n1 /2 1 /3 7 /5 2 /1 11\n1/ 17 / 29 / 4 1/ 8 5\n1/ 1 5 /2 7 /3 8 /7 6\n1 / 1 5/ 2 6/ 3 7/ 73\n1/ 2 7 /5 1 /7 2 /1 3 3\n1 /1 9 /3 4 /4 7 /1 03\n1/ 1 5/ 28 / 3 9/ 75\n1 /1 5 /2 6 /3 7 /7 1\n1 / 15 / 2 6/ 3 7/ 7 0\n1/ 28 /5 1 /7 1 /1 3 3\n1 /1 9 /3 4 /4 7 /9 5\n1/ 1 5/ 28 /3 9 /7 5\n1 /1 5 /2 6 /3 7 /7 1\n1 / 1 5/ 26 / 3 7/ 71\nFR 1 42 .3 7 ± 7 7 .8 1\n1 4 7 .0 8 ± 8 0. 30\n2 5 .5 9 ± 13 .8 6\n5 4 .0 4 ± 2 9. 5 5\n37 .2 7 ± 2 0 .6 5\n2 8 .8 4 ± 1 5 .7 6\n26 .7 9 ± 14 .4 7\n2 6 .5 1 ± 1 4 .3 5\n50 .7 8 ± 2 7. 79\n33 .8 2 ± 18 .7 4\n2 7 .6 6 ± 15 .1 1\n2 6 .2 7 ± 1 4. 20\n2 6 .2 2 ± 1 4 .1 6\n49 .9 7 ± 2 7. 2 5\n33 .8 3 ± 1 8 .8 0\n27 .6 6 ± 15 .0 9\n2 6 .3 0 ± 1 4 .2 1\n2 6 .2 1 ± 1 4 .1 6\n4/ 74 / 14 5/ 20 5 /3 0 0\n4/ 7 7/ 1 50 /2 1 2 / 3 1 0\n1/ 14 /2 6 /3 6 /6 6\n1/ 2 9/ 5 5/ 76 / 13 9\n1/ 2 0/ 3 8/ 5 2/ 11 9\n1/ 1 6 /2 9 /4 0 /8 9\n1 /1 5 /2 7 /3 7 /7 4\n1 /1 5 /2 7 /3 7 /7 0\n1 /2 7 /5 2 /7 2 /1 3 5\n1 /1 8 /3 4 /4 7 /1 0 8\n1/ 1 5/ 28 / 3 9/ 8 0\n1/ 14 / 2 7/ 37 / 6 9\n1 /1 4 /2 7 /3 7 /6 9\n1 /2 7 /5 1 /7 0 /1 3 1\n1 /1 8 /3 4 /4 7 /1 0 7\n1/ 1 5/ 28 / 3 9/ 84\n1/ 14 / 2 7/ 37 / 6 9\n1 /1 4 /2 7 /3 7 /6 9\nR U\n1 40 .2 5 ± 76 .2 1\n2 57 .7 9 ± 1 41 .5 3\n20 .8 6 ± 1 1 .2 5\n5 7 .7 9 ± 32 .1 2\n36 .9 3 ± 20 .7 3\n25 .9 2 ± 1 4 .4 8\n23 .5 6 ± 12 .9 5\n2 3 .1 0 ± 1 2 .7 1\n5 3 .0 8 ± 29 .0 0\n3 2 .5 9 ± 1 8 .4 2\n24 .3 9 ± 13 .5 9\n2 2 .8 1 ± 1 2 .5 0\n2 2 .7 5 ± 1 2. 46\n5 3 .2 4 ± 29 .1 8\n3 2 .4 9 ± 1 8 .4 9\n24 .4 1 ± 13 .5 7\n2 2 .8 1 ± 1 2 .5 0\n2 2 .7 4 ± 12 .4 5\n5/ 7 5 /1 4 1/ 20 0 /3 0 0\n7/ 1 36 /2 59 /3 70 / 56 9\n1/ 1 2/ 21 / 29 / 69\n1/ 3 1 /5 8 /8 2 /1 85\n1 /2 1 /3 6 /5 2 /1 85\n1 /1 5 /2 6 /3 6 /1 28\n1 /1 4 /2 3 /3 3 /1 01\n1 /1 3 /2 3 /3 2 /9 5\n1/ 2 9/ 53 / 7 5/ 18 5\n1/ 1 8/ 3 2/ 46 / 1 66\n1/ 1 4 /2 4 /3 4 /1 1 2\n1 /1 3 /2 3 /3 2 /8 5\n1 /1 3 /2 3 /3 2 /9 3\n2/ 3 0/ 53 / 7 6/ 18 5\n1/ 1 8/ 3 2/ 4 5/ 1 61\n1/ 1 4 /2 4 /3 4 /1 0 5\n1 /1 3 /2 3 /3 2 /9 0\n1 / 1 3/ 2 3 /3 2 /9 2\nZ H\n3 5. 10 ± 1 8 .4 8\n9 7 .8 5 ± 52 .1 0\n1 9 .5 0 ± 10 .4 2\n3 1 .4 4 ± 1 6. 96\n26 .0 7 ± 1 4. 1 9\n22 .1 4 ± 1 2 .0 2\n20 .6 7 ± 11 .1 6\n2 0 .4 1 ± 1 1. 05\n30 .4 8 ± 1 6 .4 7\n24 .5 8 ± 1 3. 42\n2 1 .1 9 ± 11 .4 9\n2 0 .1 1 ± 1 0 .7 9\n2 0 .0 9 ± 1 0 .7 9\n3 0 .5 9 ± 1 6. 4 5\n24 .6 1 ± 1 3. 49\n2 1 .1 9 ± 11 .4 9\n2 0 .1 2 ± 1 0. 81\n2 0 .0 9 ± 10 .7 9\n2 /2 1 /3 5 /4 9 /1 2 5\n4/ 5 3/ 9 9/ 1 38 /2 68\n1 /1 1 /2 0 /2 7 /6 4\n1 /1 8 /3 1 /4 4 /1 0 9\n1/ 1 5/ 2 6/ 36 / 93\n1/ 13 /2 2 /3 1 /8 0\n1 /1 2 /2 1 /2 9 /7 4\n1 / 12 / 2 0/ 28 / 7 4\n1/ 17 /3 0 /4 3 /1 0 4\n1/ 1 4/ 24 /3 4 /9 2\n1/ 1 2/ 21 / 2 9/ 71\n1 /1 2 /2 0 /2 8 /6 4\n1/ 12 / 2 0/ 2 8/ 66\n1/ 1 8 /3 0 /4 3 /1 0 1\n1 /1 4 /2 4 /3 4 /9 3\n1/ 1 2/ 21 / 2 9 /7 6\n1 /1 2 /2 0 /2 8 /6 4\n1/ 1 2/ 20 / 2 8/ 65\nZ H\n_p in\nyi n\n1 2 2. 52 ± 65 .5 9 6/ 67 /1 2 5 / 17 3 /3 53\nZ H\n_w ub\ni 1 0 7. 4 9 ± 5 6 .7 3 4/ 6 0/ 1 08 /1 5 1 /2 94\nA R\n_c p1\n25 6\n1 08 .6 6 ± 5 8. 01 3/ 6 0/ 1 1 0/ 1 53 / 27 7\nR U\n_c p1\n25 1\n1 40 .2 5 ± 7 6. 
21 5 /7 5 /1 4 1 /2 0 0 / 3 00\nE S\nC O\nR E\nTA B\nL E\nS\nN um\nbe ro\nfb its\nto en\nco de\nth e\nde v\nda ta\nfo re\nac h\nof th\ne 30\nla ng\nua ge\ndi re\nct io\nns .S\nho w\nn is\nm ea\nn± st\nd ov\ner :\n• 12\nru ns\nfo rC\nH A\nR ,B\nY T\nE ,a\nnd W\nO R\nD fr\nom 10\n0 to\n10 0,\n00 0\nlin es , • 5 ru ns fo r1 ,0 00 ,0 00 lin es ,a nd • 3 ru ns fo ra ll si ze s in vo lv in g al te rn at e re pr es en ta tio ns (B PE ,P in\nyi n,\nW ub\ni, cp\n12 56\nan d\ncp 12\n51 ).\nC H\nA R\nB Y\nT E\nW O\nR D\nB PE\nL G\nD IR\n10 0\n1 ,0 0 0\n10 ,0 0 0\n10 0 ,0 00\n1 ,0 0 0 ,0 0 0\n10 0\n1, 0 00\n10 ,0 00\n1 00 ,0 00\n1 ,0 00 ,0 00\n10 0\n1 ,0 00\n10 ,0 0 0\n10 0, 0 00\n1 ,0 00 ,0 0 0\n1 00\n1 ,0 00\n1 0, 0 00\n1 0 0, 00 0\n1 ,0 0 0, 0 00\nA R\n-E N\n1 ,5 4 4 ,7 36 .2 3\n1 ,1 2 1, 01 5 .0 3\n7 60 ,7 3 2. 5 0\n76 5 ,0 2 8 .7 2\n77 0 ,9 24 .5 2\n1 ,5 2 6, 2 55 .5 8\n1, 12 9 ,9 6 1. 6 4\n74 3 ,5 67 .3 6\n71 1 ,3 5 3 .6 5\n68 9 ,6 39 .7 9\n82 4 ,0 21 .6 6\n91 9 ,9 47 .5 0\n74 9 ,9 70 .5 2\n71 0, 25 1. 48\n70 0, 34 1. 00\n1, 3 01 ,6 65 .5 1\n99 6 ,0 4 7. 6 3\n81 7 ,1 7 0. 91\n73 4, 44 3 .6 1\n72 4, 41 3 .8 3\n± 33 ,3 45 .4 6\n± 33 ,5 8 6 .5 1\n± 51 ,8 8 6 .3 5\n± 6 5, 50 2 .6 2\n± 5 6 ,6 22 .9 5\n± 2 0 ,0 3 5. 5 4\n± 28 ,2 6 8 .8 2\n± 51 ,3 2 1 .6 2\n± 57 ,2 2 0. 79\n± 2 1 ,8 42 .7 8\n± 16 ,6 64 .2 9\n± 57 ,4 5 2. 0 7\n± 27 ,1 8 4. 31\n± 2 0 ,1 4 8. 97\n± 8, 8 53 .3 0\n± 82 ,8 57 .8 6\n± 7 4 ,1 75 .0 8\n± 7 0, 7 05 .5 1 ± 1 8 ,6 26 .3 7 ± 1 1 ,6 71 .4 9\nA R\n-E S\n1, 66 9 ,2 30 .6 7\n1 ,1 82 ,1 22 .1 4\n7 90 ,2 9 6. 6 3\n81 2 ,5 41 .5 9\n80 1 ,7 97 .3 1\n1 ,6 8 2, 3 68 .2 1\n1, 21 4 ,7 1 2. 8 0\n7 6 8 ,3 29 .5 5\n7 73 ,0 77 .6 0\n74 3 ,1 87 .1 2\n89 8 ,4 67 .6 3\n1, 0 39 ,4 23 .5 3\n8 31 ,1 31 .8 5\n79 2, 19 9. 6 9\n78 8, 77 0. 52\n1, 41 5, 20 4. 23\n1, 16 9, 08 6. 0 6\n8 8 9, 2 0 3. 72\n82 2, 28 6 .9 5\n81 8, 45 0 .3 4\n± 33 ,8 27 .0 5\n± 38 ,9 1 8 .2 3\n± 30 ,5 3 6. 00\n± 52 ,7 0 5. 76\n± 3 2 ,0 8 9. 03\n± 49 ,0 5 7. 8 7\n± 31 ,3 4 1. 49\n± 58 ,9 3 5. 67\n± 65 ,3 76 .9 2\n± 44 ,3 15 .6 7\n± 24 ,9 59 .6 7\n± 84 ,9 56 .0 4\n± 2 7, 01 3 .2 7 ± 1 0 ,4 68 .3 1 ± 9 ,8 21 .5 2\n± 8 1 ,5 44 .7 5\n± 34 ,7 9 8. 6 5\n± 49 ,7 2 2. 1 0 ± 22 ,3 9 1. 5 4 ± 11 ,9 7 6. 63\nA R\n-F R\n1, 67 4 ,6 0 3 .4 4\n1, 1 82 ,5 89 .3 0\n79 1 ,6 60 .5 1\n7 87 ,1 10 .4 3\n7 99 ,1 2 4 .8 8\n1 ,6 7 7 ,2 0 1. 0 7\n1 ,1 95 ,3 7 8. 25\n74 9 ,3 5 8. 4 5\n72 9 ,8 8 1 .0 5\n7 05 ,3 75 .6 6\n9 26 ,2 13 .0 5\n1, 02 3, 04 8. 10\n85 9, 5 52 .5 2\n81 9, 66 9 .8 0\n81 2, 02 7 .1 1\n1, 42 5 ,8 34 .1 1\n1, 17 3, 95 6 .4 0\n92 5, 8 54 .4 8\n8 44 ,7 51 .7 7\n8 35 ,8 02 .1 1\n± 38 ,2 29 .9 2\n± 3 6, 91 3 .5 3\n± 5 1 ,7 82 .4 5\n± 30 ,3 0 6 .4 3\n± 3 3 ,6 96 .7 6\n± 2 7, 9 8 5. 3 5\n± 29 ,5 1 8. 16\n± 17 ,8 6 0 .4 0\n± 61 ,8 47 .1 2\n± 1 8, 36 4 .5 7\n± 3 1 ,6 20 .3 2\n± 7 0 ,0 43 .6 9\n± 33 ,9 15 .7 2 ± 15 ,2 39 .7 1 ± 9 ,8 23 .4 8\n± 76 ,6 7 4. 6 2\n± 10 8, 44 8. 13\n± 66 ,0 0 8. 74\n± 24 ,4 13 .2 3 ± 12 ,8 15 .2 0\nA R\n-R U\n1, 83 9 ,6 54 .0 3\n1 ,3 5 4 ,4 0 1 .8 4\n84 8 ,0 31 .2 5\n8 69 ,5 51 .0 1\n8 69 ,6 1 2 .9 3\n2 ,1 9 7 ,5 4 1. 9 8\n2, 2 2 8, 31 9 .1 7\n2, 43 1 ,8 5 3. 8 3\n1, 94 7, 40 4. 65\n1, 82 9, 83 9. 51\n91 8, 1 72 .2 9\n1, 1 10 ,8 84 .6 2\n8 31 ,1 1 2. 9 7\n78 8, 60 8. 2 0\n77 7, 41 8. 25\n1, 58 7, 52 1. 54\n1, 32 7, 48 3. 0 8\n92 9, 7 90 .5 0\n8 60 ,7 89 .4 6\n85 2, 52 0. 24\n± 77 ,8 43 .2 0\n± 93 ,5 25 .8 1\n± 4 5 ,3 1 1 .4 1\n± 5 0 ,9 29 .8 5\n± 1 12 ,9 6 9. 06\n± 11 8 ,1 7 2. 4 3 ± 3 15 ,1 9 0 .2 9 ± 73 4 ,5 3 1 .4 8 ± 50 1 ,3 80 .3 4 ± 57 5 ,0 69 .7 4\n± 38 ,4 38 .6 3\n± 79 ,9 72 .0 5\n± 35 ,3 7 3. 
49\n± 14 ,0 59 .9 6 ± 8, 1 80 .3 8\n± 76 ,5 64 .1 2\n± 24 7, 18 7. 6 3 ± 5 4 ,9 54 .7 8 ± 27 ,3 2 4. 96\n± 10 ,9 0 9. 30\nA R\n-Z H\n1 ,2 80 ,8 17 .0 3\n1 ,2 81 ,1 9 7 .2 6\n1, 13 2 ,3 98 .0 0\n1, 12 0 ,6 8 1. 62\n1, 1 21 ,6 4 5 .9 9\n1 ,5 68 ,7 4 3. 1 3\n1, 2 0 1, 5 66 .3 7\n7 69 ,1 8 3 .3 0\n75 2 ,9 91 .2 0\n74 5 ,2 6 4. 8 5\n7 89 ,1 40 .0 1\n94 2 ,4 41 .6 8\n72 9 ,6 12 .4 2\n67 6, 40 9 .8 9\n67 3, 01 2. 11\n1, 21 5 ,9 00 .3 4\n1 ,0 90 ,6 17 .7 4\n78 4, 57 9 .9 4\n6 9 7, 06 6. 73\n69 7, 86 6 .0 7\n± 8 3 ,6 8 3 .9 1\n± 58 ,5 74 .8 7\n± 33 ,1 88 .8 9\n± 3 8, 49 5 .7 7\n± 3 0, 84 0 .4 7\n± 5 4 ,8 45 .3 3\n± 1 1 6, 2 03 .8 7\n± 2 9, 2 22 .5 8\n± 3 1 ,7 5 9. 1 2\n± 1 1 ,9 16 .2 5\n± 41 ,1 01 .5 3\n± 67 ,2 55 .2 0\n± 39 ,3 47 .5 8 ± 1 1, 96 6 .2 2 ± 1 0 ,2 76 .1 4\n± 93 ,9 70 .6 0\n± 16 4, 49 9 .4 6 ± 63 ,1 1 2 .6 8 ± 2 4 ,0 73 .7 4 ± 2 1 ,9 32 .0 7\nE N\n-A R\n1, 51 0 ,7 40 .0 1\n1 ,2 1 9 ,9 1 4 .7 1\n8 4 0 ,0 90 .4 9\n85 1 ,2 69 .3 4\n83 5 ,2 92 .9 0\n1 ,6 1 2 ,8 8 7. 0 2\n1, 4 11 ,0 4 7 .8 4\n1 ,3 76 ,2 1 1 .9 8\n1, 10 0, 95 8. 9 9\n9 05 ,2 75 .8 2\n9 24 ,8 98 .3 1\n1, 19 2 ,6 84 .9 0\n86 8, 14 0. 02\n8 13 ,8 74 .2 5\n7 88 ,4 14 .2 8\n1, 41 9, 27 5. 8 8\n1 ,3 6 1, 39 8. 11\n94 4, 55 4 .1 5\n90 3, 94 1 .6 9\n86 2, 16 2 .9 2\n± 6 5 ,3 75 .8 9\n± 12 0 ,4 33 .8 4\n± 5 5 ,0 48 .7 1\n± 5 1 ,8 7 4. 4 4\n± 47 ,7 1 8 .6 8\n± 3 6, 8 58 .6 3\n± 9 3 ,4 68 .2 8\n± 36 0, 8 10 .3 4 ± 27 2 ,3 9 3. 5 0\n± 52 ,5 91 .2 2\n± 42 ,7 4 5. 8 0 ± 11 0, 70 0. 20\n± 5 5, 33 1. 86\n± 3 4 ,4 72 .4 9 ± 11 ,4 28 .2 6\n± 12 4, 34 1. 4 3 ± 2 52 ,3 43 .7 4 ± 9 8 ,0 8 9 .1 9 ± 6 3 ,7 10 .3 7 ± 24 ,8 82 .5 8\nE N\n-E S\n1, 68 1 ,1 6 5 .4 5\n1 ,1 83 ,3 88 .6 5\n81 0 ,0 07 .6 2\n8 44 ,1 8 6. 0 6\n9 6 5, 9 4 2. 2 7\n1 ,6 98 ,5 84 .2 2\n1, 17 2 ,7 36 .5 8\n8 1 2 ,0 42 .4 2\n8 36 ,0 78 .2 1\n8 62 ,3 62 .7 8\n88 7 ,0 4 5. 1 5\n1, 05 1, 45 1. 41\n82 9, 47 0 .6 1\n79 0, 23 1 .8 7\n78 6, 3 06 .0 7\n1, 40 2 ,3 11 .2 9\n1 ,2 16 ,6 99 .1 4\n87 4, 8 02 .0 0\n8 21 ,1 71 .7 5\n8 16 ,4 03 .0 5\n± 4 8, 3 33 .3 3\n± 3 2 ,3 16 .4 4\n± 7 1 ,1 6 2. 4 4\n± 8 8 ,9 4 6. 7 4\n± 10 6 ,2 6 4. 4 2\n± 63 ,9 4 2 .7 3\n± 2 6 ,5 4 4. 6 1\n± 4 9 ,3 7 5. 3 4\n± 9 1, 25 7 .6 8\n± 41 ,8 34 .9 9\n± 2 1 ,4 62 .7 5\n± 80 ,3 81 .7 3\n± 21 ,1 2 1. 79\n± 10 ,8 28 .7 7 ± 7, 2 41 .9 2\n± 72 ,6 07 .5 6\n± 10 9, 80 7. 2 5 ± 37 ,2 2 6 .6 3 ± 1 9, 08 8 .5 9 ± 1 1 ,0 16 .6 5\nE N\n-F R\n1 ,6 8 6 ,6 6 4 .4 1\n1, 17 8 ,1 68 .8 3\n8 1 9 ,6 2 8. 0 2\n8 34 ,5 3 7 .8 4\n86 3 ,8 6 4 .6 2\n1 ,6 8 3 ,9 71 .8 9\n1, 1 67 ,3 2 4 .3 9\n79 1 ,4 37 .9 2\n84 9 ,8 88 .5 0\n89 7 ,9 8 3. 7 6\n9 34 ,0 50 .8 7\n1, 03 5, 71 4. 9 9\n85 1, 17 1 .4 1\n81 4, 97 5 .8 9\n80 8, 88 6. 69\n1, 43 5 ,8 57 .0 4\n1 ,1 94 ,7 82 .8 2\n9 23 ,2 40 .0 9\n8 40 ,3 83 .9 4\n8 30 ,7 88 .8 1\n± 4 5 ,7 80 .1 1\n± 3 6 ,6 7 2. 3 4\n± 8 2 ,0 5 5. 4 2\n± 71 ,5 2 9. 5 1\n± 91 ,1 2 4. 2 7\n± 31 ,3 27 .5 5\n± 4 4 ,2 1 0. 7 5\n± 56 ,9 7 7. 4 2\n± 8 0, 9 81 .3 7\n± 10 8 ,4 32 .8 1\n± 1 7 ,0 22 .6 3\n± 84 ,8 47 .5 7\n± 27 ,4 42 .3 4 ± 1 5, 51 3 .7 1 ± 8, 76 3. 55\n± 8 4, 87 2 .4 4\n± 20 9 ,6 76 .1 7 ± 77 ,0 36 .4 3 ± 2 2, 77 6 .1 2 ± 1 9 ,2 27 .1 3\nE N\n-R U\n1 ,8 0 0 ,2 85 .2 1\n1 ,3 2 2 ,6 98 .3 6\n9 0 5 ,4 0 4 .6 6\n87 7 ,3 5 6 .9 2\n87 7 ,6 11 .0 1\n2 ,1 45 ,9 2 9. 3 8\n2, 05 6 ,9 6 2 .2 6\n2, 4 3 3, 40 5 .4 1\n2 ,0 82 ,1 68 .2 5\n2, 05 7 ,3 26 .4 7\n88 5, 40 3. 3 0\n1, 10 9, 22 0. 75\n82 5, 7 85 .7 6\n78 8, 32 5. 99\n77 6, 04 1. 
11\n1, 5 88 ,0 98 .7 1\n1 ,2 94 ,0 95 .3 6\n92 1, 0 79 .5 4\n85 9, 6 48 .1 4\n84 5, 65 5 .4 5\n± 2 9 ,9 2 5 .0 5\n± 3 5 ,2 69 .3 2\n± 8 5 ,6 69 .6 0\n± 6 5, 4 07 .0 2\n± 6 4, 6 09 .2 2\n± 1 03 ,0 2 1. 89\n± 3 1 2, 12 6 .1 9 ± 4 0 3, 38 5 .2 8 ± 5 07 ,7 89 .3 8 ± 78 6, 35 1 .5 0\n± 27 ,2 23 .8 4\n± 77 ,9 56 .9 9\n± 26 ,4 6 4. 92\n± 1 6, 71 7. 11\n± 1 0 ,2 51 .1 2\n± 74 ,6 70 .8 8\n± 23 9, 51 2. 03\n± 5 2 ,4 1 4. 5 5 ± 24 ,9 07 .3 8 ± 5, 37 6 .9 6\nE N\n-Z H\n1 ,2 26 ,6 60 .6 8\n1, 2 94 ,7 3 7 .7 3\n1, 12 9, 16 7 .1 4\n1, 15 7 ,0 1 9. 4 5\n1 ,1 3 1 ,3 3 2. 8 6\n1, 69 4 ,3 05 .7 8\n1, 24 6 ,0 2 7. 0 7\n82 1 ,5 92 .7 4\n80 8 ,8 48 .9 5\n81 7 ,7 37 .1 5\n78 5 ,1 38 .7 4\n9 54 ,3 62 .8 2\n72 9 ,7 39 .6 4\n68 1, 29 1 .8 2\n67 2, 10 0 .1 2\n1, 2 03 ,8 01 .6 2\n1 ,1 00 ,7 4 7. 41\n8 02 ,5 4 3. 68\n7 0 0, 99 1. 91\n6 95 ,4 38 .3 3\n± 94 ,0 88 .7 6\n± 39 ,3 4 4 .1 7\n± 3 0, 67 6 .1 3\n± 2 8, 96 5 .7 9\n± 1 7 ,9 9 2 .0 5\n± 8 9 ,6 1 2. 6 2\n± 14 6 ,9 8 8. 7 7\n± 48 ,3 9 2 .4 6\n± 2 9 ,2 0 6. 9 1\n± 2 0 ,4 01 .2 9\n± 37 ,1 69 .4 1\n± 68 ,0 65 .9 0\n± 41 ,6 40 .7 1 ± 2 0 ,2 30 .3 9 ± 1 0 ,9 46 .8 1\n± 65 ,2 61 .4 0\n± 13 6, 36 6 .7 1 ± 4 5, 92 7 .7 4 ± 2 2 ,8 5 0. 33\n± 1 1, 29 1 .5 1\nE S-\nA R\n1, 5 02 ,1 18 .2 6\n1, 2 25 ,7 6 1 .4 3\n85 5 ,6 96 .3 7\n8 4 8 ,4 4 7 .6 1\n7 8 0 ,3 7 5 .1 6\n1 ,6 05 ,9 5 5 .7 8\n1, 4 9 0, 4 11 .3 7\n1 ,1 6 1 ,6 01 .7 0\n1, 01 4, 99 4. 2 4\n9 70 ,0 76 .3 5\n91 6 ,5 26 .6 9\n1, 19 7 ,6 24 .0 1\n85 7, 48 9. 43\n8 14 ,2 08 .1 0\n7 87 ,7 50 .6 1\n1, 40 1, 58 3. 2 9\n1 ,2 8 8, 01 3. 56\n94 2, 43 7 .0 9\n89 0, 5 64 .9 0\n86 2, 2 40 .6 0\n± 59 ,5 31 .4 2\n± 1 18 ,9 4 7 .7 7\n± 5 3 ,1 7 9 .4 4\n± 5 4 ,3 5 5. 00\n± 6, 6 22 .9 4\n± 62 ,0 9 8. 79\n± 1 20 ,3 1 6. 5 0 ± 1 18 ,8 16 .2 7 ± 2 51 ,4 91 .5 5 ± 21 1 ,4 9 5. 9 0\n± 43 ,9 61 .4 8 ± 1 05 ,7 78 .6 6 ± 47 ,5 66 .3 8 ± 32 ,4 71 .6 0 ± 13 ,9 80 .2 3\n± 1 09 ,9 16 .8 4 ± 21 8 ,3 10 .6 0 ± 9 3 ,2 54 .7 0 ± 7 4 ,4 17 .8 4 ± 24 ,4 04 .8 5\nE S-\nE N\n1, 52 8 ,8 79 .4 7\n1 ,1 2 5 ,9 2 9 .7 2\n79 4 ,6 13 .9 1\n8 14 ,3 2 5 .6 3\n8 22 ,4 9 5 .3 0\n1 ,5 3 6 ,4 7 9. 43\n1, 1 2 3, 31 1 .0 2\n7 96 ,2 0 7 .8 1\n7 86 ,4 42 .2 6\n79 9 ,4 9 0. 21\n80 5 ,5 78 .6 0\n89 2 ,3 68 .7 2\n73 8 ,8 70 .4 2\n70 4, 02 5 .5 4\n69 9, 4 48 .8 0\n1, 29 0 ,9 21 .2 2\n1 ,0 59 ,7 25 .2 5\n82 4, 1 22 .7 1\n7 29 ,2 24 .3 7\n72 3, 24 1. 13\n± 17 ,3 80 .0 8\n± 3 3 ,0 1 2 .1 1\n± 7 5 ,5 58 .2 0\n± 8 2 ,4 52 .9 3\n± 9 0, 3 57 .6 9\n± 19 ,8 56 .0 1\n± 2 8 ,4 0 8. 3 1\n± 63 ,7 5 6 .3 7\n± 69 ,6 96 .4 7\n± 3 6, 82 9. 85\n± 21 ,8 17 .9 3\n± 49 ,2 90 .5 9\n± 28 ,7 74 .8 0 ± 9, 88 9 .0 0\n± 7, 2 76 .7 7\n± 76 ,0 27 .6 6\n± 44 ,1 81 .1 9\n± 6 3 ,6 0 9. 75\n± 20 ,3 54 .8 2 ± 12 ,0 7 6. 16\nE S-\nFR 1 ,6 7 1 ,9 31 .0 4\n1, 1 85 ,2 3 7 .7 7\n79 2 ,0 16 .6 2\n8 01 ,6 25 .4 7\n8 2 7 ,4 0 6 .5 5\n1 ,6 77 ,6 9 9. 91\n1, 1 7 7, 3 74 .2 2\n7 81 ,2 5 3. 4 5\n81 5 ,4 25 .7 7\n78 1 ,3 9 9. 8 5\n9 30 ,1 15 .3 5\n1, 00 4, 96 0. 2 0\n85 0, 75 3 .2 8\n82 1, 77 1 .7 6\n80 7, 95 0. 92\n1, 43 6 ,0 26 .9 5\n1 ,2 12 ,9 57 .0 4\n9 43 ,3 07 .7 4\n8 39 ,9 85 .4 7\n83 9, 76 0. 81\n± 27 ,7 30 .3 5\n± 3 3 ,3 38 .3 2\n± 7 3 ,0 58 .0 8\n± 3 8, 0 60 .7 3\n± 58 ,3 43 .1 7\n± 23 ,9 8 2 .3 6\n± 40 ,7 21 .1 3\n± 4 9, 92 5 .2 7\n± 6 3, 8 15 .5 4\n± 2 2 ,5 42 .7 4\n± 22 ,1 16 .8 2\n± 59 ,8 88 .0 8\n± 28 ,5 85 .5 8 ± 2 6, 79 7 .4 3 ± 9, 11 7. 7 0\n± 79 ,8 89 .5 4\n± 20 6, 01 1 .4 1 ± 7 4 ,0 3 6. 7 2 ± 2 2, 65 0 .1 3 ± 1 9 ,3 34 .3 8\nE S-\nR U\n1 ,8 0 2, 9 53 .0 2\n1, 3 16 ,2 80 .0 8\n1, 08 3 ,0 50 .5 4\n9 61 ,4 9 7. 3 3\n93 4 ,2 28 .9 5\n2 ,1 8 2, 1 42 .7 3\n2, 30 2 ,6 0 8. 
5 3\n2, 6 15 ,5 0 5 .9 7\n2 ,4 08 ,3 7 9. 2 9\n1, 71 9, 14 4. 46\n91 3, 3 99 .9 4\n1, 09 5 ,1 13 .5 9\n82 6, 91 9. 25\n7 86 ,9 55 .7 5\n7 76 ,3 62 .3 7\n1, 58 9, 95 1. 8 5\n1 ,2 8 7, 24 1. 22\n9 25 ,2 82 .4 1\n8 6 3, 85 6. 53\n85 2, 10 6. 52\n± 3 6 ,9 05 .6 0\n± 47 ,9 34 .4 4\n± 58 2 ,6 27 .7 2 ± 11 8 ,5 56 .0 2\n± 5 3 ,2 02 .7 5\n± 1 3 5, 7 19 .7 5 ± 33 0, 8 63 .0 7 ± 60 7 ,0 01 .2 6 ± 43 6, 1 57 .8 6 ± 58 8 ,8 39 .5 7\n± 4 1, 65 0. 78\n± 8 0 ,2 09 .2 5\n± 3 2 ,6 66 .6 8 ± 15 ,3 61 .2 3 ± 9, 47 5. 6 1\n± 82 ,9 13 .6 5\n± 2 35 ,0 0 2. 06\n± 57 ,1 5 2. 5 5 ± 27 ,7 97 .3 3 ± 9 ,4 33 .1 5\nE S-\nZ H\n1, 2 32 ,6 11 .0 5\n1, 31 2 ,0 46 .0 7\n1 ,1 1 9 ,0 2 3. 4 6\n1 ,1 24 ,8 9 6. 5 8\n1, 1 2 7, 9 69 .4 7\n1, 6 64 ,8 3 3 .8 3\n1, 2 2 4, 8 59 .9 4\n8 08 ,7 6 5 .4 0\n80 1 ,3 1 7. 28\n81 2 ,9 27 .3 8\n79 0 ,3 1 3. 0 4\n9 54 ,6 65 .2 3\n73 0 ,0 68 .5 3\n67 9, 04 9. 7 7\n67 3, 43 4. 94\n1, 21 1, 61 0. 91\n1, 10 5, 85 1. 1 4\n79 4, 3 27 .2 1\n7 00 ,1 22 .2 5\n6 92 ,5 39 .6 5\n± 8 2 ,2 7 0. 75\n± 88 ,3 61 .9 1\n± 4 5, 89 8 .6 7\n± 3 1 ,8 2 4 .1 6\n± 2 3 ,4 14 .9 6\n± 77 ,4 1 2 .0 9\n± 1 31 ,8 3 3 .2 8\n± 2 9 ,2 1 1 .1 2\n± 24 ,0 22 .2 8\n± 26 ,4 10 .0 4\n± 28 ,0 23 .6 3\n± 73 ,4 1 1. 42\n± 3 4, 56 8 .0 2 ± 1 2 ,1 24 .2 6 ± 9 ,4 15 .9 9\n± 7 8 ,1 54 .9 7\n± 1 47 ,2 69 .9 8 ± 54 ,8 4 9. 25\n± 31 ,8 01 .7 0 ± 1 3, 10 2 .5 3\nFR -A\nR 1, 4 93 ,9 98 .5 4\n1, 2 13 ,2 57 .6 9\n8 58 ,9 8 0 .2 2\n8 5 3 ,5 5 6 .6 7\n8 27 ,4 4 6 .9 7\n1 ,6 1 0, 0 76 .6 2\n1 ,3 8 1, 5 24 .2 1\n1, 1 72 ,4 2 8. 64\n9 93 ,9 4 4. 78\n79 9 ,3 19 .4 8\n92 9 ,0 13 .5 1\n1, 20 0, 77 9. 58\n86 3, 1 54 .8 5\n81 2, 9 18 .9 9\n79 4, 82 5 .6 2\n1, 4 00 ,8 57 .4 5\n1, 35 2 ,5 02 .4 1\n9 46 ,3 0 6. 6 4\n8 94 ,2 09 .9 6\n8 62 ,8 1 5. 27\n± 73 ,4 8 7. 0 6\n± 12 9 ,4 50 .7 0\n± 27 ,4 0 6. 2 8\n± 52 ,8 0 3 .5 1\n± 7 4 ,9 95 .8 9\n± 5 8, 76 3. 7 4\n± 63 ,9 8 9. 0 6\n± 1 49 ,0 6 6. 8 9 ± 1 49 ,9 70 .8 7\n± 46 ,3 6 7. 05\n± 48 ,0 67 .9 4 ± 1 08 ,9 25 .0 7 ± 50 ,0 40 .0 2 ± 32 ,0 30 .2 9 ± 12 ,3 41 .0 0\n± 1 09 ,3 42 .8 9 ± 21 5 ,3 35 .1 9 ± 9 3 ,5 90 .1 0 ± 73 ,2 4 3. 54\n± 22 ,8 1 8. 24\nFR -E\nN 1 ,5 39 ,3 3 1. 8 2\n1 ,1 1 2 ,2 35 .9 5\n7 70 ,1 30 .6 3\n7 9 8, 4 4 3. 6 0\n8 14 ,7 1 1. 4 0\n1 ,5 2 8, 7 59 .7 9\n1, 11 6 ,2 7 8. 2 7\n7 6 5 ,8 71 .8 6\n75 8 ,8 13 .7 6\n7 81 ,5 2 0. 62\n81 4 ,1 2 4. 76\n91 5 ,3 92 .2 7\n75 1 ,1 16 .4 3\n70 4, 00 0 .9 7\n69 8, 3 60 .2 7\n1, 29 5 ,0 95 .2 4\n1 ,1 32 ,3 24 .9 1\n8 18 ,0 04 .3 3\n7 31 ,4 59 .6 4\n7 21 ,7 42 .6 1\n± 3 0 ,6 45 .0 0\n± 3 9 ,7 0 3. 27\n± 4 5 ,6 1 9. 4 6\n± 77 ,4 3 2. 7 3\n± 73 ,6 31 .1 9\n± 25 ,5 7 4. 7 7\n± 3 1 ,3 7 3. 5 3\n± 63 ,8 2 1. 2 7\n± 5 6, 82 2 .3 3\n± 89 ,9 58 .5 7\n± 28 ,9 93 .1 6\n± 46 ,7 43 .3 3\n± 38 ,8 48 .5 8 ± 10 ,8 60 .9 5 ± 7, 40 4. 34\n± 77 ,1 10 .8 0\n± 1 30 ,6 07 .5 0 ± 6 4 ,4 78 .3 4 ± 2 5, 7 11 .8 3 ± 11 ,0 5 6. 08\nFR -E\nS 1 ,6 60 ,5 0 3 .4 6\n1, 2 02 ,2 6 3 .3 7\n80 9 ,0 42 .3 6\n8 4 8 ,6 6 6 .8 5\n9 5 5 ,3 3 4 .1 1\n1 ,6 6 0, 2 08 .4 7\n1, 16 3 ,3 4 6. 8 0\n7 9 6 ,5 85 .1 7\n81 8 ,9 2 1. 73\n91 2 ,7 07 .8 1\n88 9 ,0 5 8. 3 6\n1, 03 8, 67 9 .6 0\n82 6, 5 99 .8 3\n78 9, 41 3 .2 5\n78 4, 65 8 .8 9\n1, 4 04 ,1 35 .7 4\n1, 2 47 ,8 27 .6 3\n88 2, 77 9 .2 4\n8 21 ,9 25 .1 7\n81 5, 22 0 .8 6\n± 2 8 ,3 32 .3 2\n± 32 ,9 25 .1 8\n± 6 0, 41 2. 8 2\n± 10 5 ,9 3 4. 4 5 ± 1 28 ,2 1 0 .2 4\n± 29 ,3 5 4 .3 3\n± 43 ,4 3 2 .1 4\n± 6 2 ,7 5 6 .7 9\n± 88 ,1 47 .4 5\n± 67 ,6 43 .7 8\n± 2 6 ,4 51 .4 7\n± 6 4 ,1 67 .3 8\n± 29 ,0 86 .1 5 ± 10 ,4 53 .6 6 ± 8, 16 8. 8 9\n± 75 ,5 80 .9 7\n± 17 1, 05 2. 39\n± 58 ,4 6 3. 4 2 ± 18 ,1 5 1. 49\n± 10 ,2 6 1. 
57\nFR -R\nU 1, 8 01 ,7 8 7 .8 1\n1, 33 8 ,3 0 2 .4 7\n8 83 ,2 7 7. 92\n8 7 5 ,5 4 6 .3 7\n8 9 6 ,2 9 6 .7 0\n2 ,2 28 ,5 37 .8 8\n2 ,1 2 1, 93 1 .7 1\n2, 1 80 ,1 80 .7 7\n2, 29 6 ,5 74 .4 3\n2, 3 72 ,2 14 .0 2\n90 4, 74 3. 86\n1, 09 3, 30 9. 9 5\n83 5, 42 8 .6 9\n78 6, 9 84 .3 1\n77 4, 8 83 .5 6\n1, 5 97 ,3 46 .9 0\n1, 27 8 ,3 50 .9 1\n9 2 7, 3 93 .7 4\n86 0, 6 79 .7 4\n85 1, 83 4 .0 7\n± 3 5, 24 8 .2 7\n± 4 2, 60 9 .7 6\n± 7 0 ,7 9 1 .8 4\n± 5 6 ,5 24 .2 1\n± 6 4 ,7 51 .3 7\n± 16 0, 4 3 4. 9 9 ± 3 36 ,9 80 .3 5 ± 65 6 ,3 45 .5 8 ± 43 6 ,7 72 .4 5 ± 20 5 ,8 78 .2 6\n± 64 ,8 81 .3 0\n± 72 ,9 82 .6 2\n± 31 ,2 6 0. 71\n± 15 ,9 09 .2 4 ± 7 ,0 98 .4 5\n± 84 ,4 3 0. 89\n± 24 7, 3 42 .9 5 ± 6 0 ,7 2 7 .5 1 ± 2 6 ,9 41 .2 2 ± 1 1 ,5 82 .4 1\nFR -Z\nH 1, 2 76 ,3 5 0 .9 9\n1 ,2 8 3 ,5 8 3 .7 0\n1 ,1 2 8 ,3 6 1. 7 4\n1, 1 26 ,4 7 3 .2 8\n1, 1 1 0, 9 46 .6 5\n1 ,6 82 ,1 6 7. 79\n1, 23 2, 59 9 .0 6\n7 95 ,2 1 9 .4 6\n78 9 ,0 61 .6 7\n81 9 ,5 23 .7 7\n79 7 ,1 70 .1 4\n95 3 ,2 02 .6 9\n7 27 ,2 08 .9 7\n6 80 ,3 61 .4 4\n67 3, 97 8. 7 2\n1, 18 8, 32 0. 74\n9 96 ,7 17 .4 4\n7 9 8 ,0 48 .5 8\n69 8, 0 49 .5 3\n69 2, 64 2 .5 2\n± 67 ,0 68 .6 4\n± 5 2 ,9 6 2 .7 6\n± 3 3 ,7 35 .9 6\n± 5 5 ,6 10 .4 5\n± 4 8, 4 88 .0 7\n± 1 20 ,9 9 9. 47\n± 1 4 0, 8 3 3. 43\n± 2 7 ,7 23 .0 8\n± 31 ,8 81 .8 1\n± 11 ,4 93 .7 6\n± 27 ,5 6 7. 8 1\n± 64 ,7 2 5. 30\n± 3 8 ,1 76 .4 7 ± 20 ,5 63 .3 6 ± 13 ,3 6 5. 69\n± 71 ,0 9 7. 38\n± 11 8, 26 0. 78\n± 4 7 ,6 30 .6 1 ± 2 2, 47 8 .4 1 ± 1 3 ,7 09 .2 1\nR U\n-A R\n1 ,5 1 4, 66 0 .4 9\n1 ,2 38 ,1 5 4 .7 6\n8 5 4 ,6 0 5 .6 1\n8 48 ,8 16 .0 2\n7 82 ,2 1 8 .3 2\n1 ,6 5 9 ,4 0 3. 1 9\n1, 6 34 ,2 98 .5 8\n1, 38 9, 9 21 .8 6\n1 ,2 09 ,2 65 .0 5\n1, 03 3 ,7 69 .4 0\n92 4, 30 3. 4 5\n1, 19 3, 23 3. 65\n86 0, 1 03 .4 0\n81 3, 81 6. 48\n78 8, 96 9. 01\n1, 4 14 ,1 05 .5 7\n1 ,3 24 ,3 15 .2 0\n9 4 6, 3 4 5. 1 9\n8 93 ,7 2 5. 53\n86 0, 82 2 .7 3\n± 80 ,2 09 .4 3\n± 11 6 ,1 46 .8 3\n± 5 6 ,7 6 0 .1 9\n± 6 8, 3 3 3 .2 9\n± 15 ,1 7 1. 3 8\n± 9 6 ,8 66 .4 8\n± 3 7 0, 6 34 .4 7 ± 38 5 ,0 05 .3 5 ± 4 97 ,3 97 .1 2 ± 15 9, 22 0 .2 4\n± 47 ,5 92 .0 8 ± 10 6, 07 5. 39\n± 52 ,0 3 7. 99\n± 3 4, 05 0. 28\n± 1 5 ,5 02 .4 8\n± 10 3, 23 8. 28\n± 28 1, 89 7. 9 5 ± 93 ,7 3 1 .5 4 ± 7 6, 92 3. 49\n± 24 ,1 95 .8 0\nR U\n-E N\n1 ,5 41 ,2 8 4. 1 9\n1 ,0 9 9 ,1 2 4 .5 6\n7 86 ,1 71 .9 8\n7 9 6 ,8 2 5 .0 8\n8 3 8 ,2 9 9 .5 1\n1 ,5 38 ,8 6 3 .2 5\n1, 1 5 8, 0 53 .1 2\n7 64 ,1 7 5. 7 7\n78 2 ,6 66 .9 0\n75 7 ,0 0 5. 6 2\n7 94 ,1 9 6. 11\n92 4 ,3 49 .0 4\n74 9 ,9 55 .3 5\n70 7, 04 3. 65\n70 1, 15 4. 52\n1, 29 9 ,4 44 .1 0\n1, 01 8, 14 8 .8 1\n8 11 ,6 14 .4 2\n7 31 ,6 8 7. 21\n72 3, 71 6. 51\n± 30 ,9 75 .3 5\n± 31 ,7 2 0 .9 2\n± 5 4, 77 7 .8 3\n± 6 8, 2 9 5 .5 4\n± 97 ,2 98 .9 3\n± 2 7 ,2 29 .7 1\n± 4 2, 7 67 .5 5\n± 6 3, 2 9 8 .6 4\n± 75 ,3 48 .2 9\n± 82 ,5 5 1. 42\n± 23 ,4 2 5. 16\n± 51 ,0 03 .3 8\n± 3 3 ,2 21 .4 5 ± 12 ,4 23 .9 7 ± 7, 64 5 .5 0\n± 7 6 ,5 72 .1 2\n± 97 ,5 67 .1 1\n± 70 ,6 85 .8 0 ± 1 9 ,9 13 .1 2 ± 10 ,2 10 .8 5\nR U\n-E S\n1, 6 53 ,0 35 .5 3\n1, 1 86 ,6 0 9 .0 0\n83 6 ,9 04 .9 9\n84 4 ,1 07 .6 4\n8 7 0 ,9 93 .1 7\n1 ,6 60 ,0 6 6 .4 5\n1, 2 3 5, 8 09 .7 4\n7 84 ,4 14 .7 1\n84 2 ,1 04 .2 7\n85 5 ,6 56 .7 9\n8 96 ,0 72 .5 2\n1, 06 5 ,3 27 .2 7\n82 9, 51 4. 31\n7 93 ,4 61 .6 1\n7 86 ,1 02 .5 6\n1, 40 4, 52 1. 6 3\n1 ,1 8 1, 84 0. 49\n9 05 ,8 06 .8 9\n8 2 4, 18 7. 74\n81 6, 91 9. 61\n± 21 ,7 99 .0 4\n± 33 ,2 65 .1 2\n± 9 2, 1 54 .8 8\n± 6 9 ,9 17 .6 3\n± 9 0 ,7 5 6. 
2 4\n± 35 ,1 96 .8 9\n± 3 3, 7 13 .1 1\n± 4 8 ,8 71 .4 5\n± 11 7 ,5 98 .9 5\n± 81 ,8 89 .1 1\n± 2 3 ,0 48 .9 8\n± 7 6 ,3 75 .0 4\n± 29 ,9 79 .0 1 ± 10 ,2 24 .6 6 ± 7 ,0 27 .4 6\n± 73 ,6 96 .4 9\n± 22 0 ,2 98 .1 7 ± 83 ,4 2 8 .8 7 ± 18 ,8 36 .4 1 ± 13 ,5 36 .2 5\nR U\n-F R\n1, 67 1 ,9 1 1. 2 4\n1 ,1 48 ,2 37 .2 1\n8 13 ,6 3 8. 0 2\n82 6 ,7 7 0. 8 3\n85 4, 3 81 .9 0\n1 ,6 74 ,9 7 0 .5 4\n1, 18 7 ,4 57 .1 7\n7 6 4 ,8 64 .4 9\n78 7, 98 3 .4 7\n75 8 ,3 3 3. 45\n91 4 ,5 93 .9 8\n1, 02 4, 36 9. 0 2\n85 9, 69 2. 20\n81 9, 75 1. 33\n81 3, 33 7. 85\n1, 42 8, 27 8 .9 2\n1, 20 9, 0 74 .6 2\n93 6, 8 67 .2 6\n84 0, 76 5 .9 9\n8 37 ,6 47 .0 5\n± 3 1, 5 14 .5 3\n± 30 ,3 2 3 .2 0\n± 8 1 ,0 37 .7 3\n± 77 ,2 7 2 .1 1\n± 47 ,1 6 6 .3 8\n± 3 1 ,5 7 7 .2 9\n± 2 6 ,9 23 .7 3\n± 2 3 ,0 61 .6 4\n± 6 0, 11 6 .1 5\n± 56 ,6 54 .0 8\n± 28 ,9 44 .4 6\n± 72 ,8 55 .3 7\n± 27 ,8 1 7. 90\n± 14 ,1 98 .5 4 ± 9, 2 44 .5 3\n± 64 ,9 57 .7 7\n± 41 ,1 32 .7 6\n± 5 6 ,6 28 .7 6 ± 1 7, 0 96 .9 7 ± 14 ,7 3 2. 39\nR U\n-Z H\n1 ,2 4 8 ,2 8 7 .6 1\n1, 33 7 ,5 9 2. 9 0\n1 ,1 3 7 ,3 8 0 .3 2\n1, 1 3 9, 9 40 .1 2\n1, 12 7, 3 0 6. 9 5\n1 ,5 7 1 ,1 41 .8 4\n1, 1 94 ,3 4 4 .9 8\n79 2 ,9 07 .7 0\n79 5 ,9 99 .2 6\n80 2 ,6 00 .3 3\n79 1 ,7 37 .8 0\n9 38 ,4 18 .2 8\n72 5 ,7 15 .6 5\n67 6, 9 71 .7 7\n67 1, 00 4 .6 4\n1, 1 70 ,3 10 .6 1\n1 ,0 83 ,2 9 5. 36\n7 92 ,1 7 0. 5 4\n6 98 ,5 38 .3 5\n69 2, 61 0. 20\n± 81 ,8 3 5. 0 9\n± 89 ,8 4 9 .9 4\n± 44 ,2 3 1 .2 3\n± 4 4, 10 6 .7 5\n± 2 6, 3 05 .4 7\n± 77 ,1 4 5 .4 0\n± 1 13 ,4 3 0 .9 7\n± 2 4, 98 2 .7 1\n± 37 ,6 62 .5 8\n± 34 ,0 68 .5 3\n± 2 7, 00 3 .0 1\n± 71 ,3 92 .0 1\n± 29 ,5 8 4. 8 8 ± 12 ,4 3 5. 61\n± 10 ,2 50 .3 6\n± 2 9 ,2 90 .2 4\n± 13 6 ,6 02 .3 0 ± 5 2 ,2 3 8. 92\n± 18 ,7 0 9. 40\n± 11 ,8 36 .8 6\nZ H\n-A R\n1, 52 0 ,1 3 3 .5 5\n1 ,2 5 1, 84 4 .5 4\n9 01 ,0 0 3. 6 5\n8 7 4 ,4 1 9 .3 6\n8 49 ,6 6 7 .8 6\n1 ,6 2 4, 2 09 .8 5\n1, 45 7 ,7 4 4. 8 0\n1 ,2 38 ,0 7 5. 32\n1, 0 17 ,9 71 .8 5\n89 2 ,8 07 .0 5\n91 5 ,2 09 .0 6\n1, 1 93 ,4 26 .5 0\n86 7, 6 44 .6 9\n81 4, 59 9. 04\n78 8, 13 3. 05\n1, 42 0, 11 8 .8 3\n1, 38 7, 88 3 .5 1\n9 4 7, 5 9 5. 38\n89 1, 41 1. 12\n85 8, 11 0. 42\n± 83 ,3 0 7 .0 6\n± 11 9 ,3 3 1 .9 1\n± 1 6 ,8 2 6 .0 4\n± 4 9 ,2 41 .5 3\n± 5 5 ,5 3 7 .7 2\n± 42 ,1 59 .9 8\n± 1 41 ,3 0 3 .1 0 ± 2 0 4, 1 95 .1 2 ± 13 7 ,5 3 3. 7 6 ± 10 0 ,5 70 .1 0\n± 32 ,3 8 0. 9 6 ± 10 0, 93 3. 92\n± 6 1, 38 5. 13\n± 3 3 ,9 01 .6 6 ± 14 ,2 47 .7 9\n± 12 4, 89 6. 7 3 ± 2 46 ,9 16 .7 7 ± 9 5 ,2 0 7 .4 9 ± 7 5 ,3 78 .1 4 ± 20 ,9 57 .9 7\nZ H\n-E N\n1, 56 2 ,2 1 0 .2 5\n1 ,1 68 ,1 66 .6 5\n81 4 ,2 23 .1 7\n7 70 ,7 0 5. 1 6\n7 7 9, 5 4 0. 8 2\n1 ,5 51 ,5 13 .4 8\n1, 14 1 ,6 88 .2 9\n7 9 8 ,0 52 .3 1\n7 62 ,0 11 .9 2\n8 09 ,9 86 .0 5\n82 1 ,9 4 4. 5 8\n8 98 ,7 8 2. 12\n73 9 ,5 9 5. 7 0\n70 5, 83 7. 56\n6 99 ,2 72 .8 6\n1, 30 7, 17 5. 00\n1, 00 9, 10 6. 8 8\n8 1 0, 54 2 .1 2\n73 2, 93 0. 41\n72 0, 82 7 .3 2\n± 1 5, 27 2 .7 9\n± 4 2 ,4 53 .6 9\n± 5 0 ,8 75 .3 9\n± 3 0 ,9 58 .0 9\n± 5 4, 17 3 .2 9\n± 28 ,7 2 6. 7 5\n± 43 ,5 8 3 .8 3\n± 75 ,5 3 0 .1 7\n± 54 ,9 89 .0 5\n± 54 ,0 07 .8 6\n± 25 ,3 7 8. 46\n± 3 5, 48 1 .9 2\n± 2 6 ,9 88 .7 4 ± 1 0 ,6 92 .0 2 ± 9, 41 7. 54\n± 91 ,6 59 .8 1\n± 64 ,1 0 0. 34\n± 53 ,3 0 9 .7 5 ± 2 3, 10 7 .2 3 ± 1 0 ,2 43 .0 6\nZ H\n-E S\n1 ,6 69 ,6 28 .6 9\n1, 25 7 ,7 03 .2 2\n8 7 3 ,5 7 9. 22\n8 0 5, 2 16 .8 8\n7 67 ,1 72 .7 7\n1 ,6 71 ,7 1 7. 6 0\n1, 2 0 7, 19 4 .9 8\n8 27 ,0 3 7. 5 9\n8 15 ,6 60 .8 5\n84 4 ,4 27 .4 0\n89 7 ,3 03 .5 2\n1, 06 4 ,0 57 .7 8\n8 35 ,4 22 .1 5\n7 92 ,7 67 .5 1\n79 0, 34 4. 6 6\n1, 41 3, 77 1. 04\n1 ,2 1 9, 64 4. 7 7\n8 7 3, 8 5 5. 
62\n8 2 3, 3 66 .5 0\n81 7, 53 6. 03\n± 32 ,9 9 0 .2 3\n± 45 ,7 2 0 .3 3\n± 50 ,8 3 6 .7 3\n± 5 5, 92 7 .9 1\n± 2 9, 21 1 .8 5\n± 4 4 ,9 63 .4 4\n± 30 ,3 0 9 .2 8\n± 5 0, 52 9 .7 6\n± 52 ,7 44 .7 3\n± 75 ,2 01 .0 8\n± 36 ,8 6 0. 2 9\n± 71 ,6 3 1. 03\n± 3 1 ,1 04 .3 5 ± 10 ,7 98 .3 2 ± 9, 00 3 .4 9\n± 89 ,1 7 3. 36\n± 14 4, 55 2. 08\n± 38 ,1 7 5. 84\n± 18 ,5 7 7. 96\n± 11 ,5 50 .0 0\nZ H\n-F R\n1 ,6 95 ,4 8 4 .3 4\n1 ,2 4 8 ,5 9 5 .7 8\n8 47 ,4 61 .9 1\n7 6 6 ,0 4 7 .7 6\n7 2 2 ,9 3 8 .1 0\n1 ,6 9 7, 71 4 .4 9\n1, 19 1 ,4 18 .1 3\n8 1 5 ,4 63 .3 0\n80 8 ,3 8 8 .7 8\n82 7 ,6 21 .8 0\n92 8 ,0 38 .1 5\n1, 02 8, 00 4. 46\n85 4, 6 56 .5 5\n81 8, 51 3 .5 3\n81 4, 32 0 .5 4\n1, 4 38 ,4 93 .6 8\n1, 20 1, 8 53 .7 3\n91 5, 60 3 .5 3\n8 43 ,5 13 .4 5\n83 4, 85 7 .8 7\n± 27 ,1 0 7 .0 5\n± 4 8 ,3 2 1 .7 9\n± 41 ,0 7 7. 1 7\n± 41 ,1 6 2. 8 5\n± 32 ,1 8 2 .3 7\n± 2 3 ,2 05 .9 4\n± 46 ,5 2 9. 3 5\n± 69 ,1 39 .0 2\n± 45 ,4 59 .6 7\n± 39 ,3 7 0. 8 9\n± 30 ,1 93 .2 1\n± 65 ,0 07 .3 7\n± 26 ,5 72 .3 9 ± 14 ,2 0 1. 0 3 ± 13 ,2 7 7. 1 6\n± 77 ,0 39 .6 1\n± 13 7 ,3 43 .0 7 ± 54 ,1 0 4 .6 8 ± 21 ,7 07 .5 1 ± 1 4, 23 4 .1 3\nZ H\n-R U\n1, 81 3 ,8 30 .2 7\n1 ,3 81 ,6 36 .9 3\n9 22 ,3 2 8 .2 7\n8 2 5 ,8 7 8 .0 6\n7 9 3 ,5 0 0 .1 1\n2 ,2 63 ,1 12 .0 4\n2 ,3 3 2, 00 8 .3 5\n2, 0 41 ,3 71 .9 1\n2, 19 0 ,2 63 .9 8\n1, 5 31 ,6 60 .2 9\n89 8, 11 6 .3 1\n1, 09 0, 95 7. 6 8\n83 1, 15 4. 56\n78 5, 88 4. 85\n77 7, 64 2. 70\n1, 59 8, 14 7 .9 5\n1, 27 0, 7 65 .1 8\n92 5, 8 64 .5 3\n86 0, 6 78 .5 2\n84 9, 35 9 .5 8\n± 2 5, 06 2 .5 2\n± 4 6, 71 1 .3 5\n± 4 1 ,9 0 1 .7 2\n± 4 7 ,6 65 .7 5\n± 4 4 ,1 28 .6 1\n± 16 7, 3 5 6. 2 3 ± 2 98 ,7 47 .5 4 ± 52 4, 8 29 .5 9 ± 46 7 ,4 25 .1 2 ± 52 6 ,2 43 .8 4\n± 4 3 ,4 71 .2 5\n± 73 ,7 82 .6 3\n± 32 ,6 2 7. 39\n± 14 ,8 83 .9 2 ± 8 ,0 04 .7 7\n± 94 ,0 2 3. 22\n± 26 2, 5 27 .0 8 ± 5 0 ,8 4 5 .8 0 ± 2 7 ,5 16 .2 6 ± 8, 8 82 .6 0\n10 0\n1 ,0 00\n10 ,0 00\n10 0 ,0 00\n1 ,0 00 ,0 00\nA R\n-Z H\n_p in\nyi n\n1, 24 4, 71 9. 00\n9 64 ,2 0 6. 32\n84 4 ,1 87 .8 5\n84 7 ,8 90 .6 9\n84 3 ,6 79 .5 2\n± 19 ,4 0 4. 8 9\n± 62 ,4 4 7. 97\n± 33 ,7 7 7. 37\n± 37 ,1 72 .7 5\n± 46 ,6 88 .2 8\nE N\n-Z H\n_p in\nyi n\n1, 27 4 ,3 2 6. 3 8\n93 5 ,6 34 .7 0\n8 97 ,9 91 .3 1\n84 8 ,4 01 .4 7\n84 4 ,4 40 .4 3\n± 6, 44 6. 7 3\n± 63 ,1 39 .6 7\n± 89 ,8 66 .1 1\n± 34 ,8 86 .1 2\n± 31 ,0 96 .2 0\nE S-\nZ H\n_p in\nyi n\n1, 2 23 ,8 92 .6 7\n94 7 ,0 71 .2 3\n84 4 ,4 16 .8 8\n90 2 ,1 42 .5 1\n84 2 ,9 15 .1 0\n± 2 4 ,2 86 .3 7\n± 71 ,6 69 .7 8\n± 31 ,4 7 9. 07\n± 61 ,2 95 .2 7\n± 32 ,9 26 .8 2\nFR -Z\nH _p\nin yi\nn 1, 23 7, 6 45 .5 1\n9 40 ,6 9 6. 08\n86 9 ,3 4 5. 88\n88 0 ,9 28 .8 5\n82 2 ,5 83 .4 8\n± 4 0, 66 8 .8 3\n± 6 4, 68 6 .9 3\n± 15 ,2 72 .3 9\n± 47 ,1 30 .0 9\n± 54 ,5 68 .1 8\nR U\n-Z H\n_p in\nyi n\n1, 22 8, 5 49 .6 7\n9 57 ,2 08 .3 4\n87 4 ,6 4 9. 15\n85 5 ,2 46 .7 8\n82 7, 62 0 .6 5\n± 3 9, 35 5 .5 3\n± 9 1, 71 5 .8 1\n± 46 ,1 76 .2 2\n± 26 ,9 53 .5 5\n± 35 ,3 29 .2 5\nA R\n-Z H\n_w ub\ni 1, 40 5, 26 5. 49\n1, 57 3 ,5 2 3. 54\n1, 4 78 ,0 60 .7 0\n1 ,5 34 ,5 23 .8 5\n1 ,5 66 ,8 62 .1 4\n± 24 ,1 7 7. 9 3\n± 1 59 ,5 19 .2 4 ± 20 0, 1 25 .3 6 ± 10 5 ,8 07 .5 2\n± 77 ,3 96 .4 0\nE N\n-Z H\n_w ub\ni 1, 43 5 ,3 0 5. 5 5\n1, 5 70 ,8 71 .9 0\n1 ,5 4 2, 5 64 .6 2\n1 ,6 21 ,8 05 .8 9\n1, 54 1 ,4 17 .8 2\n± 52 ,1 71 .9 1\n± 10 0, 3 92 .4 5 ± 15 5 ,5 3 4. 95\n± 11 0 ,0 98 .1 0 ± 13 5 ,4 53 .1 7\nE S-\nZ H\n_w ub\ni 1, 41 8, 6 76 .3 1\n1, 69 0 ,0 4 0. 92\n1, 3 68 ,2 51 .2 2\n1 ,6 01 ,9 82 .5 1\n1 ,5 75 ,4 31 .3 6\n± 10 ,0 9 4. 
8 2\n± 4 52 ,3 06 .1 4\n± 15 ,4 57 .4 4\n± 56 ,5 11 .5 4\n± 17 7 ,7 31 .2 7\nFR -Z\nH _w\nub i\n1, 43 9 ,9 4 5. 68\n1, 4 99 ,2 77 .0 3\n1 ,4 6 6, 5 36 .1 8\n1 ,5 28 ,8 45 .7 9\n1, 50 0 ,8 04 .9 4\n± 41 ,6 47 .3 8\n± 14 7, 2 49 .1 0\n± 73 ,6 5 5. 57\n± 16 4 ,4 14 .4 7 ± 16 4 ,5 99 .2 1\nR U\n-Z H\n_w ub\ni 1, 4 30 ,6 59 .3 7\n1 ,5 5 4, 0 48 .8 8\n1 ,4 52 ,2 4 9. 13\n1 ,4 84 ,5 55 .9 6\n1 ,4 73 ,1 14 .5 8\n± 56 ,2 90 .0 4\n± 14 4 ,2 0 0. 65\n± 5 0 ,5 14 .3 8\n± 19 1 ,4 29 .1 8 ± 18 3 ,7 23 .1 0\nE N\n-A R\n_c p1\n25 6\n1, 46 5 ,4 8 0. 7 3\n1, 1 26 ,4 23 .7 8\n81 9 ,1 98 .8 3\n89 5 ,1 39 .9 0\n84 7 ,4 25 .2 6\n± 50 ,4 9 2. 4 6\n± 1 37 ,5 98 .5 8\n± 39 ,1 91 .9 1\n± 47 ,5 55 .7 8\n± 54 ,3 55 .6 0\nE S-\nA R\n_c p1\n25 6\n1, 4 67 ,0 64 .4 8\n1 ,1 0 3, 8 60 .6 5\n8 22 ,5 23 .1 9\n86 5 ,0 80 .0 6\n80 7 ,2 04 .2 5\n± 68 ,7 19 .7 3\n± 15 7, 0 00 .7 0\n± 48 ,1 0 8. 48\n± 85 ,9 01 .5 8\n± 13 ,6 69 .7 3\nFR -A\nR _c\np1 25\n6 1, 45 3 ,8 5 0. 3 3\n1, 1 07 ,3 06 .5 0\n81 1 ,4 12 .7 8\n83 5 ,1 93 .5 9\n81 8 ,6 08 .8 4\n± 71 ,9 78 .4 5\n± 16 4, 3 97 .0 1\n± 50 ,2 9 6. 08\n± 35 ,4 89 .1 1\n± 23 ,6 76 .9 4\nR U\n-A R\n_c p1\n25 6\n1, 44 3, 1 81 .7 9\n1, 16 6 ,9 1 5. 22\n78 9 ,2 71 .0 3\n77 4 ,2 84 .6 6\n79 6 ,7 88 .1 7\n± 5 9 ,3 66 .2 4\n± 11 8 ,8 6 2. 54\n± 8 8 ,8 02 .2 5\n± 80 ,7 21 .5 8\n± 70 ,3 99 .4 6\nZ H\n-A R\n_c p1\n25 6\n1, 4 49 ,0 43 .7 8\n1 ,1 4 2, 3 24 .0 9\n8 57 ,4 47 .9 4\n80 4 ,9 67 .7 0\n77 8 ,6 55 .5 0\n± 6 0 ,4 12 .5 9\n± 15 4 ,6 1 8. 66\n± 4 9 ,9 44 .5 9\n± 49 ,6 61 .8 3\n± 6, 40 6 .7 3\nA R\n-R U\n_c p1\n25 1\n2, 03 3 ,8 4 8. 3 5\n1, 4 33 ,3 60 .2 5\n87 0 ,3 50 .5 6\n78 9 ,4 82 .9 1\n81 4 ,3 19 .1 3\n± 1 66 ,6 96 .0 8 ± 10 3, 7 44 .5 3 ± 10 4 ,2 4 1. 98\n± 3, 27 2 .5 9\n± 59 ,5 19 .6 6\nE N\n-R U\n_c p1\n25 1\n1, 80 5, 6 22 .9 0\n1, 30 3 ,1 5 8. 16\n86 6 ,8 7 8. 02\n91 6 ,4 41 .1 9\n1 ,0 08 ,6 44 .1 3\n± 29 ,4 1 2. 4 8\n± 40 ,2 2 0. 95\n± 5 0 ,4 82 .7 9\n± 57 ,2 77 .4 6\n± 12 1 ,6 22 .3 4\nE S-\nR U\n_c p1\n25 1\n1, 79 0, 4 25 .6 5\n1, 38 6 ,7 0 0. 30\n1 ,0 18 ,9 09 .5 5\n86 5 ,5 20 .6 1\n1 ,2 70 ,1 42 .1 4\n± 3 3, 81 8 .4 0\n± 14 5, 6 29 .1 5 ± 18 9 ,9 3 4. 38\n± 43 ,6 94 .6 6\n± 63 5 ,6 45 .0 1\nFR -R\nU _c\np1 25\n1 1, 78 6, 5 97 .2 9\n1, 37 0 ,5 2 2. 14\n89 1 ,5 3 7. 51\n90 3 ,3 89 .3 8\n95 5 ,5 77 .8 0\n± 31 ,5 5 1. 71\n± 68 ,2 4 8. 46\n± 7 6 ,4 92 .2 8\n± 84 ,8 56 .4 8\n± 11 6 ,1 58 .0 1\nZ H\n-R U\n_c p1\n25 1\n1, 7 86 ,6 34 .0 7\n1 ,3 4 3, 0 79 .4 6\n8 94 ,8 79 .7 9\n90 5 ,8 25 .0 7\n85 8 ,4 40 .4 1\n± 21 ,3 2 6. 49\n± 42 ,1 2 8. 54\n± 6 6 ,5 48 .0 0\n± 52 ,2 82 .9 1\n± 21 ,3 68 .0 0\nA R\n_c p1\n25 6-\nR U\n_c p1\n25 1\n1, 82 4, 62 3. 51\n1, 37 2 ,5 4 6. 79\n88 2 ,3 42 .4 3\n88 3 ,4 93 .7 0\n93 5 ,9 69 .3 1\n± 56 ,8 5 9. 7 7\n± 1 25 ,1 15 .3 1\n± 54 ,3 41 .1 3\n± 95 ,5 77 .2 4\n± 98 ,7 65 .6 5\nR U\n_c p1\n25 1-\nA R\n_c p1\n25 6\n1, 4 57 ,0 56 .2 4\n1 ,1 5 0, 9 43 .3 0\n8 56 ,6 51 .6 7\n80 0 ,1 29 .5 2\n86 0 ,4 62 .6 6\n± 61 ,3 87 .3 2\n± 12 4, 7 07 .9 9\n± 24 ,1 5 9. 92\n± 48 ,9 24 .1 4\n± 80 ,1 05 .8 3" }, { "heading": "F CORRELATION STATISTICS", "text": "Best correlating metrics, i.e. the union of top 3 metrics for all representations. For each representation, the top 3 metrics are boldfaced. 
All correlations are highly significant (p < 10^-30), except for min source length for WORD (p ≈ 0.0001) and min target length for WORD (p ≈ 0.3861).

Metric CHAR Pinyin Wubi BYTE ARRUt ARRUs,t WORD BPE
minimum length (target) 0.84 0.85 0.86 0.60 0.84 0.84 −0.02 0.65
minimum length (source) 0.82 0.84 0.85 0.57 0.84 0.84 0.10 0.64
number of tokens (source) −0.78 −0.81 −0.82 −0.60 −0.81 −0.81 −0.59 −0.83
TTR (target) 0.83 0.83 0.84 0.48 0.81 0.81 0.61 0.83
|V| (source) −0.54 −0.51 −0.51 −0.50 −0.67 −0.68 −0.63 −0.86
data size in lines −0.80 −0.83 −0.83 −0.59 −0.81 −0.81 −0.62 −0.86
OOV token rate (target) 0.69 0.66 0.66 0.47 0.67 0.68 0.66 0.62
OOV type rate (target) 0.70 0.71 0.72 0.47 0.69 0.70 0.65 0.62
TTR (source) 0.67 0.71 0.71 0.60 0.81 0.81 0.56 0.82

The full list of metrics used for the correlation analysis is:
1. minimum length (source),
2. minimum length (target),
3. maximum length (source),
4. maximum length (target),
5. median length (source),
6. median length (target),
7. mean length (source),
8. mean length (target),
9. length std (source),
10. length std (target),
11. data size in lines,
12. number of parameters,
13. number of types (|V|) (source),
14. number of types (|V|) (target),
15. number of tokens (source),
16. number of tokens (target),
17. type-token-ratio (TTR) (source),
18. type-token-ratio (TTR) (target),
19. OOV type rate (source),
20. OOV type rate (target),
21. OOV token rate (source),
22. OOV token rate (target),
23. token ratio,
24. target type-to-parameter ratio,
25. target token-to-parameter ratio,
26. distance between the TTRs of source and target = (1 − TTRsrc/TTRtrg)^2,
27. token-to-parameter ratio (i) = (median length source * median length target * num_lines) / num_parameters,
28. token-to-parameter ratio (ii) = (num_source_tokens * num_target_tokens) / num_parameters.
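The metrics above can be recomputed directly from the raw data splits. The sketch below is a minimal illustration, assuming the train and dev sides are given as lists of pre-tokenized lines and that the model parameter count is known; the function and variable names are ours, not the paper's scripts, and the OOV rates are computed as the share of dev types/tokens unseen in training, which is the usual reading of the table headers.

```python
# Minimal sketch (not the paper's code) of the per-side statistics and the
# derived pair-level metrics listed above; lines are assumed pre-tokenized.
from statistics import median

def side_stats(train_lines, dev_lines):
    train_tokens = [tok for line in train_lines for tok in line.split()]
    dev_tokens = [tok for line in dev_lines for tok in line.split()]
    train_vocab = set(train_tokens)
    dev_types = set(dev_tokens)
    oov_types = dev_types - train_vocab
    lengths = [len(line.split()) for line in train_lines]
    return {
        "min_len": min(lengths),
        "median_len": median(lengths),
        "num_types": len(train_vocab),
        "num_tokens": len(train_tokens),
        "ttr": len(train_vocab) / len(train_tokens),
        "oov_type_rate": len(oov_types) / len(dev_types),
        "oov_token_rate": sum(tok in oov_types for tok in dev_tokens) / len(dev_tokens),
    }

def pair_metrics(src, trg, num_lines, num_parameters):
    # src / trg are the dicts returned by side_stats for the two sides.
    return {
        "data_size_in_lines": num_lines,
        "ttr_distance": (1 - src["ttr"] / trg["ttr"]) ** 2,        # metric 26
        "token_param_ratio_i": src["median_len"] * trg["median_len"]
                               * num_lines / num_parameters,       # metric 27
        "token_param_ratio_ii": src["num_tokens"] * trg["num_tokens"]
                                / num_parameters,                  # metric 28
    }
```

The correlations reported above can then be obtained by pairing each metric with the corresponding number of bits to encode the dev data, e.g. via scipy.stats.pearsonr.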
" }, { "heading": "G ENLARGED FIGURES FOR ALL 30 LANGUAGE DIRECTIONS (AGGREGATE RESULTS FROM ALL RUNS)", "text": "[Figures only.]
H SAMPLE FIGURES FROM RUN A0, ALSO SORTED BY SOURCE LANGUAGE FOR CONTRAST
[Figures only.]" }, { "heading": "I LANGUAGE PAIRS WITH SIGNIFICANT DIFFERENCES", "text": "15 (non-directional) language pairs in total are possible from the 30 language directions; p = 0.001.

LANG PAIR CHAR Pinyin Wubi BYTE ARRUt ARRUs,t WORD BPE
AR-EN X X X
AR-ES
EN-ES X
AR-FR X
EN-FR X X
ES-FR
AR-RU X
EN-RU X X X X
ES-RU X
FR-RU X
AR-ZH X X X X X
EN-ZH X X
ES-ZH X X X
FR-ZH X X X X
RU-ZH X X X X X

Language pairs with significant differences indicate that the 2 languages are not equally/similarly good or equally/similarly bad.
• Character models with ZH behave differently, but the disparity can be eliminated with Pinyin.
• Byte models with AR and RU exhibit unstable performance due to length, but this can be rectified with compression on the target side only (ARRUt).
• Word-based models, including BPE, however, consistently favor EN and ZH (though it is more of a "mis-segmentation" for the latter, see § 3 and Appendix J) and disfavor AR and RU (as morphologically complex languages with higher OOV rates)." }, { "heading": "J LANGUAGE COMPLEXITY", "text": "In the words of Bentz et al. (2016):
Languages are often compared with regard to their complexity from a computational, theoretical and learning perspective. In computational linguistics, it is generally known that methods mainly developed for the English language do not necessarily transfer well to other languages. The cross-linguistic variation in the amount of information encoded at the level of a word is, for instance, recognized as one of the main challenges for multilingual syntactic parsing (formulated as The Architectural Challenge (Tsarfaty et al., 2013)). Complexity of this kind is also found to influence machine translation: translating from morphologically rich languages into English is easier than the other way around (Koehn, 2005).
*****
Morphology is "the study of the formation and internal structure of words". Morphemes are "the smallest meaningful units of language". (Bender, 2013)
*****
AR and RU are traditionally considered morphologically complex (see e.g. Minkov et al. (2007), Seddah et al. (2010) and proceedings of related workshops in subsequent years), and ZH lacking morphological richness (Koehn, 2005). But this definition of morphology is predicated on the notion of word, defined primarily from an alphabetic perspective. As pointed out by Zhang & Komachi (2018), "the important differences between logographic and alphabetic writing systems have long been overlooked". In logographic languages (i.e. languages with logographic scripts), there can be units within a character that carry semantic and phonetic information that have never been accounted for in the traditional practice of morphology or in the computation of morphological complexity. For example, in the comparison of different morphological complexity measures by Bentz et al. (2016), all measures studied are defined with the notion of word.[8] Yet, there is no universally valid definition of a "word" — the form/idea (as in, the philosophical concept) of a "word" may be there for most languages/cultures (though that is certainly also debatable), but its instantiations are different in different languages/cultures, as well as in different genres/settings within one language. The variability in the definition of word is evident in the variation in language-specific word tokenization algorithms, along with the "indeterminacy of word segmentation" or a work-in-progress status for the definition of "word" advocated by Haspelmath (2011), as well as the contested nature of wordhood, esp. for logographic languages such as ZH (see Duanmu (2017) and Li et al. (2019b) for how some ZH speakers do indeed consider a ZH character to be a word, or how "word", as conventionally used in NLP, is not a native term or does not correspond with speakers' judgement).
[8] An exception could be that of the type/token ratio (TTR). One could imagine applying TTR on the character level for ZH, and that would be indicative of its morphological richness on the character level. However, that has thus far never been practiced or recognized in NLP.
Our results with the Transformer indicate that a notion of morphological complexity can be modeled given our word tokenization scheme, confirming that morphological complexity is only predicated on the notion of word and bounded within the word level, and orthogonal to the performance of character or byte models. That is, unless word-based segmentation has been applied, there is no reason to attribute crosslinguistic performance disparity to differences in morphological complexity. In fact, on the character and byte level, we were able to achieve performance without disparity.
Hence disparity is not a necessary condition but an expectation that has been in mutual reinforcement with our practice of word segmentation, while the definitions of "morphological complexity" and "word" are in a circular dependency with each other.
In this paper, we resolve language complexity, more specifically that of morphological complexity, in the context of computing through CLMing with the Transformer, in that we explain away the representation granularities and criteria relevant for such calculation.
TLDR: Up to the point of our taking up the subject of language complexity in this paper, there has not been a rigorous definition of "language complexity". Conventionally, "language complexity" is synonymous with "linguistic complexity" (with the tradition of "linguistics" being primarily word-based), and people just assume linguistic complexity, e.g. morphological/syntactic complexity, to be intrinsic and necessary in languages (across representation levels). Our findings show that linguistic complexity is relative to the representation granularity, i.e. since morphology is based on words, it is bounded to the word level.
*****
An alternative perspective, with finer print:
We have also developed a more rigorous interpretation. We take on the definition of "language complexity" as one that is related to the statistical attributes of languages. We assume and define solving as the elimination of statistically significant performance disparity.
In larger (6-layer) models, and according to the conventional definition of "language" — i.e. language as a whole, we solved language complexity with compression of AR and RU in byte representations. In smaller (1-layer) models, one can think of the situation as: i) no complexity has been modeled by the Transformer, hence there is nothing to solve, or ii) there is no complexity between these languages to begin with, or iii) the Transformer solved the complexity.
With respect to each representation level/granularity in the larger models:
• BYTE: one can think of us as having solved complexity with byte representations or with 1-layer models — for these 6 languages empirically. Theoretically, there could be languages with longer sequence lengths than RU and AR; in those cases, we don't claim to have solved the matter empirically but only resolved it conceptually. But this is the most that anyone could do at the moment, as there is no relevant parallel data available.
• CHARACTER: one can think of us as having solved it via bytes or 1-layer models. Whether we can be considered to have solved it via Pinyin for ZH depends on whether the evaluator accepts that decomposition into a phonetic representation alone qualifies as a solution for the ZH language.
• WORD: one can think of us as having solved it via bytes or 1-layer models. It is not possible to solve it strictly within the word level without creating word segmentation criteria that would be unrelatable to native speakers. And since "word" is exclusively a human concept, we must either claim that a universal solution is undefined or undefinable for computing, or retreat to a unit that is the greatest common factor crosslinguistically.
Since some ZH speakers consider ZH characters as words, we return to the character-level solution.
It is beyond the scope of our paper to solve the qualitative disparity on the word level. However, we do advocate a more inclusive evaluation and critical reflection on the possibility of discontinuing the usage of "word", as such a non-technical term biases against both "morphologically complex" and "morphologically simple" languages. The world of languages in written form can be divided into those with logographic scripts and those with (phonetic) alphabetic ones, with the unit of character being the greatest common factor of them all, from the human perspective. For technical processing, esp. for fair multilingual sequence-to-sequence modeling with the Transformer, we recommend measures that are more standardized, such as those based on bytes or characters. There is room for improvement in the design of character encodings that complement the statistical profiles of different languages, e.g. their relative rank in sequence length. We believe there is crosslinguistic systematicity on the character level to be leveraged.
One's readiness to accept this as a solution to language complexity can be a subjective matter. One may insist that language complexity be solved exclusively with monolingual LMing (which lies outside the scope of the present work), instead of being confounded with the logic of one language being conditional on another. One may also object to the idea of (re-)solving morphological complexity being equivalent to or leading to solving language complexity as a whole, for there could also be e.g. syntactic complexity (although, as substantial "information concerning syntactic units and relations is expressed at word level" in morphologically rich languages (Tsarfaty et al., 2010), the boundary between morphology and syntax is less distinct for some languages than others (Haspelmath, 2011)). If, however, our results could be extended, we wonder if syntactic complexity could be due to our sentence segmentation or a combination of word and sentence segmentation practice. That we leave for future work for those who are interested in the topic." }, { "heading": "K SAMPLE-WISE DOUBLE DESCENT (DD)", "text": "K.1 OUR EXPERIMENTAL FRAMEWORK ON DD DATASETS FROM (NAKKIRAN ET AL., 2020)
Text experiments from previous work reporting sample-wise DD involved words (Belkin et al., 2019) and BPEs (Nakkiran et al., 2020).
We applied our experimental framework — by testing data points with 10^n lines — on the datasets reported in (Nakkiran et al., 2020) to exhibit DD. WMT'14 EN-FR [9] was reported to demonstrate model-wise DD, and IWSLT'14 (Cettolo et al., 2012) DE-EN model-wise and sample-wise DD. We downloaded and prepared the data with scripts [10] from the FAIRSEQ Toolkit (Ott et al., 2019). The WMT data was preprocessed with 40,000 BPE operations and the IWSLT data with 10,000. Our focus is on sample-wise DD, and hence our goal was to see if the spike at 10^3 we observed with the UN data would also apply to these datasets. We used the same training regime [11] with the Transformer and Adam on SOCKEYE as before and tested both language directions on the entirety of both datasets, with no subsampling. For the IWSLT dataset, we tested data sizes of 10^2 to 10^5 lines, then at 160,239 lines, as that is the total number of lines available. For the WMT dataset, we tested from 10^2 to 10^7 lines, then at 35,762,532. This shows that the effect we reported in § 5 also holds on these datasets: "the ratio of target training token count to number of parameters falls into O(10^-4) for 10^2 lines, O(10^-3) at 10^3, O(10^-2) at 10^4, and O(10^-1) for 10^5 lines and so on".
[9] http://www.statmt.org/wmt14/translation-task.html
[10] https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-wmt14en2fr.sh and https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-iwslt14.sh
[11] max-seq-len 300; checkpoint-frequency 4000, except for cases where 50 epochs would be reached before the first checkpoint: 400 for 10^2 lines and 3450 for 10^3 lines.
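As a concrete illustration of this setup, the sketch below builds the 10^n-line training subsets by taking the leading lines of a parallel corpus and reports the target-token-to-parameter ratio quoted above for each size. It is a minimal sketch under stated assumptions: the file names, the choice of taking the leading lines, and the fixed parameter count are illustrative placeholders rather than the paper's actual scripts or configuration.

```python
# Illustrative sketch (not the paper's scripts): build 10^n-line subsets of a
# parallel corpus and report the target-token-to-parameter ratio per size.
from itertools import islice

def make_subset(src_path, trg_path, n_lines, out_prefix):
    """Write the first n_lines parallel lines to out_prefix.src / out_prefix.trg."""
    with open(src_path, encoding="utf-8") as fs, \
         open(trg_path, encoding="utf-8") as ft, \
         open(f"{out_prefix}.src", "w", encoding="utf-8") as out_s, \
         open(f"{out_prefix}.trg", "w", encoding="utf-8") as out_t:
        for s, t in islice(zip(fs, ft), n_lines):
            out_s.write(s)
            out_t.write(t)

def target_token_param_ratio(trg_path, n_lines, num_parameters):
    with open(trg_path, encoding="utf-8") as f:
        n_tokens = sum(len(line.split()) for line in islice(f, n_lines))
    return n_tokens / num_parameters

# Hypothetical IWSLT'14 DE-EN file names and an assumed parameter count.
for n_lines in [10**n for n in range(2, 6)] + [160_239]:
    make_subset("train.de", "train.en", n_lines, f"train.{n_lines}")
    ratio = target_token_param_ratio("train.en", n_lines, num_parameters=44_000_000)
    print(n_lines, f"{ratio:.1e}")
```

Each tenfold increase in the number of lines increases the ratio by roughly an order of magnitude, which is the pattern described in the quoted passage.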
K.2 TOKEN-TO-PARAMETER RATIO FOR NON-NEURAL MONOLINGUAL LMS
We also experimented with KenLM (Heafield, 2011; Heafield et al., 2013), a non-neural LM with modified Kneser-Ney smoothing (Kneser & Ney, 1995; Chen & Goodman, 1999), on our dataset A and found that, on the word level, such a spike (or a hump) is common across all languages (see Figure 17). The target-token-to-parameter ratio is under 1 for most of these smaller data sizes. This seems related to the analytical findings in Opper et al. (1990), where the pseudo-inverse solution to a simple learning problem was shown to exhibit non-monotonicity, with the peak occurring exactly as the ratio of data to parameters (α) approaches 1.
The number of parameters of a k-gram model is the number of unique n-grams, 1 ≤ n ≤ k. Table 4 shows the ratios for our trigram model (all n-gram models of higher order exhibit the same effect).
On the word level, where the function of number of bits to data size is not always monotonic, we observe a less monotonic development whenever the token-to-parameter ratio is smaller than 1. This is most notably shown in the first 4 sizes in AR, with a hump-like curve before the performance improves at 10^6 lines. This is different from the sharper descent for ES and FR, where only the first two data sizes have a non-monotonic relationship and a token-to-parameter ratio less than 1. Taking the token-to-parameter ratio as a rough proxy for over- (< 1) and under-parameterization (> 1), this can be seen as an instance of non-monotonicity with respect to data size in the "critical regime", i.e. when the model transitions from being (heavily) over- to under-parameterized (Belkin et al., 2019; Nakkiran, 2019).
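The parameter count defined here can be reproduced directly from the training text. The sketch below is a minimal illustration, assuming whitespace-tokenized word-level input; whether KenLM's own internal accounting (e.g. its treatment of sentence boundaries) matches this simple count exactly is an assumption, so the code follows the definition given in the text rather than any KenLM internals.

```python
# Minimal sketch: parameters of a k-gram model counted as the number of unique
# n-grams with 1 <= n <= k, plus the token-to-parameter ratio used above as a
# rough over-/under-parameterization proxy. File name and size are illustrative.
from itertools import islice

def kgram_parameters(lines, k):
    ngrams = set()
    n_tokens = 0
    for line in lines:
        toks = line.split()
        n_tokens += len(toks)
        for n in range(1, k + 1):
            for i in range(len(toks) - n + 1):
                ngrams.add(tuple(toks[i:i + n]))
    return len(ngrams), n_tokens

with open("train.ar.word", encoding="utf-8") as f:  # hypothetical file name
    num_params, num_tokens = kgram_parameters(islice(f, 10_000), k=3)

ratio = num_tokens / num_params  # < 1: over-parameterized side; > 1: under-parameterized
print(num_params, num_tokens, round(ratio, 3))
```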
" }, { "heading": "L NUMBER OF MODEL PARAMETERS", "text": "[Tables: number of Transformer model parameters for datasets A, B, and C, for every language pair (including the ZH Pinyin/Wubi, AR cp1256, and RU cp1251 variants), under the CHAR, BYTE, WORD, and BPE representations, at training sizes of 100; 1,000; 10,000; 100,000; and 1,000,000 lines.]" }, { "heading": "M ERRATICITY", "text": "Length has been an issue since the dawn of the encoder-decoder approach for NMT (Cho et al., 2014). Most work on length bias, except for that by e.g. Sountsov & Sarawagi (2016), seems to have focused on the evaluation of generated translation output and monitored performance degradation with respect to sequence length, often arguing that beam size plays a role (Koehn & Knowles, 2017; Murray & Chiang, 2018). (Related work in Stahlberg & Byrne (2019) provides a good summary on this issue.) While there could also be confounds in search, our experiments show that a kind of length bias can surface already with CLMing, without generation taking place. To our knowledge, length bias has not been expressed as a sample-wise non-monotonicity across a large data size range as ours. While the connection between erraticity in CLMs and length bias in NMT models remains to be verified on a case-by-case basis, the knowledge of length also contributing to robustness (not just consistently poor/poorer performance) could support further experimentation/replication of any study. Failed attempts to reproduce results may be explainable by erraticity.\nOne may argue that erraticity may not be relevant when each model is more optimally trained (as opposed to being treated with our one-setting-for-all regime). But we do want to stress that this very stark contrast between erratic and non-erratic behavior is possible, prompting a question on fairness: is there a one-for-all setting under which the languages with non-erratic behavior shown in our study would demonstrate erraticity and vice versa?\nTo the best of our knowledge, the meta phenomenon of erraticity, as a sample-wise non-monotonicity measured intrinsically with cross-entropy and contributing to large variance across runs, is a novel and original discovery and contribution to research in robustness. We hope our work would inspire further evaluation on other models/architectures, reflection and theories on our assumption of unbounded computation (e.g. Xu et al.
(2020)), as well as new understanding and solutions that take data statistics and realistic computational aspects into account. We defer a more comprehensive analysis of erraticity with further experiments to future work.\nM.1 ERRATICITY AS LARGE VARIANCE: EVIDENCE FROM DIFFERENT RUNS OF THE SAME DATA\nTo confirm that erraticity is not due to data-specific reasons, e.g. certain data segments being “easier” to model than others, we show figures from 2 runs (Figs. 18a and 18b) on the same dataset that differ only in seed yet exhibit wildly differing performance. Note that changes in the y-direction can vary greatly, indicating large variance across runs.\nBy establishing that high variance holds across sample sizes, we showcased how it would be possible to test on just 2 or 3 data points of smaller sizes to get a gauge of the robustness at higher orders. It serves as a signal of when the system is being “stress-tested” and hyperparameters need re-tuning. Spot-testing on a couple of smaller data sizes can indeed save much time and energy. Take our run B0 byte models as an example: the training of the 10^2-line model for EN-RU took 15 minutes, the 10^3-line model 40 minutes, the 10^4-line model 1 hour 50 minutes, and the 10^5-line model 3 hours 36 minutes. One can imagine how these would be just a fraction of the training time for bigger models. (Likewise for our ratio of target training token count to number of parameters: knowing when a representation might be prone to DD within a data size range could help prevent practitioners from prematurely declaring experimental results as negative, or from unnecessarily rerunning an experiment because bigger data did not lead to better results.)\nM.2 ADDITIONAL EXPERIMENT WITH LENGTH FILTERING TO 300 BYTES\nFigures 19a and 19b show the results of an additional experiment with a subset of the data in byte (UTF-8) representation, length-filtered to 300 bytes, including the dev data. Erraticity remains for AR and RU. Scores are lower, though they cannot be compared with the experiments in the main paper due to the difference in dev data size (3,077 lines in the main paper vs. 1,804 lines here). The total number of training lines is 5,533,672 for each language, from which we took the initial 10^2-10^6 lines. As in our main experiments, we filtered out only whole lines, i.e. not by discarding the tails of longer lines. 300 bytes is not a long sequence length, but without data transformation or hyperparameter tuning, things can look unfair. The EN translation of the longest RU line in this dataset is: “47. It is noted that there is a lack of information provided by the Government of Trinidad and Tobago with regard to the legal status of the Convention in the domestic legislation.”" }, { "heading": "N EXPERIMENTS WITH ONE-LAYER TRANSFORMER", "text": "We performed 1 run with dataset A in 4 sizes (10^2-10^5 lines, seed=13) with the primary representations of characters, bytes, and words, on 1-layer Transformers (num-layers 1:1; all other hyperparameters remain the same as for our main experiments). We compared this against run A0 in 4 sizes with the same seed. (Based on how our null hypothesis is set up, the higher the number of runs, the more likely it is for there to be disparity. What is important is that we evaluate based on an equal number of runs and on the same data for all candidates.) Results are shown in Table 5, with no statistically significant disparity observed across the board for the models trained with 1 layer.
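Relating back to the length filtering in M.2 above, a minimal sketch of whole-line filtering to 300 UTF-8 bytes; the two-file layout and the rule that a pair is dropped when either side exceeds the limit are assumptions made here for illustration:

    def filter_parallel_by_bytes(src_in, tgt_in, src_out, tgt_out, max_bytes=300):
        # Keep or drop whole line pairs only; never truncate the tail of a longer line.
        with open(src_in, encoding="utf-8") as fs, open(tgt_in, encoding="utf-8") as ft, \
             open(src_out, "w", encoding="utf-8") as gs, open(tgt_out, "w", encoding="utf-8") as gt:
            for s, t in zip(fs, ft):
                if (len(s.rstrip("\n").encode("utf-8")) <= max_bytes
                        and len(t.rstrip("\n").encode("utf-8")) <= max_bytes):
                    gs.write(s)
                    gt.write(t)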
Many are under the impression that big data is the cause of the neutralization of language instances in DL/NNs. But, as this set of experiments shows, it is possible for there to be no statistically significant differences between them, with as little as our smallest data size of 100 lines." }, { "heading": "O PAQS (PREVIOUSLY ASKED QUESTIONS)", "text": "O.1 ONE SETTING FOR ALL\nQ: Normally, one trains a model with the objective of optimizing for the training and evaluation data, with hyperparameter tuning. The experiments here used one setting for all. Some model configurations might train better and converge close to their optima, while other configurations might not reach their full potential. Can this not create a distortion in the results?\nA: For conventional engineering practice, we agree that hyperparameter tuning would be a sine qua non. However, the evaluation objective is the relational distance between languages, hence we need to see it in a different light. Here is a loose analogy:\n***\nAssume 3 objects in 3 different locations in space.\nRelative evaluation from one setting allows one to capture the distance between these objects. It does not matter whether these three objects are in their “best” states.\nFor example, if one were to use a camera to capture these 3 objects and one does not adjust the setting (using just one random aperture, shutter speed, and focus), i.e. no tuning to capture any of these 3 specifically, nor does one try to model these 3 to their individual bests separately, what would result could be a picture that captures one of these 3 objects more favorably than the others, or it could be that all of these would be blurred. But either way, there is a degree of blurriness to be measured, giving us an idea of the relative distance between the objects. Such relative measurement is the evaluation strategy that our paper adopts.\nNow, to add to the camera analogy, say one of the objects is running water, which was extra blurry [erraticity]: we suggest freezing the water, so that even from the one arbitrary angle, it could be captured better. And it worked.\nAlso, while one might generally like to have a “pretty” photo, one that is taken with, e.g., sub-optimal lighting, say overexposure, can have a telling effect, as it can bring out details in something dark, like a black box.\n***\nAlternatively, one can tune hyperparameters for each model individually, such that each model would be a more optimized one, and then compare these models. In that case, one would be interpreting the differences between languages in terms of hyperparameters, and the paper would be one that is algorithm-centric. That is of course also a possibility. Our approach, however, is a data-centric one. We would, first of all, like to understand the nature of language data, i.e. what it is about language, if there is anything at all, that makes it a different data type than other data, and what kind of structural constraints, if any, we need to take into consideration. Then, with findings from this data perspective, we try to relate back to the algorithm and make connections so as to create a more holistic picture.\nO.2 TRANSLATIONESE / WORD ORDER\nQ: Multitexts are parallel texts or translations with the same meaning. There is little to no variation in word order, hence they are just “Translationese” (Gellerstam, 1986). That is why they turn out to be the same, with no performance disparity.\nA: Our findings do show that when the semantics is properly controlled, such as in multitexts, the factors influencing performance are statistical properties related to sequence length and vocabulary, e.g.
|V| or TTR, and the languages tested can be different. Semantic equivalence is also not a reason why we should expect neutralization of source language instances, as that would mean we should expect equal results across target languages.\nWe agree that faithfulness is often a priority in producing good translations. Whether the translations are produced by humans or machines, only a single best translation can surface as the translation of choice. There may be many other competing hypotheses, but regardless of whether the choice is made through an automatic ranking algorithm by a machine or through a human expert, the purpose of translation is the same. However, styles and preferences in translations can vary. While faithfulness is generally preferred in the translation of legal texts, for literary texts some readers may appreciate more freedom, with skillful rearrangement of and play on words (or rather, character or sub-character sequences) or sounds as a criterion. We agree that it could be very interesting and necessary to model these variations, and we understand that languages can surface in many multimodal forms beyond the confines of texts as well. But with a data-driven perspective, to model this broader variation in language, we need corresponding datasets — we suggest contrast sets where the difference in, e.g., sequential order is explicit. And for evaluation, we would require an even more systematic meta evaluation, one that spans different datasets.\nBut the argument that language or data could be different beyond how it appears in one dataset is irrelevant in the evaluation of experiments involving said dataset." }, { "heading": "P UNDERSTANDING THE PHENOMENA WITH ALTERNATE REPRESENTATIONS (EXTENDED VERSION)", "text": "[Appendix P is an extended version of § 4.]\nTo understand why some languages show different results than others, we carried out a secondary set of control experiments with representations targeting the problematic statistical properties of the corresponding target languages.\nCharacter level: On the character level, it is well known that ZH differs from the other languages in its high |V|; in this study it has an averaged mean±std of 2550±1449 [12] across all 5 data sizes from all 3 datasets, compared to 170±87 for the other 5 languages combined, be they written in the Latin or Cyrillic alphabet or the Abjad script. But what is often not known is that the character sequence length of logographic languages such as ZH is typically short (think of and compare the sequence length of the Ancient Egyptian hieroglyphs or the Demotic script with that of the Greek script on the Rosetta Stone). Here, in our case, the averaged mean sequence length in characters for ZH is 35±19, compared to 129±71 for the other 5 languages. Heuristics to mitigate high |V| often involve decomposition, which automatically resolves the problem of short sequence length. We tried 2 methods to lower the character |V| with representations in ASCII characters — Pinyin and Wubi. The former is a romanization of ZH characters based on their pronunciations, and the latter is an input algorithm that decomposes character-internal information into stroke shape and ordering and matches these to 5 classes of radicals (Lunde, 2008).
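For reference, a minimal sketch of how such character-level profiles (|V|, TTR, mean±std sequence length) can be computed for a single training file; the averaging across data sizes and datasets reported in the text is omitted here:

    import statistics

    def character_profile(path):
        # Character-level statistics of the kind discussed above: vocabulary size |V|,
        # type-token ratio (TTR), and mean/std of the per-line length in characters.
        lengths, vocab, token_count = [], set(), 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                chars = list(line.rstrip("\n"))
                lengths.append(len(chars))
                vocab.update(chars)
                token_count += len(chars)
        return {
            "V": len(vocab),
            "TTR": len(vocab) / token_count,
            "mean_len": statistics.mean(lengths),
            "std_len": statistics.stdev(lengths),
        }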
We replaced the ZH data with these two formats (Pinyin and Wubi) only on the target side and reran the experiments involving ZH as a target language (ZHtrg) on the character level.\nResults in Figure 2 and Table 1 show that the elimination of disparity on the character level is possible if ZH is represented through Pinyin (transliteration), as in Subfigure 2c. But Wubi exhibits erraticity (Subfigure 2a). Wubi in our data has a maximum sequence length of 688 characters. As we shall also show in our byte-level analysis below, there are reasons to attribute erraticity to length.\nDecomposition into strokes may seem like a natural remedy analogous to decomposing an EN word into character sequences, but one needs to be mindful of not exceeding an optimal length given finite computation. Considering that the ZH in the UN data is represented in simplified characters, decomposing traditional characters would surely complicate the problem. As there are also sub-character semantic and phonetic units (Zhang & Komachi, 2018) that can be exploited for information and aligned with character sequences of other alphabets, qualitative advances in this area can indeed be a new state of the art.\nByte level: On the byte level, we observe irregularity for AR and RU. We find the minimum sequence length of the target language to be one of the metrics correlating most positively with the total number of bits (ρ = 0.60) [13]. Our data is based on 300 characters as the maximum length per line. While we wanted to retain at least 75% of the UN data after length filtering, this length still renders a maximum sequence length that exceeds 100 words (the default maximum length for the word alignment model, GIZA++ (Och & Ney, 2003), in the traditional SMT pipeline). Translated into bytes with UTF-8 encoding, data with a 300-character maximum gives us, e.g. for the 10^6-line datasets, an averaged mean±std of 185±106 in length for AR and 246±142 for RU, considerably larger than that for ZH (94±53) and for EN/ES/FR (≈145.41±77). With UTF-8 encoding, each character in AR, RU, and ZH takes 2 or more bytes. ZH typically has a shorter line length in characters, which compensates for the total byte sequence length, even when most ZH characters are 3 bytes each. However, AR and RU generally have long line lengths in characters, so when converted to bytes, the sequence length remains long even when most of the characters might be just 2 bytes each. Results from our pairwise comparisons indicate 8 (non-directional) language pairs to be significantly different (see Table 1 under “BYTE”): ES-RU, EN-RU, FR-RU, RU-ZH, AR-RU, AR-EN, AR-ZH, and AR-FR — all involving AR or RU. (Appendix I also lists the language pairs with significant differences for the other representations.)\n[12] Figures are rounded to whole numbers. Complete tables of data statistics are provided in Appendix D. [13] Top-3 correlates for each representation can be found in Appendix F.\nLeveraging language-specific code pages can be a useful practical trick, a reminder that there are alternatives to UTF-8 for analyses and back-end processing if the data is clean and homogeneous and if success of larger-scale prediction is not a concern.
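To make the character-versus-byte-versus-code-page point concrete, a small self-contained example (the sample words are chosen by us purely for illustration):

    samples = {
        "RU": ("мир", "cp1251"),   # Cyrillic letters: 2 bytes each in UTF-8, 1 byte each in cp1251
        "AR": ("سلام", "cp1256"),  # Arabic letters: 2 bytes each in UTF-8, 1 byte each in cp1256
        "ZH": ("中文", None),       # most ZH characters are 3 bytes each in UTF-8
    }
    for lang, (word, codepage) in samples.items():
        n_chars = len(word)
        n_utf8 = len(word.encode("utf-8"))
        n_codepage = len(word.encode(codepage)) if codepage else None
        print(lang, n_chars, n_utf8, n_codepage)
    # RU: 3 characters, 6 UTF-8 bytes, 3 cp1251 bytes
    # AR: 4 characters, 8 UTF-8 bytes, 4 cp1256 bytes
    # ZH: 2 characters, 6 UTF-8 bytes, None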
But one more sustainable alternative is to design a more adaptive and flexible character encoding scheme in general, taking into account the statistical profiles, such as length (with respect to characters and bytes) and sub-character (atomic/elementary/compound) information, of all (or as many as possible) of the world’s languages.\nWord level: The main difference between the word models and the character/byte models is the absence of length as a top contributing factor correlating with performance. Instead, what matters more are metrics concerning the word vocabulary, with the top correlate being the OOV token rate in the target language (ρ = 0.66). This is understandable, as word segmentation neutralizes sequence lengths — the longer lengths in phonetic alphabetic scripts are shortened through multiple-character groupings, while the shorter lengths in logographic scripts (cf. the difference in length between the 3 scripts on the Rosetta Stone; logographic scripts are typically shorter than phonetic ones) are lengthened by the insertion of whitespaces. To remedy the OOV problem, we use BPE, which learns a fixed vocabulary of variable-length character sequences (on the word level, as it presupposes word segmentation) from the training data. It is more fine-grained than word segmentation and is known for its capability to model subword units for morphologically complex languages (e.g. AR and RU). We use the same vocabulary size of 30,000 as specified in Junczys-Dowmunt et al. (2016). This reduced our averaged OOV token rate by 89-100% across the 5 sizes. The number of language pairs with significant differences (p ≤ 0.001) was reduced to 7, from 8 for the word models, showing how finer-grained modeling has a positive effect on closing the disparity gap.\nVersion 1.1 (graphs to be updated, score tables added)" } ]
2020
null
SP:0cab715d71a765b97066673f3a2d0e00d22ffa3c
[ "The authors propose a neural architecture search (NAS) algorithm inspired by brain physiology. In particular, they propose a NAS algorithm based on neural dendritic branching, and apply it to three different segmentation tasks (namely cell nuclei, electron microscopy, and chest X-ray lung segmentation). The authors share their codes with the scientific community, which is highly appreciated. " ]
Researchers manually compose most neural networks through painstaking experimentation. This process is taxing and explores only a limited subset of possible architectures. Researchers design architectures to address objectives ranging from low space complexity to high accuracy through hours of experimentation. Neural architecture search (NAS) is a thriving field for automatically discovering architectures achieving these same objectives. Addressing these ever-increasing challenges in computing, we take inspiration from the brain because it has the most efficient neuronal wiring of any complex structure; its physiology inspires us to propose Bractivate, a NAS algorithm inspired by neural dendritic branching: an evolutionary algorithm that adds new skip connection combinations to the most active blocks in the network, propagating salient information through the network. We apply our methods to lung x-ray, cell nuclei microscopy, and electron microscopy segmentation tasks to highlight Bractivate’s robustness. Moreover, our ablation studies emphasize dendritic branching’s necessity: ablating these connections leads to significantly lower model performance. We finally compare our discovered architecture with other state-of-the-art UNet models, highlighting how efficient skip connections allow Bractivate to achieve comparable results with substantially lower space and time complexity, proving how Bractivate balances efficiency with performance. We invite you to work with our code here: https://tinyurl.com/bractivate.
[]
[ { "authors": [ "Md Zahangir Alom", "Mahmudul Hasan", "Chris Yakopcic", "Tarek M Taha", "Vijayan K Asari" ], "title": "Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation", "venue": "arXiv preprint arXiv:1802.06955,", "year": 2018 }, { "authors": [ "Peter J Angeline", "Gregory M Saunders", "Jordan B Pollack" ], "title": "An evolutionary algorithm that constructs recurrent neural networks", "venue": "IEEE transactions on Neural Networks,", "year": 1994 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Juan C Caicedo", "Allen Goodman", "Kyle W Karhohs", "Beth A Cimini", "Jeanelle Ackerman", "Marzieh Haghighi", "CherKeng Heng", "Tim Becker", "Minh Doan", "Claire McQuin" ], "title": "Nucleus segmentation across imaging experiments: the 2018 data science bowl", "venue": "Nature methods,", "year": 2019 }, { "authors": [ "X. Chen", "L. Xie", "J. Wu", "Q. Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Hugo de Garis" ], "title": "Genetic programming: Modular evolution for darwin machines", "venue": "In Proceedings of the 1990 International Joint Conference on Neural Networks,", "year": 1990 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L. Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Santiago Estrada", "Ran Lu", "Sailesh Conjeti", "Ximena Orozco-Ruiz", "Joana Panos-Willuhn", "Monique M.B. Breteler", "Martin Reuter" ], "title": "Fatsegnet : A fully automated deep learning pipeline for adipose tissue segmentation on abdominal dixon MRI", "venue": "URL http://arxiv.org/abs/ 1904.02082", "year": 1904 }, { "authors": [ "David B Fogel", "Lawrence J Fogel", "VW Porto" ], "title": "Evolving neural networks", "venue": "Biological cybernetics,", "year": 1990 }, { "authors": [ "M.A. Ghamdi", "M. Abdel-Mottaleb" ], "title": "Collado-Mesa. 
Du-net: Convolutional network for the detection of arterial calcifications in mammograms", "venue": "IEEE Transactions on Medical Imaging,", "year": 2020 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "William T Greenough", "Fred R Volkmar" ], "title": "Pattern of dendritic branching in occipital cortex of rats reared in complex environments", "venue": "Experimental neurology,", "year": 1973 }, { "authors": [ "William T Greenough", "John R Larson", "Ginger S Withers" ], "title": "Effects of unilateral and bilateral training in a reaching task on dendritic branching of neurons in the rat motor-sensory forelimb cortex", "venue": "Behavioral and neural biology,", "year": 1985 }, { "authors": [ "Gillian F Hamilton", "Lee T Whitcher", "Anna Y Klintsova" ], "title": "Postnatal binge-like alcohol exposure decreases dendritic complexity while increasing the density of mature spines in mpfc layer", "venue": "ii/iii pyramidal neurons. Synapse,", "year": 2010 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Lee Jun Hy" ], "title": "Leejunhyun/image segmentation: Pytorch implementation of u-net, r2u-net, attention u-net, and attention r2u-net. https://github.com/LeeJunHyun/Image_Segmentation, 2018", "venue": null, "year": 2020 }, { "authors": [ "Stefan Jaeger", "Sema Candemir", "Sameer Antani", "Yı̀-Xiáng J Wáng", "Pu-Xuan Lu", "George Thoma" ], "title": "Two public chest x-ray datasets for computer-aided screening of pulmonary diseases", "venue": "Quantitative imaging in medicine and surgery,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 and cifar-100 datasets", "venue": "URl: https://www. cs. toronto. edu/kriz/cifar. html,", "year": 2009 }, { "authors": [ "Yann LeCun", "Yoshua Bengio" ], "title": "Convolutional networks for images, speech, and time series", "venue": "The handbook of brain theory and neural networks,", "year": 1995 }, { "authors": [ "C. Liu", "L. Chen", "F. Schroff", "H. Adam", "W. Hua", "A.L. Yuille", "L. Fei-Fei" ], "title": "Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: differentiable architecture", "venue": "search. 
CoRR,", "year": 2018 }, { "authors": [ "Aurélien Lucchi", "Kevin Smith", "Radhakrishna Achanta", "Graham Knott", "Pascal Fua" ], "title": "Supervoxel-based segmentation of mitochondria in em image stacks with learned shape features", "venue": "IEEE transactions on medical imaging,", "year": 2011 }, { "authors": [ "Geoffrey F Miller", "Peter M Todd", "Shailesh U Hegde" ], "title": "Designing neural networks using genetic algorithms", "venue": "In ICGA,", "year": 1989 }, { "authors": [ "Ozan Oktay", "Jo Schlemper", "Loic Le Folgoc", "Matthew Lee", "Mattias Heinrich", "Kazunari Misawa", "Kensaku Mori", "Steven McDonagh", "Nils Y Hammerla", "Bernhard Kainz" ], "title": "Attention u-net: Learning where to look for the pancreas", "venue": "arXiv preprint arXiv:1804.03999,", "year": 2018 }, { "authors": [ "Ilija Radosavovic", "Raj Prateek Kosaraju", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Designing network design spaces, 2020", "venue": null, "year": 2020 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture", "venue": null, "year": 2018 }, { "authors": [ "Philippe Remy" ], "title": "philipperemy/keract: Activation maps (layers outputs) and gradients in keras", "venue": "https:// github.com/philipperemy/keract,", "year": 2020 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In Lecture Notes in Computer Science,", "year": 2015 }, { "authors": [ "Frank Rosenblatt" ], "title": "The perceptron: a probabilistic model for information storage and organization in the brain", "venue": "Psychological review,", "year": 1958 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": null, "year": 2014 }, { "authors": [ "Sagar Vaze", "Weidi Xie", "Ana IL Namburete" ], "title": "Low-memory cnns enabling real-time ultrasound segmentation towards mobile deployment", "venue": "IEEE Journal of Biomedical and Health Informatics,", "year": 2020 }, { "authors": [ "Yu Weng", "Tianbao Zhou", "Yujie Li", "Xiaoyu Qiu" ], "title": "Nas-unet: Neural architecture search for medical image segmentation", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "SNAS: stochastic neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xiaowei Xu", "Qing Lu", "Tianchen Wang", "Jinglan Liu", "Cheng Zhuo", "Xiaobo Sharon Hu", "Yiyu Shi" ], "title": "Edge segmentation: Empowering mobile telemedicine with compressed cellular neural networks", "venue": "IEEE/ACM International Conference on Computer-Aided Design (ICCAD),", "year": 2017 }, { "authors": [ "Xin Yao" ], "title": "Evolutionary artificial neural networks", "venue": "International journal of neural systems,", "year": 1993 }, { "authors": [ "Xin Yao" ], "title": "Evolving artificial neural networks", "venue": "Proceedings of the IEEE,", "year": 1999 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "NAS-bench101: Towards reproducible neural architecture search", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Z. Zhou", "M.M.R. Siddiquee", "N. Tajbakhsh", "J. 
Liang" ], "title": "Unet++: Redesigning skip connections to exploit multiscale features in image segmentation", "venue": "IEEE Transactions on Medical Imaging,", "year": 2020 }, { "authors": [ "Zongwei Zhou", "Md Mahfuzur Rahman Siddiquee", "Nima Tajbakhsh", "Jianming Liang" ], "title": "Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nResearchers manually composing neural networks must juggle multiple goals for their architectures. Architectures must make good decisions; they must be fast, and they should work even with limited computational resources. These goals are challenging to achieve manually, and researchers often spend months attempting to discover the perfect architecture. To overcome these challenges, we turn to the human brain’s efficient neural wiring for automated architecture discovery. Neuroscience already underlies core\nneural network concepts: The perceptron (Rosenblatt, 1958) is directly analogous to a human neuron. One of the brain’s fundamental learning mechanisms is dendritic branching (Greenough & Volkmar, 1973) whereby active neurons send out signals for other neurons to form connections, strengthening signals through that neural pathway. This neuroscience concept inspires us to devise Bractivate, a Neural Architecture Search (NAS) algorithm for learning new efficient UNet architectures networks, capable of being trained twice as fast as the traditional UNet, and often one\nto two orders of magnitude lighter in terms of trainable parameters. We apply Bractivate on three medical imaging segmentation problems: cell nuclei, electron microscopy, and chest X-ray lung segmentation.\nMedical image segmentation is a growing field in Deep Learning Computer Assisted Detection (CAD): it is a powerful component in clinical decision support tools and has applications in retinal fundus image, lung scan, and mammography analysis. Most papers now approach medical image segmentation with the UNet (Ronneberger et al., 2015); the model architecture is straightforward: it has symmetric, hierarchical convolutional blocks, which are components of an initial contracting path and a final expanding path, with an apex bottleneck layer. Between parallel contracting and expanding blocks, the traditional UNet contains skip connections that pass information through concatenation (Ronneberger et al., 2015). Traditional UNet skip connections involve feature map aggregation with same-scale convolutional blocks, but recent advances have yielded more complex connections ranging from the UNet++ (Zhou et al., 2018) to the NasUNet (Weng et al., 2019). While the UNet is a powerful tool, it does have many limitations:\n1. The depth necessary for many segmentation tasks is initially unknown, and traditional neural architecture search (NAS) struggles to identify the optimal UNet depth.\n2. Researchers often manually choose skip connection locations, leading to potentially missed optimal connections.\n3. Scientists need a NAS algorithm addressing many implementation objectives, including computational time, number of model parameters, and robust segmentation performance.\nOn a broader level, discovering efficient UNet architectures is crucial because it can generate simpler models for applications on mobile devices, which need low latency for online learning. In the Telemedicine age, many medical applications rely on mobile Deep Learning to segment medical images and process raw patient data (Xu et al., 2017; Vaze et al., 2020). We address the Medical and Engineering fields’ need for efficiency with Bractivate, a NAS algorithm to discover lightweight UNet architectures for medical image segmentation tasks. We present the following three primary contributions:\n1. 
An evolutionary algorithm that non-randomly samples from a distribution of various UNet Model depths and skip connection configurations, with both tensor concatenation and addition operators.\n2. ”Dendritic Branching”-inspired mutations that, just as in the brain, cause salient UNet blocks to branch to other blocks in the network through dendritic skip connections, creating efficient networks that preserve information signals through the network.\n3. Bractivate generates high-performing models with lower space complexity than the current state-of-the-art.\nThe remainder of the paper is structured as follows: In Section 2, we discuss prior works, and what gaps in the literature inspire us to propose Bractivate. Then, in Section 3, we discuss the search algorithm and the dendritic branching mutation. Later, in Section 4, we implement our algorithm with various experiments ranging from changing the search space depth to an ablation study. We report our quantitative and qualitative results, along with baseline comparisons in Section 5 before concluding in Section 6." }, { "heading": "2 RELATED WORKS", "text": "Deep learning algorithms are often restricted to manual model design (Simonyan & Zisserman, 2014; He et al., 2016; Oktay et al., 2018; Ronneberger et al., 2015). To automate model schemes, NAS is the process of selecting candidate architectures through various search strategies to achieve optimal performance (Elsken et al., 2019). Advances in NAS have branched into different areas, including evolutionary algorithms (Miller et al., 1989; de Garis, 1990; Yao, 1993; Fogel et al., 1990; Angeline et al., 1994; Real et al., 2018; Yao, 1999) and automatic pattern recognition (Cai et al., 2018; Radosavovic et al., 2020). While both approaches are merited, the tasks address image classification problems, and although some focus on skip connections, they lack deeper investigation\nof their optimal configurations. Recent advances in the UNet have led to alternative skip connection implementations, including addition (Ghamdi et al., 2020), max out operations (Estrada et al., 2019; Goodfellow et al., 2013) and multiplication by a gating function (Oktay et al., 2018). Ghamdi et al. (2020) reports these connections’ improved efficacy over traditional concatenation, as they overcome vanishing gradients and preserve salient features.\nAuto-DeepLab, which Liu et al. (2019) present for semantic segmentation, is a graph-based NAS algorithm that addresses changing model depth and connection locations in hierarchical models. Building off this work, Zhou et al. (2020) propose a similar graph-search algorithm, termed UNet++, for improved NAS; the final model incorporates dense skip connections to achieve multi-scale feature aggregation. Although UNet++ successfully addresses the model depth problem, it ignores choosing the skip connection operator and relies on pretraining and pruning to generate skip connection configurations.\nThe Differential Architecture Search (DARTs) algorithm by Liu et al. (2018) continuously relaxes the architecture representation to enable gradient-based optimization. Advancing this algorithm, Chen et al. 
(2019) propose the Progressive Differentiable Architecture Search Algorithm (PDARTs) to allow the searched model’s depth to grow during the search; when applied to ImageNet (Deng et al., 2009), CIFAR-10 (Krizhevsky et al., 2009), or CIFAR-100 (Krizhevsky et al., 2009), the total training time is approximately seven hours.
Although the DARTS and PDARTs algorithms are specific to image classification and sequential model architecture, they lack applications for segmentation models. Weng et al. (2019) suggest a NASUNet method with a modified DARTs search for medical imaging segmentation; their approach addresses searching for model parameters in the convolutional blocks to reduce the space complexity found in attention-based (Oktay et al., 2018; Hy, 2018) and recurrent (Alom et al., 2018; Hy, 2018) UNets, yet NASUNet still preserves same-scale concatenation skip connections, overlooking alternate skip connection possibilities across network blocks.
Many existing NAS algorithms use modified objective functions for evaluating the searched model performance, e.g. NAS-Bench-101 (Ying et al., 2019) uses the cross-entropy loss, Stochastic Neural Architecture Search (SNAS) (Xie et al., 2019) devises a cost function deemed Memory Access Cost (MAC) that incorporates the floating-point operations (FLOPs) and number of parameters, and PDARTs (Chen et al., 2019) employs an auxiliary loss (Szegedy et al., 2014). To target gaps in the literature related to skip connection search for efficient models, we propose Bractivate, a NAS algorithm inspired by the brain’s dendritic branching to facilitate optimal architecture discovery.
3 THE BRACTIVATE NAS ALGORITHM
3.1 DENDRITIC ARBORIZATION
Table 1 translates neuroscience into computational terms we use throughout the paper. In the neuroscience field, dendritic branching occurs when stimulating environments cause neurons to form new connections (Greenough & Volkmar, 1973; Greenough et al., 1985). These neural connections are associated with learning, and even learning-impaired children with fetal alcohol syndrome display lower dendritic branching levels (Hamilton et al., 2010) compared to their healthy peers. This branching phenomenon parallels Deep Neural Networks: in the brain, dendrites form new connections to the hyperactive soma; the perceptron’s activation function is to the biological soma as the incoming connections are to dendrites. Perceptrons can be stacked together to form multi-layer perceptrons (Rumelhart et al., 1986), with parallel architecture similar to the brain, and this structure underlies convolutional neural networks (LeCun et al., 1995).
For the UNet, if we consider each layer in the network’s blocks to be a neural soma, then we can think about a block’s "activity" as the mean absolute value of its layers’ activations, as shown by Equation 1.
$A_b = \frac{1}{L}\sum_{l=0}^{L} |A_l| \quad (1)$
where $A_b$ represents the block activation, $b \in B$, $A_l$ is the activation of layer $l$ in the block, and $L$ is the total number of layers in the block. Knowing the block’s location, $b$, with $\max(A_b)$, surrounding blocks then form skip connections around this active node, a process analogous to dendritic branching. We apply this method to target conv and deconv layers, excluding batch normalization layers, as they contain static weights and high values that overwhelm the mutation’s layer selection. A minimal sketch of this block-activation scoring is given below.
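To make Equation 1 and the choice of branching target concrete, here is a minimal sketch (not taken from the Bractivate code; the function names and the activation-extraction step are assumed) that scores each block by the mean absolute value of its layers' activations and returns the most active block, which the dendritic branching mutation then targets:

```python
import numpy as np

def block_activation(layer_activations):
    """Equation 1: mean absolute activation of a block, averaged over its
    conv/deconv layers (batch-norm layers excluded, as in Section 3.1)."""
    return float(np.mean([np.mean(np.abs(a)) for a in layer_activations]))

def most_active_block(activations_by_block):
    """activations_by_block: dict mapping block name -> list of activation
    arrays for that block's layers (e.g. extracted with a tool like Keract).
    Returns the name of the block with the largest A_b, plus all scores."""
    scores = {name: block_activation(acts)
              for name, acts in activations_by_block.items()}
    return max(scores, key=scores.get), scores
```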
When new connections are formed across blocks with various tensor dimensions, we overcome the spatial dimensional mismatch by resizing the incoming connection tensors to match the receiving tensor via bilinear interpolation." }, { "heading": "3.2 NAS WITH DENDRITIC BRANCHING MUTATIONS", "text": "[Figure 2: the Bractivate search pipeline. A randomly initialized domain of genomes is sampled, each candidate is scored by the efficiency evaluator, and the chosen (best) model's genome receives a dendritic branching mutation.]
Each genotype encodes the number of feature maps for each block, and the skip connection operator type (concatenation or addition). A detailed discussion on the genotype initialization and its micro-architecture is found in Appendix A.1. When
We note that these loss functions are used only during the search step. During full model training, we use the BCL loss on the ”best model” and use that for our later experiments in Sections 4.7 and 4.8." }, { "heading": "4.5 DISCOVERING ARCHITECTURE", "text": "For all three datasets, we use the same Bractivate search algorithm based on dendritic branching. We initialize our search domain as a queue with 20 randomly generate model genotypes and initialized generated models from the genotypes; we then train the models for 50 epochs on the datasets with Early Stopping. The Early Stopping has patience of 10 and min ∆ of 0.05. In a first-in-first-out (FIFO) fashion, we evaluate each genotype in the queue: if the generated model has ELmin, then it becomes the ”Best Model,” and we place this genotype back in the queue. Suppose it has a EL > ELmin. In that case, the search algorithm mutates the ”Best Model” genotype with the dendritic branching method described in Figure 2 before replacing the mutated genotype in the search queue." }, { "heading": "4.6 GOING DEEPER", "text": "We notice that the original Bractivate search algorithm with a minimum depth of two yields mainly shallow networks, usually with two or three expanding and contracting blocks. To explore how model depth affects the search algorithm’s results, we constrain the search space such that depth is ∈ [5, 10] and later ∈ [7, 10], and observe how the Dice coefficient and Dice:time ratio change while using the ELS (Equation 3)." }, { "heading": "4.7 ABLATING BRANCHING CONNECTIONS", "text": "We hypothesize that active block branching increases signal propagation, reducing time complexity, and improving overall model performance. Thus, we must prove these new branches are necessary for the model’s success by ablating them and measuring how their absence affects the model performance. We perform these experiments by training the selected model architecture for 200 epochs on the dataset, and use the Keract (Remy, 2018) library to measure layer activations on the most active layer (the second convolutional layer in the decoder of a D = 5 deep UNet. For each layer, we calculate the layer’s average activation from Equation 1 and then ablate all dendritic (input) connections of the most active block. We record the results quantitatively with the Dice coefficient and visualize them by analyzing changes in the activation heat maps." }, { "heading": "4.8 BASELINE COMPARISON", "text": "We compare Bractivate models to other state-of-the-art methods, including the standard UNet model (Ronneberger et al., 2015), Wide-UNet (Zhou et al., 2018), UNet++ (Zhou et al., 2020), and attention-based models . We obtain all model architectures from GitHub repositories made by these models’ authors or by contributors who create implementations in Keras and use them with Xavier weight initialization on the three datasets for comparison." }, { "heading": "5 RESULTS", "text": "We observe clear patterns pointing to how dendritic branching allows for efficient neural network selection through our experimentation. Figure 4 presents an example of the discovered architecture with the ELS function.\n5.1 DISCOVERED ARCHITECTURES\nBecause the mutations add more connections to salient blocks, the most active node has the highest input connections. 
Bractivate randomly assigns addition or concatenation connections to the most active block, optimizing the best connecting operator combination.\nFigure 4 shows the discovered architecture when the search space is constrained such that D ∈ [5, 10]. Al Ghamdi et al. (Ghamdi et al., 2020) report that the addition operator can perform as well or better than the concatenation operator, so we incorporate both operators as modules into the NAS, with each mutation yielding a new random connecting operator combination to the salient block." }, { "heading": "5.2 DEPTH AND TIME-SPACE COMPLEXITY", "text": "5.3 SKIP CONNECTION ABLATION STUDY\nOur ablation study most strongly confirms our hypothesis that dendritic branching to\nactive blocks significantly improves segmentation performance: Figure 6 examines the saliency maps produced by the model architecture in Figure 4 before and after ablating connections to the most active block. Before ablation, the most active block’s saliency maps show encoded information that significantly influences the decoded feature maps during deconvolution.\nAfter ablation, the saliency maps for the EM and nuclei segmentation tasks lack accurate late-stage saliency maps. When the salient block has configured dendritic branches from neighboring blocks,\nthe output signal is highly accurate. However, when these vital encodings in the Decode 2 block lack input from neighboring blocks, the output signal is degraded. This degradation is especially true for the EM and nuclei segmentation tasks.\nThe EM and nuclei segmentation tasks contain more than two connected components; removing dendrites to salient blocks prevents valuable information from neighboring blocks to travel to the most salient block, degrading the signal through the last four blocks in the network. The model’s Dice score is significantly lower in the ablated architecture than in the intact Bractivate-selected model. The added information from these dendritic skip connections, explicitly targeting a salient block in the model, generates more accurate saliency maps, helping the model learn faster. Before ablation, activations are more salient during the decoding phase than post-ablation, where saliency concentrates in the encoder. This observation may be because removing connections towards an active block forces surrounding layers to compensate by increasing their activations." }, { "heading": "5.4 BASELINE COMPARISON", "text": "Figure 7 highlights how Bractivate achieves comparable performance to larger models when initialized with Xavier initialization (Glorot & Bengio, 2010). Table 2 highlights how Bractivate is significantly smaller than many of the other state-of-the-art models: it exchanges high spatial com-\nplexity for more skip connections, as these branches allow information to propagate through salient blocks in the network. For domain-specific tasks, high parameters reduce the signal: noise ratio in the network; simpler models like Bractivate rely on powerful skip connections, analogous to dendrites, to carry most of the signal. Because these connections consist of simple concatenation or addition operators, they greatly reduce the number of trainable parameters, preventing overfitting; this speaks to Bractivate’s comparable–or better–Dice scores as compared to the baseline models." }, { "heading": "6 CONCLUSION", "text": "Throughout this paper, we highlight how dendritic branching in the brain inspires efficient skip connections in Deep Learning models. 
With our focus on segmentation, we present Bractivate as a method for identifying skip connection configurations to elevate the traditional UNet. During the search, Bractivate mutates the architecture so that the most salient blocks in the network branch out their ”dendrites” to other network blocks. By replacing the oldest model in the search space with the new mutated architecture, we accelerate the search rate.\nThe ablation study strongly supports our hypothesis that dendritic branching is necessary for efficient model discovery; when we ablate dendritic connections to the most salient block, the Dice Score decreases. Before and after ablation, the saliency maps reveal stark contrasts, with the ablated activation maps lacking apparent features for the final segmentation layer in the UNet’s decoder. We finally weigh our methods with other baselines, highlighting how smaller networks can perform segmentation tasks well given limited pretraining data.\nOverall, we present how optimally configured skip connections, inspired by the brain, yield robust signal streaming paths through a lightweight network. Our algorithm is an asset to many mobile medical computing technologies that rely on low latency and high computational efficiency." }, { "heading": "A APPENDIX", "text": "A.1 GENOME MICRO-ARCHITECTURE\nWhen designing our search space, we formulate genotypes that code for model architectures. Following common patters in convolutional networks, and the UNet Ronneberger et al. (2015), we first impose the following constraints on our search space:\n• The network blocks must be symmetrical. This means that the number of blocks both in the network encoder and decoder are identical, with mirror internal layer configurations (types of layers, numbers of filters, and number of layers in the block) • The network must be hierarchical. When designing models for medical image segmenta-\ntion, we rely on hierarchical backbones for both the encoder and decoder, as reflected in Figure 4. • We constrain skip connection directionality. In the network, skip connections only occur\nfrom earlier to later layers in the background.\nFigure 8 shows the standard micro-architecture for the contracting and expanding parts of the network. We also note that while the layer arrangements are constant, the number of filters, n, for each block is initially random. However, each integer value of filter numbers is scaled by a factor of 1.5 for each subsequent block, as Figure 8 highlights.\nA.2 GPU RUN-TIME\nOverall, the search algorithm had GPU run-times, as shown in Table 3. We note that these results are reported after running the search algorithm on a Tesla-v100. The reported values are an average of three independent trials. Oftentimes, the run time was dependent on the dataset size. Because the Cell Nuclei dataset had the highest number of sample images, it took longer to train on as compared to the smaller Lung dataset.\nA.3 ABLATING EFFICIENT LOSS\nWe also examine the effect of ablating the efficiency loss’ parameter and time penalty terms on the overall model selection. Through our investigation, we find that the efficiency loss does help the model select smaller models, that can perform at the level of larger models selected by the BCL loss function. Figure 9 highlights this trend for the Lung Dataset. The results are averaged over three trial runs.\nWe see that removing the penalty terms for high space and time complexity still yields highperforming models. 
However, these models are larger, and computationally costly." } ]
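As a companion to Sections 3.1 and 3.2, the sketch below shows one plausible Keras realization of a single dendritic skip connection: the incoming tensor is resized to the receiving block's spatial size with bilinear interpolation and then merged by concatenation or addition. It assumes static spatial dimensions (as with the 128×128 inputs of Section 4.3), and the 1×1 convolution used to match channel counts in the addition case is an assumption, not something specified in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dendritic_skip(source, target, mode="concat"):
    """Connect `source` (an earlier block's output) into `target` (the
    salient block's feature map). Spatial mismatch is handled by bilinear
    resizing (Section 3.1); merging uses concatenation or addition
    (Section 3.2). Names and the channel-matching conv are illustrative."""
    h, w = target.shape[1], target.shape[2]
    resized = layers.Lambda(
        lambda t: tf.image.resize(t, (h, w), method="bilinear"))(source)
    if mode == "add":
        # Addition needs equal channel counts; reconcile with a 1x1 conv
        # (an assumption made for this sketch).
        resized = layers.Conv2D(target.shape[-1], 1, padding="same")(resized)
        return layers.Add()([target, resized])
    return layers.Concatenate()([target, resized])
```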
2,020
null
SP:232edf223e799126992acd9ee04d88c22ff57110
[ "The authors propose two approaches for pruning: (a) \"Evolution-style\": start with K random masks associated with the weights, update weights on gradient descent corresponding to those active in the “fittest” mask, and overtime throw away all but one masks which are less fit. (b) \"Dissipating-gradients”: Here those weights are removed which are not being updated as much, measured by their sum of gradients over a number of iterations. This is shown for elementary networks on MNIST datasets without any serious experiments or comparisons or even presentation. " ]
Post-training dropout based approaches achieve high sparsity and are well-established means of addressing problems relating to computational cost and overfitting in Neural Network architectures (Srivastava et al., 2014), (Pan et al., 2016), Zhu & Gupta (2017), LeCun et al. (1990). In contrast, pruning at initialization is still far behind Frankle et al. (2020). Initialization pruning is more efficacious when it comes to scaling the computational cost of the network. Furthermore, it handles overfitting just as well as post-training dropout, and it avoids retraining losses. For the above reasons, the paper presents two approaches to prune at initialization. The goal is to achieve higher sparsity while preserving performance. 1) K-starts begins with k random p-sparse matrices at initialization. In the first couple of epochs the network then determines the "fittest" of these p-sparse matrices in an attempt to find the "lottery ticket" Frankle & Carbin (2018) p-sparse network. The approach is adopted from how evolutionary algorithms find the best individual. Depending on the Neural Network architecture, the fitness criteria can be based on the magnitude of the network weights, the magnitude of gradient accumulation over an epoch, or a combination of both. 2) The dissipating gradients approach aims at eliminating weights that remain within a fraction of their initial value during the first couple of epochs. Removing weights in this manner, regardless of their magnitude, best preserves the performance of the network. On the other hand, this approach also takes the most epochs to achieve higher sparsity. 3) The combination of dissipating gradients and kstarts consistently outperforms either method and random dropout. The benefits of using the provided pretraining approaches are: 1) They do not require specific knowledge of the classification task, fixing of a dropout threshold, or regularization parameters. 2) Retraining of the model is neither necessary nor affects the performance of the p-sparse network. We evaluate the efficacy of these methods on Autoencoders and Fully Connected Multilayer Perceptrons. The datasets used are MNIST and Fashion MNIST.
[]
[ { "authors": [ "Simon Alford", "Ryan Robinett", "Lauren Milechin", "Jeremy Kepner" ], "title": "Training behavior of sparse neural network topologies", "venue": "IEEE High Performance Extreme Computing Conference (HPEC),", "year": 2019 }, { "authors": [ "Aydin Buluç", "John R Gilbert" ], "title": "Parallel sparse matrix-matrix multiplication and indexing: Implementation and experiments", "venue": "SIAM Journal on Scientific Computing,", "year": 2012 }, { "authors": [ "Li Deng" ], "title": "The mnist database of handwritten digit images for machine learning research [best of the web", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "Misha Denil", "Babak Shakibi", "Laurent Dinh", "Marc’Aurelio Ranzato", "Nando De Freitas" ], "title": "Predicting parameters in deep learning", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M Roy", "Michael Carbin" ], "title": "Pruning neural networks at initialization: Why are we missing the mark", "venue": "arXiv preprint arXiv:2009.08576,", "year": 2020 }, { "authors": [ "Stuart Geman", "Elie Bienenstock", "René Doursat" ], "title": "Neural networks and the bias/variance dilemma", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "David E Goldberg", "John Henry Holland" ], "title": "Genetic algorithms and machine learning", "venue": null, "year": 1988 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Babak Hassibi", "David G Stork" ], "title": "Second order derivatives for network pruning: Optimal brain surgeon", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P Kingma" ], "title": "Learning sparse neural networks through l 0 regularization", "venue": "arXiv preprint arXiv:1712.01312,", "year": 2017 }, { "authors": [ "R Oftadeh", "MJ Mahjoob", "M Shariatpanahi" ], "title": "A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search", "venue": "Computers & Mathematics with Applications,", "year": 2010 }, { "authors": [ "Wei Pan", "Hao Dong", "Yike Guo" ], "title": "Dropneuron: Simplifying the structure of deep neural networks", "venue": "arXiv preprint arXiv:1606.07326,", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Pradnya A Vikhar" ], "title": "Evolutionary algorithms: A critical review and its future prospects", "venue": null, "year": 2016 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Jan Peters", "Juergen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence),", "year": 2008 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Xin-She Yang" ], "title": "Firefly algorithm, levy flights and global optimization. In Research and development in intelligent systems XXVI", "venue": null, "year": 2010 }, { "authors": [ "Raphael Yuster", "Uri Zwick" ], "title": "Fast sparse matrix multiplication", "venue": "ACM Transactions On Algorithms (TALG),", "year": 2005 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Computational complexity and overfitting in neural networks is a well established problem Frankle & Carbin (2018), Han et al. (2015), LeCun et al. (1990), Denil et al. (2013). We utilize pruning approaches for the following two reasons: 1) To reduce the computational cost of a fully connected neural network. 2) To reduce overfitting in the network.\nGiven a large number of post-training pruning approaches Srivastava et al. (2014), Geman et al. (1992), Pan et al. (2016), the paper attempts to propose two pre-training pruning approaches: kstarts and dissipating gradients. Moreover, it appears to be the case that when isolated from other factors sparse networks outperform fully connected networks. When not isolated they perform at least as well up to a percentage of sparsity depending on the number of parameters in the said network. kstarts and dissipating gradients provide are simple nevertheless effective methods to quickly look for best sparse networks to.\nThe approaches exploit the knowledge that a network has multiple underlying p-sparse networks that perform just as well and in some cases even better when contrasted with their fully connected\ncounterparts Frankle & Carbin (2018). What percentage of sparsity is realized, depends largely on the number of parameters originally present in the network. Such sparse networks are potent in preventing over-fitting and reducing computational cost.\nThe poset-training pruning has several approaches in place such as adding various regularization schemes to prune the network Louizos et al. (2017), Pan et al. (2016) or using second derivative or hessian of the weights for dropout LeCun et al. (1990), Hassibi & Stork (1993). Han et al. (2015), Alford et al. (2019), Zhu & Gupta (2017) use an efficient iterative pruning method to iteratively increase sparsity. Srivastava et al. (2014) dropout random hidden units with p probability instead of weights to avoid overfitting in general. Each of these approaches is effective and achieves good sparsity post-training.\nWe use a simple intuitive models that achieve good results and exploits the fact that a number of sub networks in a Neural Network has the potential to individually learn the input Srivastava et al. (2014). We decide on a sparse network early on based on the dropout method and use only that for training. This provides an edge for faster computation, quicker elimination of excess weights and reduced generalization error. The sparsity achieved is superior to random dropout.\nSection II gives a general introduction to all the methods, section III defines p-sparsity, section IV provides the algorithm for both approaches, section V describes experimental setup and results, section VI discusses various design choices, section VII gives a general discussion of results, section VIII discusses limitations of the approach and section IX provides conclusions and final remarks." }, { "heading": "2 PRUNING METHODS", "text": "" }, { "heading": "2.1 KSTARTS", "text": "" }, { "heading": "2.1.1 KSTARTS AND EVOLUTIONARY ALGORITHMS", "text": "We take the concept of k random starts from Evolutionary Algorithms (Vikhar, 2016) that use a fitness function or heuristic to perform ”natural selection” in optimization and search based problems (Goldberg & Holland, 1988). It is relatively simple to fit genetic algorithms to the problem at hand. 
Other methods that would be equally effective with a little modification are Hunting Search (Oftadeh et al., 2010), Natural Evolution Strategies (Wierstra et al., 2008), the firefly algorithm (Yang, 2010), etc.
The basic components of the algorithm are: (1) Population: a product of the network weights and the sparse matrices. (2) Individual: an instance of the population. (3) Fitness Function: the heuristic chosen for evaluation of the population." }, { "heading": "2.1.2 POPULATION", "text": "We first initialize K sparse matrices; a single instance of these K sparse matrices can be seen in equation ??. In every iteration we multiply the model weights W of the network layer in question with every instance of the K sparse matrices. The resulting set of matrices is our population for that iteration. Each iteration is referred to as a new generation.
$population = W \ast K\text{-}SparseMatrices \quad (1)$" }, { "heading": "2.1.3 INDIVIDUAL", "text": "Each individual, in a population of K instances, is a sparse matrix of size equal to the size of the network weights, W. The number of 0's and 1's in the sparsity matrix is determined by the connectivity factor p, which is further described in section 3. A sparse matrix with p ≈ 0.5 will have ≈ 50% 0's and ≈ 50% 1's." }, { "heading": "2.1.4 EVALUATION/FITNESS FUNCTION", "text": "The fitness of an individual is ranked by computing the sum of each individual in the population as given in Equation 1, such that the fittest individual in a generation is given by Equation 2.
$fittest = \operatorname*{argmax}_{ind \in population} \sum_{j=1}^{i \ast c} ind[j] \quad (2)$
where $i \ast c$ is the size of each individual and $ind$ refers to an individual in the population." }, { "heading": "2.1.5 NEXT GENERATION SELECTION", "text": "Assume each iteration corresponds to the next generation. In each generation the fittest individual is favoured, so:
• The fittest individual is passed on as the weight to the next generation. • Every 5 generations, or as per the decided elimination frequency, the individual with the lowest fitness is discarded from the population." }, { "heading": "2.2 DISSIPATING GRADIENTS", "text": "Magnitude and gradient based pruning approaches are popular in post-training pruning Srivastava et al. (2014), LeCun et al. (1990). It does not make much sense to employ them pre-training because of the randomly generated weights. But in order to reduce error, any network aims to update the weights that influence the results most. Based upon that hypothesis, in each epoch we sum the gradients and eliminate weights that are not getting updated. In Equation 3, N is the total number of iterations in an epoch. In Equation 4, epsilon is 1e-6 for all experiments.
$Accumulated\_dw = \sum_{i=1}^{N} dW \quad (3) \qquad W[Accumulated\_dw < \epsilon] = 0 \quad (4)$
One consideration in this approach is to not do this for too many epochs, which can be as few as 2 if the image is very monochrome and more than 2 if the gradients are dissipating more slowly. Moreover, once specific weights have reached their optimal learning, their gradients will dissipate and we don't want to eliminate them." }, { "heading": "2.3 COMBINATION DROPOUT", "text": "Combination dropout is merely combining Kstarts with dissipating gradients. The weights eliminated use both approaches. We fix p for Kstarts to a certain value of minimum sparsity and further eliminate weights that the dissipating gradients method would eliminate as well. The approach achieves better performance than either method. A minimal code sketch of the kstarts selection step follows below." 
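As referenced at the end of Section 2.3, the following NumPy sketch condenses Sections 2.1.2 through 2.1.5 into one generation of kstarts selection; the function names are illustrative, and the fitness is the sum of the masked weights, as in Algorithm 1:

```python
import numpy as np

def make_sparse_masks(k, shape, p, rng):
    """K random 0/1 masks, each with approximately a p fraction of zeros
    (the connectivity factor p of Section 3)."""
    return [(rng.random(shape) > p).astype(float) for _ in range(k)]

def kstarts_generation(W, masks, generation, elimination_freq=5):
    """One generation of kstarts (Sections 2.1.2-2.1.5 / Algorithm 1):
    population = W * mask for every mask, fitness = sum of the masked
    weights, the fittest masked matrix becomes the new W, and every
    `elimination_freq` generations the least fit mask is discarded."""
    population = [W * m for m in masks]
    fitness = [ind.sum() for ind in population]
    best = int(np.argmax(fitness))
    new_W = population[best]
    if len(masks) > 1 and generation % elimination_freq == 0:
        masks.pop(int(np.argmin(fitness)))
    return new_W, masks
```

In training, this step replaces the plain weight assignment after each gradient update, so the network is gradually biased towards a single p-sparse mask.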
}, { "heading": "3 DEFINING P-SPARSITY", "text": "In a p-sparse network layer approximately p percent of connection between two layers are eliminated. Figure 1 shows a fully connected conventional neural network (figure 1a) and three sparsely connected networks with different values of p (figures 1b, 1c, 1d)." }, { "heading": "3.1 CONNECTIVITY FACTOR", "text": "Connectivity factor, p, determines the percentage of connections to be removed between any two layers of the network. For instance if p = 0.0 than the network is be fully connected as shown in figure 1a, if on the opposite extreme, p=1.0, then there will be no connections between two layers of neurons. If p=0.5, figure 1b, only approximately 50% of the connection in the network remain. If p=0.3, figure 1c, approximately 70% of the connection still exist and if p=0.7, figure 1d, a mere 30% of the connections are active between two network layers. In short p determines percentage of 0’s in an individual sparse matrix shown in equation ??." }, { "heading": "4 ALGORITHM", "text": "The autoencoder, two and three layered neural networks are trained in a standard manner with adam optimizer and batch update." }, { "heading": "4.1 KSTARTS ALGORITHM", "text": "Algorithm 1 gives the kstarts method:\n1. We have K number of sparse matrices as shown in equation ?? generated and we call them KI in algorithm 1.\n2. Every time the weight W needs to be updated instead of the usual gradient update we add one step further and using the fitness function, pass the fittest individual as the new W which biases the network towards that sparse matrix.\n3. Every 5 iterations, the individual with lowest fitness is dropped.\nAlgorithm 1: k Random starts input : Data, params,K,p output: W,b initialize W,b; KI ← K individuals with approximately 1-p percent active connections; for maxiterations do\nRun Neural Network; Update weights; if One individual in KI is left then\nW← individual; else\nW← individual with maximum sum (of weights); for every 5 iterations do\npop individual with minimum sum (of weights) from KI; end\nend end" }, { "heading": "4.2 DISSIPATING GRADIENTS ALGORITHM", "text": "Algorithm 2 is more simple and just eliminates weights with sum of gradients equal to zero in the first 1-4 epochs depending on the desired sparsity.\nAlgorithm 2: Dissipating Gradients input : Data, params output: W,b initialize W,b; for maxepochs do\nfor Maxiterations do Run Neural Network; accumulated dW← accumulated dW+ dW; Update weights;\nend if accumulated dW < 0.0001 then\naccumulated dW←0 ; else\naccumulated dW←1; end W←W*accumulated dW;\nend" }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "The experiments performed on two datasets; MNIST Deng (2012) and Fashion MNIST Xiao et al. (2017). The network architectures are two layered Autoencoder (784-128-64), a three layered NN (784-128-100-10) both with sigmoid activation and adam optimization functions.\nThe Architecture used for learning curves is a single layered NN(784-10)." }, { "heading": "5.1 EFFECT OF INCREASING SPARSITY", "text": "As sparsity increases, overall performance reduces. Figure 2 shows the behaviour of various dropout methods presented in this paper.\nIn case of random dropout, it’s indeed a random shot. Either no useful weight is eliminated or multiple crucial weights are eliminated which decides how well does random dropout perform. Kstarts performs slightly better on average with multiple start choices. 
Depending on how many independent p-sparse networks are present in the network that can learn well, one of them can be identified, given k is large enough and the fitness function is smartly decided by first examining the weights and gradients.\nDissipating gradients works well as long as the network isn’t learning very fast i.e. some weights are being updated in the consequent epochs. It’s also most reliable. Combination works by far the best because it does not only rely on eliminating weights that are not being updated but also uses kstarts. It seems to achieve superior performance as long as p value chosen is a value that kstarts performs well on." }, { "heading": "5.2 VARIATION IN SAMPLE SIZE", "text": "Figures 3a and 3b show relationship of varying sample size to different sparsity values in a single layer NN over 2.5k iterations.\nThe interesting result here is that isolated from all other factors like number of parameters, hidden units and various design choices, kstarts dropout performs better on a single layer network compared to even a fully connected network. The standard deviation is also lower to fully connected network as well as random dropout. kstarts dropout also learns faster than a fully connected network. For instance, if the iterations for the experiments in figure 3a and 3b are increased, the fully connected network will eventually reach accuracy of the p-sparse network." }, { "heading": "6 DESIGN PARAMETER CHOICES", "text": "There are a number of design parameter choices for the algorithm 1 presented here. Some are explained in detail in this section." }, { "heading": "6.1 CHOICE OF FITNESS FUNCTION", "text": "Since the fitness function determines the individual being passed on to the next generation and the individual being eliminated. We had three choices for choosing the fitness of an individual each with it’s own pros and cons.\n• Magnitude: As opted here, we choose population as shown in equation 1 and then select fitness using equation 2. This skews the selection of new weights to the previously selected sparse matrix from KI and therefore, the initial sparse matrix will be propagated forward. This also renders elimination to be pointless it does not matter if 1 or all other matrices in K are eliminated. Furthermore, the sparse matrix is picked awfully early in the experiments i.e. only after first 5 iterations or so and that is not when weights have reached a saturation point. • Gradient: The second choice is to use gradient of the weights to create population as\nfollows: population = δW ∗ SparseMatrices (5)\ndoing so can have fitness totally dependant on the current update and the new weights are heavily skewed towards the performance of the current iterations which again doesn’t seem that appropriate. • Sum of Gradients: The third option is summing up the gradients for a number of iterations\nin the manner we do for dissipating gradients 3 and then use those to create population: population = ( ∑ δW ) ∗ SparseMatrices (6)\nDoing so skews the network toward weights that are updated quickly and are increasing." }, { "heading": "6.2 NO. OF LAYERS AND NO. HIDDEN UNITS", "text": "We initially use single layer NN (784-10) to isolate the effects of kstarts algorithm from effects of other parameters that may have a large impact on performance as the size of the network grows. Those parameters i.e. 
number of hidden layers, regularization choices, types of layers may aid or adversely effect the performance of the algorithm and by performing the experiments on the basic unit of a neural network we were able to concur that the method is effective in it’s own right. After concluding the approach works we tested on three layered NN and Autoencoder." }, { "heading": "6.3 COST EFFECTIVESNESS", "text": "One beneficial feature of pre-training pruning approaches is that the best p-sparse network is quickly identified. There are a number of methods that exploit sparsity of a matrix for quicker multiplication i.e. (Yuster & Zwick, 2005), (Buluç & Gilbert, 2012) which can be used to quickly retrain large networks. Although that is out of scope for our findings." }, { "heading": "7 DISCUSSION", "text": "" }, { "heading": "7.1 EFFECT OF K", "text": "1. From the experiments done so far lower K (≈ 10) outperforms or at least performs as well as a higher value of K (≈ 100). This can be because the more times a different matrix is chosen by the network, the more times the network has to adopt to learning with only that p-sparse network." }, { "heading": "7.2 EFFECT OF P", "text": "1. A sparse network can outperforms a fully connected network for lower number of iterations and smaller networks.\n2. An appropriate value of p, for instance in these experiments p ≈ 0.5, seems to work best for random dropout, kstarts dropout and combination dropout. A poor choice of p can’t seem to be remedied by a better choice of K.\n3. p can be thought of as an information limiter. For better learning, if the network is only provided with particular features, it might have an easier time learning the class specific features in different nodes but this only remains a wishful speculation and requires further analysis. Table 1 shows relationship between k,p and no of iterations in a single layer NN (784-10)" }, { "heading": "8 LIMITATIONS", "text": "There are a number of limitations to the approach and further investigation is required in a number of domains.\n1. The NNs used are single and three layered feed forward networks and autoencoders. CNNs are not experimented upon.\n2. Only classification tasks are considered and those on only two datasets: MNIST and Fashion MNIST.\n3. Ideally using sparse matrix should make for efficient computation but since the algorithms for that are not used it at this point does not show what the time comparison of the approaches will be." }, { "heading": "9 CONCLUSIONS AND FINAL REMARKS", "text": "We present two methods for pruning weights pre-training or in the first couple of epochs. The comparisons are made against random dropout and both approaches mostly perform better than the random dropout. We provide a combination dropout approach that consistently outperforms other dropout approaches. A lot more analysis of the approach is required on multiple datasets, learning tasks and network architectures but the basic methods seem to be effective." } ]
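To complement Algorithm 2 (Section 2.2), here is a minimal NumPy sketch of one epoch of dissipating-gradients pruning; variable names are illustrative, and the use of the absolute value of the accumulated gradient is an assumption (Equation 4 compares the raw sum to epsilon):

```python
import numpy as np

def dissipating_gradients_epoch(W, grads_per_iteration, eps=1e-6):
    """Accumulate dW over an epoch's iterations and zero out weights whose
    accumulated gradient stays below eps (Equations 3-4, Algorithm 2).
    `grads_per_iteration` is an iterable of dW arrays, one per batch."""
    accumulated = np.zeros_like(W)
    for dW in grads_per_iteration:
        accumulated += dW
    keep = np.abs(accumulated) >= eps  # assumption: magnitude, not raw sum
    return W * keep, keep.astype(float)
```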
2,020
null
SP:bb0b99194e5d102320ca4cc7c89c4ae6ee514d83
[ "The paper studies “butterfly networks”, where, a logarithmic number of linear layers with sparse connections resembling the butterfly structure of the FFT algorithm, along with linear layers in smaller dimensions are used to approximate linear layers in larger dimensions. In general, the paper follows the idea of sketching to design new architectures that can reduce the number of trainable parameters. In that regard, the paper is very appealing, as it shows that replacing linear layers with the butterfly networks does not result in any loss in performance. " ]
A butterfly network consists of logarithmically many layers, each with a linear number of non-zero weights (pre-specified). The fast Johnson-Lindenstrauss transform (FJLT) can be represented as a butterfly network followed by a projection onto a random subset of the coordinates. Moreover, a random matrix based on the FJLT with high probability approximates the action of any matrix on a vector. Motivated by these facts, we propose to replace a dense linear layer in any neural network by an architecture based on the butterfly network. The proposed architecture reduces the quadratic number of weights required in a standard dense layer to nearly linear, with little compromise in the expressibility of the resulting operator. In a wide variety of experiments, including supervised prediction on both NLP and vision data, we show that this not only produces results that match and often outperform existing well-known architectures, but it also offers faster training and prediction in deployment. To understand the optimization problems posed by neural networks with a butterfly network, we study the optimization landscape of the encoder-decoder network, where the encoder is replaced by a butterfly network followed by a dense linear layer in a smaller dimension. Theoretical results presented in the paper explain why the training speed and outcome are not compromised by our proposed approach. Empirically, we demonstrate that the network performs as well as the encoder-decoder network.
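To make the parameter counting in the abstract concrete, the following NumPy sketch applies a butterfly network to a vector of length n = 2^m: there are log2(n) layers, and each layer mixes disjoint coordinate pairs with its own 2×2 block, so each layer carries only 2n non-zero weights in pre-specified positions. This is an illustrative reference implementation, not the paper's code; the name `apply_butterfly` and the twiddle layout are assumptions.

```python
import numpy as np

def apply_butterfly(twiddles, x):
    """Apply a butterfly network to x (length n = 2**m).
    twiddles: list of m arrays of shape (n // 2, 2, 2), one learnable 2x2
    mixing block per coordinate pair per layer."""
    n = x.shape[0]
    y = np.asarray(x, dtype=float).copy()
    for k, blocks in enumerate(twiddles):
        stride = 1 << k
        out = np.empty_like(y)
        pair = 0
        for i in range(n):
            if i & stride:          # processed together with partner i - stride
                continue
            j = i + stride
            a, b = y[i], y[j]
            B = blocks[pair]
            out[i] = B[0, 0] * a + B[0, 1] * b
            out[j] = B[1, 0] * a + B[1, 1] * b
            pair += 1
        y = out
    return y

# Example: n = 8 needs log2(8) = 3 layers and 2 * 8 * 3 = 48 weights,
# versus 8 * 8 = 64 for a dense layer (the gap grows with n).
rng = np.random.default_rng(0)
twiddles = [rng.standard_normal((4, 2, 2)) for _ in range(3)]
y = apply_butterfly(twiddles, rng.standard_normal(8))
```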
[ { "affiliations": [], "name": "FIXED BUTTER" } ]
[ { "authors": [ "N. Ailon", "B. Chazelle" ], "title": "The fast johnson–lindenstrauss transform and approximate nearest neighbors", "venue": "SIAM J. Comput.,", "year": 2009 }, { "authors": [ "N. Ailon", "E. Liberty" ], "title": "Fast dimension reduction using rademacher series on dual BCH codes", "venue": "Discret. Comput. Geom.,", "year": 2009 }, { "authors": [ "A. Akbik", "D. Blythe", "R. Vollgraf" ], "title": "Contextual string embeddings for sequence labeling", "venue": "In COLING 2018,", "year": 2018 }, { "authors": [ "A. Akbik", "T. Bergmann", "R. Vollgraf" ], "title": "Pooled contextualized embeddings for named entity recognition", "venue": "In NAACL 2019,", "year": 2019 }, { "authors": [ "K. Alizadeh", "P. Anish", "F. Ali", "R. Mohammad" ], "title": "Butterfly transform: An efficient fft based neural architecture design", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "P. Baldi", "K. Hornik" ], "title": "Neural networks and principal component analysis: Learning from examples without local minima", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "A.L. Cambridge" ], "title": "The olivetti faces", "venue": null, "year": 1994 }, { "authors": [ "Y. Cheng", "F.X. Yu", "R.S. Feris", "S. Kumar", "A.N. Choudhary", "S. Chang" ], "title": "An exploration of parameter redundancy in deep networks with circulant projections", "venue": "In 2015 IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "K.L. Clarkson", "D.P. Woodruff" ], "title": "Numerical linear algebra in the streaming model", "venue": "Proceedings of the 41st Annual ACM Symposium on Theory of Computing,", "year": 2009 }, { "authors": [ "J. Cooley", "J. Tukey" ], "title": "An algorithm for the machine calculation of complex fourier series", "venue": "Mathematics of Computation,", "year": 1965 }, { "authors": [ "T. Dao", "A. Gu", "M. Eichhorn", "A. Rudra", "C. Ré" ], "title": "Learning fast algorithms for linear transforms using butterfly factorizations", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "T. Dao", "N.S. Sohoni", "A. Gu", "M. Eichhorn", "A. Blonder", "M. Leszczynski", "A. Rudra", "C. R" ], "title": "e. Kaleidoscope: An efficient, learnable representation for all structured linear maps", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "D. Davido", "E. Gabrilovich", "S. Markovitch" ], "title": "Parameterized generation of labeled datasets for text categorization based on a hierarchical directory", "venue": "In 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2004 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "M. Denil", "B. Shakibi", "L. Dinh", "M. Ranzato", "N. de Freitas" ], "title": "Predicting parameters in deep learning", "venue": "In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems", "year": 2013 }, { "authors": [ "C. Ding", "S. Liao", "Y. Wang", "Z. Li", "N. Liu", "Y. Zhuo", "C. Wang", "X. Qian", "Y. Bai", "G. Yuan", "X. Ma", "Y. Zhang", "J. Tang", "Q. Qiu", "X. Lin", "B. 
Yuan" ], "title": "Circnn: accelerating and compressing deep neural networks using block-circulant weight matrices", "venue": "In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture,", "year": 2017 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Identity mappings in deep residual networks. In Computer Vision - ECCV 2016", "venue": "European Conference,", "year": 2016 }, { "authors": [ "N. Imamoglu", "Y. Oishi", "X. Zhang", "Y.F.G. Ding", "T. Kouyama", "R. Nakamura" ], "title": "Hyperspectral image dataset for benchmarking on salient object detection", "venue": "In Tenth International Conference on Quality of Multimedia Experience, (QoMEX),", "year": 2018 }, { "authors": [ "P. Indyk", "A. Vakilian", "Y. Yuan" ], "title": "Learning-based low-rank approximations", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "V. Jain", "N. Pillai", "A. Smith" ], "title": "Kac meets johnson and lindenstrauss: a memory-optimal, fast johnson-lindenstrauss", "venue": "transform. arXiv,", "year": 2020 }, { "authors": [ "W. Johnson", "J. Lindenstrauss" ], "title": "Extensions of lipschitz maps into a hilbert space", "venue": "Contemporary Mathematics, 26:189–206,", "year": 1984 }, { "authors": [ "K. Kawaguchi" ], "title": "Deep learning without poor local minima. In Advances in Neural Information Processing Systems", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "F. Krahmer", "R. Ward" ], "title": "New and improved johnson–lindenstrauss embeddings via the restricted isometry property", "venue": "SIAM Journal on Mathematical Analysis, 43:1269–1281,", "year": 2011 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "N. Lee", "T. Ajanthan", "P.H.S. Torr" ], "title": "Snip: single-shot network pruning based on connection sensitivity", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Y. Li", "H. Yang" ], "title": "Interpolative butterfly factorization", "venue": "SIAM J. Scientific Computing,", "year": 2017 }, { "authors": [ "Z. Lu", "V. Sindhwani", "T.N. Sainath" ], "title": "Learning compact recurrent neural networks", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2016 }, { "authors": [ "M.P. Marcus", "B. Santorini", "M.A. Marcinkiewicz" ], "title": "Building a large annotated corpus of English: The Penn Treebank", "venue": "Computational Linguistics,", "year": 1993 }, { "authors": [ "E. Michielssen", "A. Boag" ], "title": "A multilevel matrix decomposition algorithm for analyzing scattering from large structures", "venue": "IEEE Transactions on Antennas and Propagation,", "year": 1996 }, { "authors": [ "D.C. Mocanu", "E. Mocanu", "P. Stone", "P.H. Nguyen", "M. Gibescu", "A. Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "M. Moczulski", "M. Denil", "J. Appleyard", "N. de Freitas" ], "title": "ACDC: A structured efficient linear layer", "venue": "In Y. Bengio and Y. LeCun, editors, 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "M. O’Neil", "F. Woolfe", "V. 
Rokhlin" ], "title": "An algorithm for the rapid evaluation of special function transforms", "venue": "Applied and Computational Harmonic Analysis,", "year": 2010 }, { "authors": [ "T.N. Sainath", "B. Kingsbury", "V. Sindhwani", "E. Arisoy", "B. Ramabhadran" ], "title": "Low-rank matrix factorization for deep neural network training with high-dimensional output targets", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "T. Sarlós" ], "title": "Improved approximation algorithms for large matrices via random projections", "venue": "Annual IEEE Symposium on Foundations of Computer Science (FOCS", "year": 2006 }, { "authors": [ "S. D" ], "title": "Seljebotn. WAVEMOTH-FAST SPHERICAL HARMONIC TRANSFORMS BY BUTTERFLY MATRIX COMPRESSION", "venue": "The Astrophysical Journal Supplement Series,", "year": 2012 }, { "authors": [ "V. Sindhwani", "T.N. Sainath", "S. Kumar" ], "title": "Structured transforms for small-footprint deep learning", "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "M. Tan", "Q.V. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "A.T. Thomas", "A. Gu", "T. Dao", "A. Rudra", "C. Ré" ], "title": "Learning compressed transforms with low displacement rank", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "E.F. Tjong Kim Sang", "F. De Meulder" ], "title": "Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition", "venue": "In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL", "year": 2003 }, { "authors": [ "S. Verdenius", "M. Stol", "P. Forré" ], "title": "Pruning via iterative ranking of sensitivity statistics", "venue": null, "year": 2006 }, { "authors": [ "C. Wang", "G. Zhang", "R.B. Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Z. Yang", "M. Moczulski", "M. Denil", "N. de Freitas", "A.J. Smola", "L. Song", "Z. Wang" ], "title": "Deep fried convnets", "venue": "In 2015 IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "J. Ye", "L. Wang", "G. Li", "D. Chen", "S. Zhe", "X. Chu", "Z. Xu" ], "title": "Learning compact recurrent neural networks with block-term tensor decomposition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Baldi", "Hornik" ], "title": "We first note that our result continues to hold even if B in the theorem is replaced by any structured matrix. For example the result continues to hold if B is an `× n matrix with one non-zero entry per column, as is the case with a random sparse sketching matrix Clarkson and", "venue": null, "year": 2009 }, { "authors": [ "Baldi", "Hornik" ], "title": "The critical points of the encoder-decoder network are analyzed in Baldi and Hornik", "venue": "Suppose the eigenvalues of Y X (XX )−1XY T are", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "A butterfly network (see Figure 6 in Appendix A) is a layered graph connecting a layer of n inputs to a layer of n outputs withO(log n) layers, where each layer contains 2n edges. The edges connecting adjacent layers are organized in disjoint gadgets, each gadget connecting a pair of nodes in one layer with a corresponding pair in the next layer by a complete graph. The distance between pairs doubles from layer to layer. This network structure represents the execution graph of the Fast Fourier Transform (FFT) (Cooley and Tukey, 1965), Walsh-Hadamard transform, and many important transforms in signal processing that are known to have fast algorithms to compute matrix-vector products.\nAilon and Chazelle (2009) showed how to use the Fourier (or Hadamard) transform to perform fast Euclidean dimensionality reduction with Johnson and Lindenstrauss (1984) guarantees. The resulting transformation, called Fast Johnson Lindenstrauss Transform (FJLT), was improved in subsequent works (Ailon and Liberty, 2009; Krahmer and Ward, 2011). The common theme in this line of work is to define a fast randomized linear transformation that is composed of a random diagonal matrix, followed by a dense orthogonal transformation which can be represented via a butterfly network, followed by a random projection onto a subset of the coordinates (this research is still active, see e.g. Jain et al. (2020)). In particular, an FJLT matrix can be represented (explicitly) by a butterfly network followed by projection onto a random subset of coordinates (a truncation operator). We refer to such a representation as a truncated butterfly network (see Section 4).\nSimple Johnson-Lindenstrauss like arguments show that with high probability for any W ∈ Rn2×n1 and any x ∈ Rn1 , Wx is close to (JT2 J2)W (JT1 J1)x where J1 ∈ Rk1×n1 and J2 ∈ Rk2×n2 are both FJLT, and k1 = log n1, k2 = log n2 (see Section 4.2 for details). Motivated by this, we propose to replace a dense (fully-connected) linear layer of size n2 × n1 in any neural network by the following architecture: JT1 W ′J2, where J1, J2 can be represented by a truncated butterfly\nnetwork and W ′ is a k2 × k1 dense linear layer. The clear advantages of such a strategy are: (1) almost all choices of the weights from a specific distribution, namely the one mimicking FJLT, preserve accuracy while reducing the number of parameters, and (2) the number of weights is nearly linear in the layer width of W (the original matrix). Our empirical results demonstrate that this offers faster training and prediction in deployment while producing results that match and often outperform existing known architectures. Compressing neural networks by replacing linear layers with structured linear transforms that are expressed by fewer parameters have been studied extensively in the recent past. We compare our approach with these related works in Section 3.\nSince the butterfly structure adds logarithmic depth to the architecture, it might pose optimization related issues. Moreover, the sparse structure of the matrices connecting the layers in a butterfly network defies the general theoretical analysis of convergence of deep linear networks. We take a small step towards understanding these issues by studying the optimization landscape of a encoder-decoder network (two layer linear neural network), where the encoder layer is replaced by a truncated butterfly network followed by a dense linear layer in fewer parameters. 
This replacement is motivated by the result of Sarlós (2006), related to fast randomized low-rank approximation of matrices using FJLT (see Section 4.2 for details). We consider this replacement instead of the architecture consisting of two butterfly networks and a dense linear layer as proposed earlier, because it is easier to analyze theoretically. We also empirically demonstrate that our new network with fewer parameters performs as well as an encoder-decoder network.\nThe encoder-decoder network computes the best low-rank approximation of the input matrix. It is well-known that with high probability a close to optimal low-rank approximation of a matrix is obtained by either pre-processing the matrix with an FJLT (Sarlós, 2006) or a random sparse matrix structured as given in Clarkson and Woodruff (2009) and then computing the best low-rank approximation from the rows of the resulting matrix1. A recent work by Indyk et al. (2019) studies this problem in the supervised setting, where they find the best pre-processing matrix structured as given in Clarkson and Woodruff (2009) from a sample of matrices (instead of using a random sparse matrix). Since an FJLT can be represented by a truncated butterfly network, we emulate the setting of Indyk et al. (2019) but learn the pre-processing matrix structured as a truncated butterfly network." }, { "heading": "2 OUR CONTRIBUTION AND POTENTIAL IMPACT", "text": "We provide an empirical report, together with a theoretical analysis to justify our main idea of using sparse linear layers with a fixed butterfly network in deep learning. Our findings indicate that this approach, which is well rooted in the theory of matrix approximation and optimization, can offer significant speedup and energy saving in deep learning applications. Additionally, we believe that this work would encourage more experiments and theoretical analysis to better understand the optimization and generalization of our proposed architecture (see Future Work section).\nOn the empirical side – The outcomes of the following experiments are reported:\n(1) In Section 6.1, we replace a dense linear layer in the standard state-of-the-art networks, for both image and language data, with an architecture that constitutes the composition of (a) truncated butterfly network, (b) dense linear layer in smaller dimension, and (c) transposed truncated butterfly network (see Section 4.2). The structure parameters are chosen so as to keep the number of weights near linear (instead of quadratic).\n(2) In Sections 6.2 and 6.3, we train a linear encoder-decoder network in which the encoder is replaced by a truncated butterfly network followed by a dense linear layer in smaller dimension. These experiments support our theoretical result. The network structure parameters are chosen so as to keep the number of weights in the (replaced) encoder near linear in the input dimension. Our results (also theoretically) demonstrate that this has little to no effect on the performance compared to the standard encoder-decoder network.\n(3) In Section 7, we learn the best pre-processing matrix structured as a truncated butterfly network to perform low-rank matrix approximation from a given sample of matrices. We compare our results\n1The pre-processing matrix is multiplied from the left.\nto that of Indyk et al. 
(2019), which learn the pre-processing matrix structured as given in Clarkson and Woodruff (2009).\nOn the theoretical side – The optimization landscape of linear neural networks with dense matrices have been studied by Baldi and Hornik (1989), and Kawaguchi (2016). The theoretical part of this work studies the optimization landscape of the linear encoder-decoder network in which the encoder is replaced by a truncated butterfly network followed by a dense linear layer in smaller dimension. We call such a network as the encoder-decoder butterfly network. We give an overview of our main result, Theorem 1, here. Let X ∈ Rn×d and Y ∈ Rm×d be the data and output matrices respectively. Then the encoder-decoder butterfly network is given as Y = DEBX , where D ∈ Rm×k and E ∈ Rk×` are dense layers, B is an ` × n truncated butterfly network (product of log n sparse matrices) and k ≤ ` ≤ m ≤ n (see Section 5). The objective is to learn D,E and B that minimizes ||Y − Y ||2F. Theorem 1 shows how the loss at the critical points of such a network depends on the eigenvalues of the matrix Σ = Y XTBT (BXXTBT )−1BXY T 2. In comparison, the loss at the critical points of the encoder-decoder network (without the butterfly network) depends on the eigenvalues of the matrix Σ′ = Y XT (XXT )−1XY T (Baldi and Hornik, 1989). In particular, the loss depends on how the learned matrix B changes the eigenvalues of Σ′. If we learn only for an optimal D and E, keeping B fixed (as done in the experiment in Section 6.3) then it follows from Theorem 1 that every local minima is a global minima and that the loss at the local/global minima depends on howB changes the top k eigenvalues of Σ′. This inference together with a result by Sarlós (2006) is used to give a worst-case guarantee in the special case when Y = X (called auto-encoders that capture PCA; see the below Theorem 1)." }, { "heading": "3 RELATED WORK", "text": "Important transforms like discrete Fourier, discrete cosine, Hadamard and many more satisfy a property called complementary low-rank property, recently defined by Li et al. (2015). For an n×n matrix satisfying this property related to approximation of specific sub-matrices by low-rank matrices, Michielssen and Boag (1996) and O’Neil et al. (2010) developed the butterfly algorithm to compute the product of such a matrix with a vector inO(n log n) time. The butterfly algorithm factorizes such a matrix into O(log n) many matrices, each with O(n) sparsity. In general, the butterfly algorithm has a pre-computation stage which requires O(n2) time (O’Neil et al., 2010; Seljebotn, 2012). With the objective of reducing the pre-computation cost Li et al. (2015); Li and Yang (2017) compute the butterfly factorization for an n × n matrix satisfying the complementary low-rank property in O(n 3 2 ) time. This line of work does not learn butterfly representations for matrices or apply it in neural networks, and is incomparable to our work.\nA few works in the past have used deep learning models with structured matrices (as hidden layers). Such structured matrices can be described using fewer parameters compared to a dense matrix, and hence a representation can be learned by optimizing over a fewer number of parameters. 
Examples of structured matrices used include low-rank matrices (Denil et al., 2013; Sainath et al., 2013), circulant matrices (Cheng et al., 2015; Ding et al., 2017), low-distortion projections (Yang et al., 2015), Toeplitz like matrices (Sindhwani et al., 2015; Lu et al., 2016; Ye et al., 2018), Fourier-related transforms (Moczulski et al., 2016) and matrices with low-displacement rank (Thomas et al., 2018). Recently Alizadeh et al. (2020) demonstrated the benefits of replacing the pointwise convolutional layer in CNN’s by a butterfly network. Other works by Mocanu et al. (2018); Lee et al. (2019); Wang et al. (2020); Verdenius et al. (2020) consider a different approach to sparsify neural networks. The works closest to ours are by Yang et al. (2015), Moczulski et al. (2016), and Dao et al. (2020) and we make a comparison below.\nYang et al. (2015) and Moczulski et al. (2016) attempt to replace dense linear layers with a stack of structured matrices, including a butterfly structure (the Hadamard or the Cosine transform), but they do not place trainable weights on the edges of the butterfly structure as we do. Note that adding these trainable weights does not compromise the run time benefits in prediction, while adding to the expressiveness of the network in our case. Dao et al. (2020) replace handcrafted structured subnetworks in machine learning models by a kaleidoscope layer, which consists of compositions of butterfly matrices. This is motivated by the fact that the kaleidoscope hierarchy captures a structured matrix exactly and optimally in terms of multiplication operations required to perform the matrix\n2At a critical point the gradient of the loss function with respect to the parameters in the network is zero.\nvector product operation. Their work differs from us as we propose to replace any dense linear layer in a neural network (instead of a structured sub-network) by the architecture proposed in Section 4.2. Our approach is motivated by theoretical results which establish that this can be done with almost no loss in representation.\nFinally, Dao et al. (2019) show that butterfly representations of standard transformations like discrete Fourier, discrete cosine, Hadamard mentioned above can be learnt efficiently. They additionally show the following: a) for the benchmark task of compressing a single hidden layer model they compare the network constituting of a composition of butterfly networks with the classification accuracy of a fully-connected linear layer and b) in ResNet a butterfly sub-network is added to get an improved result. In comparison, our approach to replace a dense linear layer by the proposed architecture in Section 4.2 is motivated by well-known theoretical results as mentioned previously, and the results of the comprehensive list of experiments in Section 6.1 support our proposed method." }, { "heading": "4 PROPOSED REPLACEMENT FOR A DENSE LINEAR LAYER", "text": "In Section 4.1, we define a truncated butterfly network, and in Section 4.2 we motivate and state our proposed architecture based on truncated butterfly network to replace a dense linear layer in any neural network. All logarithms are in base 2, and [n] denotes the set {1, . . . , n}." }, { "heading": "4.1 TRUNCATED BUTTERFLY NETWORK", "text": "Definition 4.1 (Butterfly Network). Let n be an integral power of 2. Then an n×n butterfly network B (see Figure 6) is a stack of of log n linear layers, where in each layer i ∈ {0, . . . 
, log n − 1}, a bipartite clique connects between pairs of nodes j1, j2 ∈ [n], for which the binary representation of j1 − 1 and j2 − 1 differs only in the i’th bit. In particular, the number of edges in each layer is 2n.\nIn what follows, a truncated butterfly network is a butterfly network in which the deepest layer is truncated, namely, only a subset of ` neurons are kept and the remaining n−` are discarded. The integer ` is a tunable parameter, and the choice of neurons is always assumed to be sampled uniformly at random and fixed throughout training in what follows. The effective number of parameters (trainable weights) in a truncated butterfly network is at most 2n log ` + 6n, for any ` and any choice of neurons selected from the last layer.3 We include a proof of this simple upper bound in Appendix F for lack of space (also, refer to Ailon and Liberty (2009) for a similar result related to computation time of truncated FFT). The reason for studying a truncated butterfly network follows (for example) from the works (Ailon and Chazelle, 2009; Ailon and Liberty, 2009; Krahmer and Ward, 2011). These papers define randomized linear transformations with the Johnson-Lindenstrauss property and an efficient computational graph which essentially defines the truncated butterfly network. In what follows, we will collectively denote these constructions by FJLT. 4" }, { "heading": "4.2 MATRIX APPROXIMATION USING BUTTERFLY NETWORKS", "text": "We begin with the following proposition, following known results on matrix approximation (proof in Appendix B).\nProposition 1. Suppose J1 ∈ Rk1×n1 and J2 ∈ Rk2×n2 are matrices sampled from FJLT distribution, and let W ∈ Rn2×n1 . Then for the random matrix W ′ = (JT2 J2)W (JT1 J1), any unit vector x ∈ Rn1 and any ∈ (0, 1), Pr [‖W ′x−Wx‖ ≤ ‖W‖] ≥ 1− e−Ω(min{k1,k2} 2) .\nFrom Proposition 1 it follows that W ′ approximates the action of W with high probability on any given input vector. Now observe that W ′ is equal to JT2 W̃J1, where W̃ = J2WJ T 1 . Since J1 and J2 are FJLT, they can be represented by a truncated butterfly network, and hence it is conceivable to replace a dense linear layer connecting n1 neurons to n2 neurons (containing n1n2 variables) in any\n3Note that if n is not a power of 2 then we work with the first n columns of the ` × n′ truncated butterfly network, where n′ is the closest number to n that is greater than n and is a power of 2.\n4To be precise, the construction in Ailon and Chazelle (2009), Ailon and Liberty (2009), and Krahmer and Ward (2011) also uses a random diagonal matrix, but the values of the diagonal entries can be ‘absorbed’ inside the weights of the first layer of the butterfly network.\nneural network with a composition of three gadgets: a truncated butterfly network of size k1 × n1, followed by a dense linear layer of size k2 × k1, followed by the transpose of a truncated butterfly network of size k2 × n2. In Section 6.1, we replace dense linear layers in common deep learning networks with our proposed architecture, where we set k1 = log n1 and k2 = log n2." }, { "heading": "5 ENCODER-DECODER BUTTERFLY NETWORK", "text": "Let X ∈ Rn×d, and Y ∈ Rm×d be data and output matrices respectively, and k ≤ m ≤ n. Then the encoder-decoder network for X is given as" }, { "heading": "Y = DEX", "text": "where E ∈ Rk×n, and D ∈ Rm×k are called the encoder and decoder matrices respectively. For the special case when Y = X , it is called auto-encoders. 
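Returning to the replacement proposed in Section 4.2 above, the following sketch instantiates the three-gadget composition for a dense n2 × n1 layer, reusing the Butterfly module from the earlier snippet and assuming n1 and n2 are powers of two (footnote 3 describes the padding used otherwise). It is a simplified illustration under our own naming: the truncations are fixed random coordinate subsets, k1 = log2 n1 and k2 = log2 n2 as in the experiments, and the "transpose of a truncated butterfly" is approximated by zero-padding the k2 coordinates back into R^{n2} and applying an independent n2 × n2 butterfly with the same sparsity pattern, rather than the literal transpose.

```python
class ButterflyDenseReplacement(nn.Module):
    """Sketch of the Section 4.2 replacement for a dense n2 x n1 linear layer:
    truncated butterfly (k1 x n1) -> dense (k2 x k1) -> 'transposed' truncated
    butterfly (n2 x k2)."""
    def __init__(self, n1, n2):
        super().__init__()
        k1, k2 = int(math.log2(n1)), int(math.log2(n2))
        self.in_bfly, self.out_bfly = Butterfly(n1), Butterfly(n2)
        self.register_buffer("keep_in", torch.randperm(n1)[:k1])   # fixed truncation
        self.register_buffer("keep_out", torch.randperm(n2)[:k2])
        self.dense = nn.Linear(k1, k2, bias=False)                 # small dense core
        self.n2 = n2

    def forward(self, x):                        # x: (batch, n1) -> (batch, n2)
        z = self.in_bfly(x)[:, self.keep_in]     # truncated butterfly, (batch, k1)
        z = self.dense(z)                        # dense k2 x k1 layer, (batch, k2)
        pad = x.new_zeros(x.shape[0], self.n2)   # lift the k2 coordinates to R^{n2}
        pad[:, self.keep_out] = z
        return self.out_bfly(pad)                # butterfly on the padded vector
```

For n1 = n2 = 1024 this stores roughly 2 · 1024 · 10 weights per butterfly plus a 10 × 10 dense block, in place of the 1024^2 ≈ 10^6 weights of the dense layer it replaces.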
The optimization problem is to learn matrices D and E such that ||Y − Y ||2F is minimized. The optimal solution is denoted as Y ∗, D∗ and E∗5. In the case of auto-encoders X∗ = Xk, where Xk is the best rank k approximation of X . In this section, we study the optimization landscape of the encoder-decoder butterfly network : an encoder-decoder network, where the encoder is replaced by a truncated butterfly network followed by a dense linear layer in smaller dimension. Such a replacement is motivated by the following result from Sarlós (2006), in which ∆k = ||Xk −X||2F. Proposition 2. Let X ∈ Rn×d. Then with probability at least 1/2, the best rank k approximation of X from the rows of JX (denoted Jk(X)), where J is sampled from an ` × n FJLT distribution and ` = (k log k + k/ ) satisfies ||Jk(X)−X||2F ≤ (1 + )∆k.\nProposition 2 suggests that in the case of auto-encoders we could replace the encoder with a truncated butterfly network of size ` × n followed by a dense linear layer of size k × `, and obtain a network with fewer parameters but loose very little in terms of representation. Hence, it is worthwhile investigating the representational power of the encoder-decoder butterfly network\nY = DEBX . (1)\nHere, X , Y and D are as in the encoder-decoder network, E ∈ Rk×` is a dense matrix, and B is an `× n truncated butterfly network. In the encoder-decoder butterfly network the encoding is done using EB, and decoding is done using D. This reduces the number of parameters in the encoding matrix from kn (as in the encoder-decoder network) to k` + O(n log `). Again the objective is to learn matrices D and E, and the truncated butterfly network B such that ||Y − Y ||2F is minimized. The optimal solution is denoted as Y ∗, D∗, E∗, and B∗. Theorem 1 shows that the loss at a critical point of such a network depends on the eigenvalues of Σ(B) = Y XTBT (BXXTBT )−1XY T , when BXXTBT is invertible and Σ(B) has ` distinct positive eigenvalues.The loss L is defined as ||Y − Y ||2F. Theorem 1. Let D,E and B be a point of the encoder-decoder network with a truncated butterfly network satisfying the following: a) BXXTBT is invertible, b) Σ(B) has ` distinct positive eigenvalues λ1 > . . . > λ`, and c) the gradient of L(Y ) with respect to the parameters in D and E matrix is zero. Then corresponding to this point (and hence corresponding to every critical point) there is an I ⊆ [`] such that L(Y ) at this point is equal to tr(Y Y T ) − ∑ i∈I λi. Moreover if the point is a local minima then I = [k].\nThe proof of Theorem 1 is given in Appendix C. We also compare our result with that of Baldi and Hornik (1989) and Kawaguchi (2016), which study the optimization landscape of dense linear neural networks in Appendix C. From Theorem 1 it follows that if B is fixed and only D and E are trained then a local minima is indeed a global minima. We use this to claim a worst-case guarantee using a two-phase learning approach to train an auto-encoder. In this case the optimal solution is denoted as Bk(Y ), DB , and EB . Observe that when Y = X , Bk(X) is the best rank k approximation of X computed from the rows of BX .\nTwo phase learning for auto-encoder: Let ` = k log k + k/ and consider a two phase learning strategy for auto-encoders, as follows: In phase one B is sampled from an FJLT distribution, and then only D and E are trained keeping B fixed. Suppose the algorithm learns D′ and E′ at the end\n5Possibly multiple D∗ and E∗ exist such that Y ∗ = D∗E∗X .\nof phase one, and X ′ = D′E′B. 
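Below is a minimal sketch of the encoder-decoder butterfly network of Eq. (1) together with the two-phase procedure just described, again reusing the Butterfly module from the earlier snippet. It is an illustration under our own naming and conventions (data points are passed as rows, i.e., the code operates on X^T), not the authors' code; for an auto-encoder one simply passes Yt = Xt.

```python
class EncoderDecoderButterfly(nn.Module):
    """Y_hat = D E B X (Eq. 1): B is an l x n truncated butterfly, E is k x l
    dense, D is m x k dense.  Data points are the rows of the input here."""
    def __init__(self, n, m, k, l):
        super().__init__()
        self.bfly = Butterfly(n)
        self.register_buffer("keep", torch.randperm(n)[:l])   # truncation of B
        self.E = nn.Linear(l, k, bias=False)
        self.D = nn.Linear(k, m, bias=False)

    def forward(self, Xt):                      # Xt: (d, n), rows are data points
        return self.D(self.E(self.bfly(Xt)[:, self.keep]))

def two_phase_fit(model, Xt, Yt, epochs=(500, 500), lr=1e-3):
    """Phase one: B fixed, only D and E are trained.  Phase two: train all."""
    loss_fn = nn.MSELoss(reduction="sum")
    dec_enc = list(model.D.parameters()) + list(model.E.parameters())
    for params, n_epochs in ((dec_enc, epochs[0]),
                             (list(model.parameters()), epochs[1])):
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(n_epochs):
            opt.zero_grad()
            loss_fn(model(Xt), Yt).backward()
            opt.step()
    return model
```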
Then Theorem 1 guarantees that, assuming Σ(B) has ` distinct positive eigenvalues and D′, E′ are a local minima, D′ = DB , E′ = EB , and X ′ = Bk(X). Namely X ′ is the best rank k approximation of X from the rows of BX . From Proposition 2 with probability at least 12 , L(X\n′) ≤ (1 + )∆k. In the second phase all three matrices are trained to improve the loss. In Sections 6.2 and 6.3 we train an encoder-decoder butterfly network using the standard gradient descent method. In these experiments the truncated butterfly network is initialized by sampling it from an FJLT distribution, and D and E are initialized randomly as in Pytorch." }, { "heading": "6 EXPERIMENTS ON DENSE LAYER REPLACEMENT AND ENCODER-DECODER BUTTERFLY NETWORK", "text": "In this section we report the experimental results based on the ideas presented in Sections 4.2 and 5." }, { "heading": "6.1 REPLACING DENSE LINEAR LAYERS BY THE PROPOSED ARCHITECTURE", "text": "This experiment replaces a dense linear layer of size n2×n1 in common deep learning architectures with the network proposed in Section 4.2.6 The truncated butterfly networks are initialized by sampling it from the FJLT distribution, and the dense matrices are initialized randomly as in Pytorch. We set k1 = log n1 and k2 = log n2. The datasets and the corresponding architectures considered are summarized in Table 1. For each dataset and model, the objective function is the same as defined in the model, and the generalization and convergence speed between the original model and the modified one (called the butterfly model for convenience) are compared. Figure 7 in Appendix D.1 reports the number of parameters in the dense linear layer of the original model, and in the replaced network, and Figure 8 in Appendix D.1 displays the number of parameter in the original model and the butterfly model. In particular, Figure 7 shows the significant reduction in the number of parameters obtained by the proposed replacement. On the left of Figure 1, the test accuracy of the original model and the butterfly model is reported, where the black vertical lines denote the error bars corresponding to standard deviation, and the values above the rectangles denote the average accuracy. On the right of Figure 1 observe that the test accuracy for the butterfly model trained with stochastic gradient descent is even better than the original model trained with Adam in the first few epochs. Figure 12 in Appendix D.1 compares the test accuracy in the the first 20 epochs of the original and butterfly model. The results for the NLP tasks in the interest of space are reported in Figure 9, Appendix D.1. The training and inference times required for the original model and the butterfly model in each of these experiments are reported in Figures 10 and 11 in Appendix D.1. We remark that the modified architecture is also trained for fewer epochs. In almost all the cases the modified architecture does better than the normal architecture, both in the rate of convergence and in the final accuracy/F1 score. Moreover, the training time for the modified architecture is less." }, { "heading": "6.2 ENCODER-DECODER BUTTERFLY NETWORK WITH SYNTHETIC GAUSSIAN AND REAL DATA", "text": "This experiment tests whether gradient descent based techniques can be used to train encoderdecoder butterfly network. In all the experiments in this section Y = X . 
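The experiments above repeatedly initialize the truncated butterfly network "by sampling it from an FJLT distribution". The helper below sketches one way to do that for the Butterfly module from the earlier snippets: it sets the weights so that the network computes a random ±1 diagonal followed by the normalized Walsh-Hadamard transform (the dense orthogonal part of an FJLT, with the diagonal absorbed into the first layer as noted in footnote 4), while the coordinate-subsampling part is played by the fixed keep buffer of the truncated modules. This is our own simplified reading of the initialization; some FJLT variants include an additional sparse random projection that is omitted here.

```python
def fjlt_init_(bfly, seed=0):
    """In-place FJLT-style initialization of a Butterfly module: the weights are
    set so the network computes H D (normalized Hadamard times random signs)."""
    g = torch.Generator().manual_seed(seed)
    signs = (torch.randint(0, 2, (bfly.n,), generator=g) * 2 - 1).float()
    idx = torch.arange(bfly.n)
    with torch.no_grad():
        for i in range(bfly.p):
            bit = (idx >> i) & 1                      # i-th bit of each coordinate
            # Hadamard gadget: (x_a, x_b) -> (x_a + x_b, x_a - x_b) / sqrt(2)
            bfly.w[i, :, 0] = (1 - 2 * bit).float() / math.sqrt(2.0)
            bfly.w[i, :, 1] = 1.0 / math.sqrt(2.0)
        bfly.w[0, :, 0] *= signs                      # absorb the sign diagonal
        bfly.w[0, :, 1] *= signs[bfly.partner[0]]
    return bfly
```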
Five types of data matrices are tested, whose attributes are specified in Table 2.7 Two among them are random and\n6In all the architectures considered the final linear layer before the output layer is replaced, and n1 and n2 depend on the architecture.\n7In Table 2 HS-SOD denotes a dataset for hyperspectral images from natural scenes (Imamoglu et al., 2018).\nthree are constructed using standard public real image datasets. In the interest of space, the construction of the data matrices is explained in Appendix D.2. For the matrices constructed from the image datasets, the input coordinates are randomly permuted, which ensures the network cannot take advantage of the spatial structure in the data. For each of the data matrices the loss obtained via training the truncated butterfly network with the Adam optimizer is compared to ∆k (denoted as PCA) and ||Jk(X)−X||2F where J is an `× n matrix sampled from the FJLT distribution (denoted as FJLT+PCA). Figure 2 reports the loss on Gaussian 1 and MNIST, whereas Figure 13 in Appendix D.2 reports the loss for the remaining data matrices. Observe that for all values of k the loss for the encoder-decoder butterfly network is almost equal to ∆k, and is in fact ∆k for small and large values of k." }, { "heading": "6.3 TWO-PHASE LEARNING FOR ENCODER-DECODER BUTTERFLY NETWORK", "text": "This experiment is similar to the experiment in Section 6.2 but the training in this case is done in two phases. In the first phase, B is fixed and the network is trained to determine an optimal D and E. In the second phase, the optimalD andE determined in phase one are used as the initialization, and the\nnetwork is trained over D,E and B to minimize the loss. Theorem 1 ensures worst-case guarantees for this two phase training (see below the theorem). Figure 3 reports the approximation error of an image from Imagenet. The red and green lines in Figure 3 correspond to the approximation error at the end of phase one and two respectively." }, { "heading": "7 SKETCHING ALGORITHM FOR LOW-RANK MATRIX DECOMPOSITION PROBLEM USING BUTTERFLY NETWORK", "text": "The recent influential work by Indyk et al. (2019) considers a supervised learning approach to compute an `×n pre-conditioning matrixB for low-rank approximation of n×dmatrices. The matrixB has a fixed sparse structure as in Clarkson and Woodruff (2009), each column as one non-zero entry (chosen randomly) which are learned to minimize the loss over a training set of matrices. In this section, we present experiments with the setting being similar to that in Indyk et al. (2019), except that B is now represented as an ` × n truncated butterfly network. Our setting is similar to that in Indyk et al. (2019), except that B is now represented as an ` × n truncated butterfly network. Our experiments suggests that indeed a learned truncated butterfly network does better than a random matrix, and even a learned B as in Indyk et al. (2019).\nSetup: Suppose X1, . . . , Xt ∈ Rn×d are training matrices sampled from a distribution D. Then a B is computed that minimizes the following empirical loss: ∑ i∈[t] ||Xi − Bk(Xi)||2F. We compute Bk(Xi) using truncated SVD of BXi (as in Algorithm 1, Indyk et al. (2019)). Similar to Indyk et al. (2019), the matrix B is learned by the back-propagation algorithm that uses a differentiable SVD implementation to calculate the gradients, followed by optimization with Adam such that the butterfly structure of B is maintained. 
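To illustrate the training procedure described in the setup above, here is a minimal sketch of learning a butterfly-structured sketching matrix B with a differentiable truncated SVD, reusing the Butterfly module from earlier. The function names and optimization details are ours; it follows the spirit of the Algorithm 1-style step rather than reproducing the exact implementation, and the SVD gradient can be ill-conditioned when singular values are nearly degenerate.

```python
def apply_sketch(bfly, keep, X):
    """B X for a butterfly-structured l x n sketch B (butterfly + row selection)."""
    return bfly(X.T).T[keep]                           # (l, d)

def rank_k_from_sketch(BX, X, k):
    """B_k(X): best rank-k approximation of X restricted to the row span of B X."""
    Q, _ = torch.linalg.qr(BX.T)                       # (d, l), orthonormal columns
    U, S, Vh = torch.linalg.svd(X @ Q, full_matrices=False)
    return (U[:, :k] * S[:k]) @ Vh[:k] @ Q.T

def train_butterfly_sketch(train_Xs, n, l, k, epochs=10, lr=1e-3):
    bfly = Butterfly(n)                                # from the earlier sketch
    keep = torch.randperm(n)[:l]                       # fixed truncation of B
    opt = torch.optim.Adam(bfly.parameters(), lr=lr)
    for _ in range(epochs):
        for X in train_Xs:                             # each X: (n, d)
            approx = rank_k_from_sketch(apply_sketch(bfly, keep, X), X, k)
            loss = ((approx - X) ** 2).sum()           # empirical loss of Section 7
            opt.zero_grad()
            loss.backward()
            opt.step()
    return bfly, keep
```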
The learned B can be used as the pre-processing matrix for any matrix in the future. The test error for a matrix B and a test set Te is defined as follows:\nErrTe(B) = EX∼Te [ ||X −Bk(X)||2F ] − AppTe, where AppTe = EX∼Te [ ||X −Xk||2F ] .\nExperiments and Results: The experiments are performed on the datasets shown in Table 3. In HS-SOD Imamoglu et al. (2018) and CIFAR-10 Krizhevsky (2012) 400 training matrices (t = 400), and 100 test matrices are sampled, while in Tech 200 training matrices (t = 200), and 95 test matrices are sampled. In Tech Davido et al. (2004) each matrix has 835,422 rows but on average only 25,389 rows and 195 columns contain non-zero entries. For the same reason as in Section 6.2 in each dataset, the coordinates of each row are randomly permuted. Some of the matrices in the datasets have much larger singular values than the others, and to avoid imbalance in the dataset, the matrices are normalized so that their top singular values are all equal, as done in Indyk et al. (2019). For each of the datasets, the test error for the learned B via our truncated butterfly structure\nis compared to the test errors for the following three cases: 1) B is a learned as a sparse sketching matrix as in Indyk et al. (2019), b) B is a random sketching matrix as in Clarkson and Woodruff (2009), and c) B is an ` × n Gaussian matrix. Figure 4 compares the test error for ` = 20, and k = 10, where AppTe = 10.56. Figure 14 in Appendix E compares the test errors of the different methods in the extreme case when k = 1, and Figure 15 in Appendix E compares the test errors of the different methods for various values of `. Table 4 in Appendix E in Appendix E reports the test error for different values of ` and k. Figure 16 in in Appendix E shows the test error for ` = 20 and k = 10 during the training phase on HS-SOD. In Figure 16 it is observed that the butterfly learned is able to surpass sparse learned after a merely few iterations.\nFigure 5 compares the test error for the learned B via our truncated butterfly structure to a learned matrix B with N non-zero entries in each column – the N non-zero location for each column are chosen uniformly at random. The reported test errors are on HS-SOD, when ` = 20 and k = 10. Interestingly, the error for butterfly learned is not only less than the error for sparse learned (N = 1 as in (Indyk et al., 2019)) but also less than than the error for dense learned (N = 20). In particular, our results indicate that using a learned butterfly sketch can significantly reduce the approximation loss compared to using a learned sparse sketching matrix." }, { "heading": "8 DISCUSSION AND FUTURE WORK", "text": "Discussion: Among other things, this work showed that it is beneficial to replace dense linear layer in deep learning architectures with a more compact architecture (in terms of number of parameters), using truncated butterfly networks. This approach is justified using ideas from efficient matrix approximation theory from the last two decades. however, results in additional logarithmic depth to the network. This issue raises the question of whether the extra depth may harm convergence of gradient descent optimization. To start answering this question, we show, both empirically and theoretically, that in linear encoder-decoder networks in which the encoding is done using a butterfly network, this typically does not happen. To further demonstrate the utility of truncated butterfly networks, we consider a supervised learning approach as in Indyk et al. 
(2019), where we learn how to derive low rank approximations of a distribution of matrices by multiplying a pre-processing linear operator represented as a butterfly network, with weights trained using a sample of the distribution.\nFuture Work: The main open questions arising from the work are related to better understanding the optimization landscape of butterfly networks. The current tools for analysis of deep linear networks do not apply for these structures, and more theory is necessary. It would be interesting to determine whether replacing dense linear layers in any network, with butterfly networks as in Section 4.2 harms the convergence of the original matrix. Another direction would be to check empirically whether adding non-linear gates between the layers (logarithmically many) of a butterfly network improves the performance of the network. In the experiments in Section 6.1, we have replaced a single dense layer by our proposed architecture. It would be worthwhile to check whether replacing multiple dense linear layers in the different architectures harms the final accuracy. Similarly, it might be insightful to replace a convolutional layer by an architecture based on truncated butterfly network. Finally, since our proposed replacement reduces the number of parameters in the network, it might be possible to empirically show that the new network is more resilient to over-fitting." }, { "heading": "ACKNOWLEDGEMENT", "text": "This project has received funding from European Union’s Horizon 2020 research and innovation program under grant agreement No 682203 -ERC-[ Inf-Speed-Tradeoff]." }, { "heading": "A BUTTERFLY DIAGRAM FROM SECTION 1", "text": "Figure 6 referred to in the introduction is given here." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "The proof of the proposition will use the following well known fact (Lemma B.1 below) about FJLT (more generally, JL) distributions (see Ailon and Chazelle (2009); Ailon and Liberty (2009); Krahmer and Ward (2011)). Lemma B.1. Let x ∈ Rn be a unit vector, and let J ∈ Rk×n be a matrix drawn from an FJLT distribution. Then for all < 1 with probability at least 1− e−Ω(k 2):\n‖x− JTJx‖ ≤ . (2)\nBy Lemma B.1 we have that with probability at least 1− e−Ω(k1 2),\n‖x− JT1 J1x‖ ≤ ‖x‖ = . (3) Henceforth, we condition on the event ‖x−JT1 J1x‖ ≤ ‖x‖. Therefore, by the definition of spectral norm ‖W‖ of W : ‖Wx−WJT1 J1x‖ ≤ ‖W‖ . (4) Now apply Lemma B.1 again on the vector WJT1 J1x and transformation J2 to get that with probability at least 1− e−Ω(k2 2),\n‖WJT1 J1x− JT2 J2WJT1 J1x‖ ≤ ‖WJT1 J1x‖. (5) Henceforth, we condition on the event ‖WJT1 J1x − JT2 J2WJT1 J1x‖ ≤ ‖WJT1 J1x‖. To bound the last right hand side, we use the triangle inequality together with (4):\n‖WJT1 J1x‖ ≤ ‖Wx‖+ ‖W‖ ≤ ‖W‖(1 + ). (6) Combining (5) and (6) gives:\n‖WJT1 J1x− JT2 J2WJT1 J1x‖ ≤ ‖W‖(1 + ). (7)\nFinally,\n‖JT2 J2WJT1 J1x−Wx‖ = ‖(JT2 J2WJT1 J1x−WJT1 J1x) + (WJT1 J1x−Wx)‖ ≤ ‖W‖(1 + ) + ‖W‖ = ‖W‖ (2 + ) ≤ 3‖W‖ , (8)\nwhere the first inequality is from the triangle inequality together with (4) and (7), and the second inequality is from the bound on . The proposition is obtained by adjusting the constants hiding inside the Ω() notation in the exponent in the proposition statement." }, { "heading": "C PROOF OF THEOREM 1", "text": "We first note that our result continues to hold even if B in the theorem is replaced by any structured matrix. 
For example the result continues to hold if B is an `× n matrix with one non-zero entry per column, as is the case with a random sparse sketching matrix Clarkson and Woodruff (2009). We also compare our result with that Baldi and Hornik (1989); Kawaguchi (2016).\nComparison with Baldi and Hornik (1989) and Kawaguchi (2016): The critical points of the encoder-decoder network are analyzed in Baldi and Hornik (1989). Suppose the eigenvalues of Y XT (XXT )−1XY T are γ1 > . . . > γm > 0 and k ≤ m ≤ n. Then they show that corresponding to a critical point there is an I ⊆ [m] such that the loss at this critical point is equal to tr(Y Y T ) −∑ i∈I γi, and the critical point is a local/global minima if and only if I = [k]. Kawaguchi (2016) later generalized this to prove that a local minima is a global minima for an arbitrary number of hidden layers in a linear neural network if m ≤ n. Note that since ` ≤ n and m ≤ n in Theorem 1, replacing X by BX in Baldi and Hornik (1989) or Kawaguchi (2016) does not imply Theorem 1 as it is.\nNext, we introduce a few notation before delving into the proof. Let r = (Y − Y )T , and vec(r) ∈ Rmd is the entries of r arranged as a vector in column-first ordering, (∇vec(DT )L(Y ))T ∈ Rmk and (∇vec(ET )L(Y ))T ∈ Rk` denote the partial derivative of L(Y ) with respect to the parameters in vec(DT ) and vec(ET ) respectively. Notice that ∇vec(DT )L(Y ) and ∇vec(ET )L(Y ) are row vectors of size mk and k` respectively. Also, let PD denote the projection matrix of D, and hence if D is a matrix with full column-rank then PD = D(DT ·D)−1 ·DT . The n× n identity matrix is denoted as In, and for convenience of notation let X̃ = B ·X . First we prove the following lemma which gives an expression for D and E if ∇vec(DT )L(Y ) and ∇vec(ET )L(Y ) are zero. Lemma C.1 (Derivatives with respect to D and E).\n1. ∇vec(DT )L(Y ) = vec(r)T (Im ⊗ (E · X̃)T ), and\n2. ∇vec(ET )L(X) = vec(r)T (D ⊗ X̃)T\nProof. 1. Since L(Y ) = 12 vec(r) T · vec(r),\n∇vec(DT )L(Y ) = vec(r)T · ∇vec(DT )vec(r) = vec(r)T (vec(DT )(X̃T · ET ·DT )) = vec(r)T (Im ⊗ (E · X̃)T ) · ∇vec(DT )vec(DT ) = vec(r)T (Im ⊗ (E · X̃)T )\n2. Similarly,\n∇vec(ET )L(Y ) = vec(r)T · ∇vec(ET )vec(r) = vec(r)T (vec(ET )(X̃T · ET ·DT )) = vec(r)T (D ⊗ X̃T ) · ∇vec(ET )vec(ET ) = vec(r)T (D ⊗ X̃T )\nAssume the rank of D is equal to p. Hence there is an invertible matrix C ∈ Rk×k such that D̃ = D ·C is such that the last k−p columns of D̃ are zero and the first p columns of D̃ are linearly independent (via Gauss elimination). Let Ẽ = C−1 ·E. Without loss of generality it can be assumed D̃ ∈ Rd×p, and Ẽ ∈ Rp×d, by restricting restricting D̃ to its first p columns (as the remaining are\nzero) and Ẽ to its first p rows. Hence, D̃ is a full column-rank matrix of rank p, and DE = D̃Ẽ. Claims C.1 and C.2 aid us in the completing the proof of the theorem. First the proof of theorem is completed using these claims, and at the end the two claims are proved. Claim C.1 (Representation at the critical point).\n1. Ẽ = (D̃T D̃)−1D̃TY X̃T (X̃ · X̃T )−1\n2. D̃Ẽ = PD̃Y X̃ T (X̃ · X̃T )−1\nClaim C.2. 1. ẼBD̃ = (ẼBY X̃T ẼT )(ẼX̃X̃T ẼT )−1\n2. PD̃Σ = ΣPD̃ = PD̃ΣPD̃\nWe denote Σ(B) as Σ for convenience. Since Σ is a real symmetric matrix, there is an orthogonal matrix U consisting of the eigenvectors of Σ, such that Σ = U ∧ UT , where ∧ is a m × m diagonal matrix whose first ` diagonal entries are λ1, . . . , λ` and the remaining entries are zero. Let u1, . . . , um be the columns of U . 
Then for i ∈ [`], ui is the eigenvector of Σ corresponding to the eigenvalue λi, and {u`+1, . . . , udy} are the eigenvectors of Σ corresponding to the eigenvalue 0.\nNote that PUT D̃ = U T D̃(D̃TUTUD̃)−1D̃TU = UTPD̃U , and from part two of Claim C.2 we have\n(UPUT D̃U T )Σ = Σ(UPUT D̃U T ) (9)\nU · PUT D̃ ∧ U T = U ∧ PUT D̃U T (10) PUT D̃∧ = ∧PUT D̃ (11)\nSince PUT D̃ commutes with ∧, PUT D̃ is a block-diagonal matrix comprising of two blocks P1 and P2: the first block P1 is an ` × ` diagonal block, and P2 is a (m − `) × (m − `) matrix. Since PUT D̃ is orthogonal projection matrix of rank p its eigenvalues are 1 with multiplicity p and 0 with multiplicity m − p. Hence at most p diagonal entries of P1 are 1 and the remaining are 0. Finally observe that\nL(Y ) = tr((Y − Y )(Y − Y )T )\n= tr(Y Y T )− 2tr(Y Y T ) + tr(Y Y T ) = tr(Y Y T )− 2tr(PD̃Σ) + tr(PD̃ΣPD̃) = tr(Y Y T )− tr(PD̃Σ)\nThe second line in the above equation follows using the fact that tr(Y Y T ) = tr(Y Y T\n), the third line in the above equation follows by substituting Y = PD̃Y X̃\nT · (X̃ · X̃T )−1 · X̃ (from part two of Claim C.1), and the last line follows from part two of Claim C.2. Substituting Σ = U ∧ UT , and PD̃ = UPUT D̃U T in the above equation we have,\nL(Y ) = tr(Y Y T )− tr(UPUT D̃ ∧ U T )\n= tr(Y Y T )− tr(PUT D̃∧)\nThe last line the above equation follows from the fact that tr(UP ˜UTD ∧U T ) = tr(PUT D̃ ∧UTU) = tr(PUT D̃∧). From the structure of PUT D̃ and ∧ it follows that there is a subset I ⊆ [`], |I| ≤ p such that tr(PUT D̃∧) = ∑ i∈I λi. Hence, L(Y ) = tr(Y Y T )− ∑ i∈I λi.\nSince PD̃ = UPUT D̃U T , there is a p× p invertible matrix M such that\nD̃ = (U · V )I′ ·M , and Ẽ = M−1(V TUT )I′Y X̃T (X̃X̃T )−1\nwhere V is a block-diagonal matrix consisting of two blocks V1 and V2: V1 is equal to I`, and V2 is an (m− `)× (m− `) orthogonal matrix, and I ′ is such that I ⊆ I ′ and |I ′| = p. The relation for Ẽ in the above equation follows from part one of Claim C.1. Note that if I ′ ⊆ [`], then I = I ′, that is I consists of indices corresponding to eigenvectors of non-zero eigenvalues.\nRecall that D̃ was obtained by truncating the last k − p zero rows of DC, where C was a\nk × k invertible matrix simulating the Gaussian elimination. Let [M |Op×(k−p)] denoted the p × k matrix obtained by augmenting the columns of M with (k − p) zero columns. Then\nD = (UV )I′ [M |Op×(k−p)]C−1 .\nSimilarly, there is a p× (k − p) matrix N such that\nE = C[M−1N ]((UV )I′) TY X̃T (X̃X̃T )−1\nwhere [M −1 N ] denotes the k × p matrix obtained by augmenting the rows of M −1 with the rows of N . Now suppose I 6= [k], and hence I ′ 6= [k]. Then we will show that there are matrices D′ and E′ arbitrarily close to D and E respectively such that if Y ′ = D′E′X̃ then L(Y ′) < L(Y ). There is an a ∈ [k] \\ I ′, and b ∈ I ′ such that λa > λb (λb could also be zero). Denote the columns of the matrix UV as {v1, . . . , vm}, and observe that vi = ui for i ∈ [`] (from the structure of V ). For > 0 let u′b = (1 + 2)− 1 2 (vb + ua). Define U ′ as the matrix which is equal to UV except that the column vector vb in UV is replaced by u′b in U ′. Since a ∈ [k] ⊆ [`] and a /∈ I ′, va = ua and (U ′I′) TU ′I′ = Ip. Define\nD′ = U ′I′ [M |Op×(k−p)]C−1 , and E′ = C[M −1 N ](U ′ I′) TY X̃T (X̃X̃T )−1\nand let Y ′ = D′E′X̃ . 
Now observe that, D′E′ = U ′I′(UI′) TY X̃T (X̃X̃T )−1, and that\nL(Y ′) = tr(Y Y T )− ∑ i∈I λi − 2 1 + 2 (λa − λb) = L(Y )−\n2\n1 + 2 (λa − λb)\nSince can be set arbitrarily close to zero, it can be concluded that there are points in the neighbourhood of Y such that the loss at these points are less than L(Y ). Further, since L is convex with respect to the parameters in D (respectively E), when the matrix E is fixed (respectively D is fixed) Y is not a local maximum. Hence, if I 6= [k] then Y represents a saddle point, and in particular Y is local/global minima if and only if I = [k].\nProof of Claim C.1. Since ∇vec(ET )L(X) is equal to zero, from the second part of Lemma C.1 the following holds,\nX̃(Y − Y )TD = X̃Y TD − X̃Y TD = 0 ⇒ X̃X̃TETDTD = X̃Y TD\nTaking transpose on both sides\n⇒ DTDEX̃X̃T = DTY X̃T (12)\nSubstituting DE as D̃Ẽ in Equation 12, and multiplying Equation 12 by CT on both the sides from the left, Equation 13 follows.\n⇒ D̃T D̃ẼX̃X̃T = D̃TY X̃T (13)\nSince D̃ is full-rank, we have\nẼ = (D̃T D̃)−1D̃TY X̃T (X̃X̃T )−1. (14)\nand, D̃Ẽ = PD̃Y X̃ T (X̃X̃T )−1 (15)\nProof of Claim C.2. Since ∇vec(DT )L(Y ) is zero, from the first part of Lemma C.1 the following holds,\nEX̃(Y − Y )T = EX̃Y T − EX̃ · Y T = 0 ⇒ EX̃X̃TETDT = EX̃Y T (16)\nSubstituting ET ·DT as ẼT · D̃T in Equation 12, and multiplying Equation 16 by C−1 on both the sides from the left Equation 17 follows.\nẼX̃X̃T ẼT D̃T = ẼX̃Y T (17)\nTaking transpose of the above equation we have,\nD̃ẼX̃X̃T ẼT = Y X̃T ẼT (18)\nFrom part 1 of Claim C.1, it follows that Ẽ has full row-rank, and hence ẼX̃X̃T ẼT is invertible. Multiplying the inverse of ẼX̃X̃T ẼT from the right on both sides and multiplying ẼB from the left on both sides of the above equation we have,\nẼBD̃ = (ẼBY X̃T ẼT )(ẼX̃X̃T ẼT )−1 (19)\nThis proves part one of the claim. Moreover, multiplying Equation 18 by D̃T from the right on both sides\nD̃ẼX̃X̃T ẼT D̃T = Y X̃T ẼT D̃T\n⇒ (PD̃Y X̃ T (X̃X̃T )−1)(X̃X̃T )((X̃X̃T )−1X̃Y TPD̃) = Y X̃ T ((X̃X̃T )−1X̃Y T · PD̃) ⇒ PD̃Y X̃ T (X̃X̃T )−1X̃Y TPD̃ = Y X̃ T (X̃X̃T )−1X̃Y T · PD̃\nThe second line the above equation follows by substituting D̃Ẽ = PD̃Y X̃ T (X̃X̃T )−1 (from part 2 of Claim C.1). Substituting Σ = Y X̃T (X̃X̃T )−1X̃Y T in the above equation we have\nPD̃ΣPD̃ = Σ · PD̃ Since PT\nD̃ = PD̃, and Σ T = Σ, we also have ΣPD̃ = PD̃Σ." }, { "heading": "D ADDITIONAL TABLES AND PLOTS FROM SECTION 6", "text": "D.1 PLOTS FROM SECTION 6.1\nFigure 7 displays the number of parameters in the dense linear layer of the original model and in the replaced butterfly based network. Figure 9 reports the results for the NLP tasks done as part of experiment in Section 6.1. Figure 8 displays the number of parameter in the original model and the butterfly model. Figures 10 and 11 reports the training and inference times required for the original model and the butterfly model in each of the experiments. The training and and inference times in Figures 10 and 11 are averaged over 100 runs. Figure 12 is the same as the right part of Figure 1 but here we compare the test accuracy of the original and butterfly model for the the first 20 epochs.\nD.2 PLOTS FROM SECTION 6.2\nData Matrices: The data matrices are as in Table 2. Gaussian 1 and Gaussian 2 are Gaussian matrices with rank 32 and 64 respectively. 
Rank r Gaussian matrices are constructed as follows: r orthonormal vectors of size 1024 are sampled at random and the columns of the matrix are random linear combinations of these vectors determined by choosing the coefficients independently and uniformly at random from the Gaussian distribution with mean 0 and variance 0.01. The data matrix for MNIST is constructed as follows: each row corresponds to an image represented as a 28 ×\n28 matrix (pixels) sampled uniformly at random from the MNIST database of handwritten digits (LeCun and Cortes, 2010) which is extended to a 32× 32 matrix by padding numbers close to zero and then represented as a vector of size 1024 in column-first ordering8. Similar to the MNIST every row of the data matrix for Olivetti corresponds to an image represented as a 64× 64 matrix sampled uniformly at random from the Olivetti faces data set (Cambridge, 1994), which is represented as a vector of size 4096 in column-first ordering. Finally, for HS-SOD the data matrix is a 1024 × 768 matrix sampled uniformly at random from HS-SOD – a dataset for hyperspectral images from natural scenes (Imamoglu et al., 2018).\nFigure 13 reports the losses for the Gaussian 2, Olivetti, and Hyper data matrices.\n8Close to zero entries are sampled uniformly at random according to a Gaussian distribution with mean zero and variance 0.01." }, { "heading": "E MISSING PLOTS FROM SECTION 7", "text": "In this section we state a few additional cases that were done as part of the experiment in Section 7. Figure 14 compares the test errors of the different methods in the extreme case when k = 1. Figure 15 compares the test errors of the different methods for various values of `. Figure 16 shows the test error for ` = 20 and k = 10 during the training phase on HS-SOD. Observe that the butterfly\nlearned is able to surpass sparse learned after a merely few iterations. Finally Table 4 compares the test error for different values of ` and k." }, { "heading": "F BOUND ON NUMBER OF EFFECTIVE WEIGHTS IN TRUNCATED BUTTERFLY NETWORK", "text": "A butterfly network for dimension n, which we assume for simplicity to be an integral power of 2, is log n layers deep. Let p denote the integer log n. The set of nodes in the first (input) layer will be denoted here by V (0). They are connected to the set of n nodes V (1) from the next layer, and so on until the nodes V (p) of the output layer. Between two consecutive layers V (i) and V (i+1), there are 2n weights, and each node in V (i) is adjacent to exactly two nodes from V (i+1).\nWhen truncating the network, we discard all but some set S(p) ⊆ V (p) of at most ` nodes in the last layer. These nodes are connected to a subset S(p−1) ⊆ V (p−1) of at most 2` nodes from the\npreceding layer using at most 2` weights. By induction, for all i ≥ 0, the set of nodes S(p−i) ⊆ V (p−i) is of size at most 2i ·`, and is connected to the set S(p−i−1) ⊆ V (p−i−1) using at most 2i+1 ·` weights.\nNow take k = dlog2(n/`)e. By the above, the total number of weights that can participate in a path connecting some node in S(p) with some node in V (p−k) is at most\n2`+ 4`+ · · ·+ 2k` ≤ 4n .\nFrom the other direction, the total number of weights that can participate in a path connecting any node from V (0) with any node from V (p−k) is 2n times the number of layers in between, or more precisely:\n2n(p− k) = 2n(log2 n− dlog2(n/`)e) ≤ 2n(log2 n− log2(n/`) + 1) = 2n(log `+ 1) .\nThe total is 2n log `+ 6n, as required." } ]
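As a small companion to the Appendix F bound above, the following script counts, for a concrete n and ℓ, the butterfly weights that can lie on a path from an input to one of the ℓ surviving outputs, and compares the count to 2n log ℓ + 6n. It implements the backward-reachability argument of the proof directly; the function name and the random choice of surviving outputs are ours.

```python
import math
import random

def effective_weights(n, ell, seed=0):
    """Count butterfly weights on some input-to-surviving-output path."""
    p = int(math.log2(n))                       # number of edge layers
    reach = set(random.Random(seed).sample(range(n), ell))  # surviving outputs
    total = 0
    for i in reversed(range(p)):                # walk back from outputs to inputs
        total += 2 * len(reach)                 # each reachable head has 2 in-edges
        reach = {v for u in reach for v in (u, u ^ (1 << i))}  # their tails
    return total

n, ell = 1024, 32
print(effective_weights(n, ell), "<=", 2 * n * math.log2(ell) + 6 * n)
# the count stays well below the 2n log(ell) + 6n bound from Appendix F
```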
2020
null
SP:fb0eda1f20d9b0a63164e96a2bf9ab4bee365eea
[ "The paper considers the problem of partitioning the atoms (e.g., pixels of an image) of a reinforcement learning task to latent states (e.g., a grid that determines whether there exists furniture in each cell). The number of states grows exponentially with the number of cells of the grid. So the algorithms that are polynomial in the number of states are not efficient. The paper considers the factored block Markov decision process (MDP) model and adds a few more assumptions. Generally, this model and assumptions guarantee that the cells of the grid partition the atoms (i.e., each atom depends on only one cell), the atoms in a cell are dependent (in the probabilistic sense), the conditional probability of the parent value of the states and the action given the next state is 0 or 1 is separated (i.e., the difference is bounded away from zero), and the regressor classes that are used are realizable. The paper shows that this is enough to give an algorithm that partitioned the atoms in each step with high probability and its time complexity is polynomial in the number of cells and logarithmic in the number of atoms." ]
We propose a novel setting for reinforcement learning that combines two common real-world difficulties: the presence of observations (such as camera images) and factored states (such as the location of objects). In our setting, the agent receives observations generated stochastically from a latent factored state. These observations are rich enough to enable decoding of the latent state and remove partial observability concerns. Since the latent state is combinatorial, the size of the state space is exponential in the number of latent factors. We create a learning algorithm FactoRL (Fact-o-Rel) for this setting, which uses noise-contrastive learning to identify latent structures in emission processes and discover a factorized state space. We derive polynomial sample complexity guarantees for FactoRL which depend polynomially on the number of factors, and only very weakly on the size of the observation space. We also provide a guarantee of polynomial time complexity when given access to an efficient planning algorithm.
[ { "affiliations": [], "name": "Dipendra Misra" }, { "affiliations": [], "name": "Qinghua Liu" } ]
[ { "authors": [ "Alekh Agarwal", "Sham Kakade", "Akshay Krishnamurthy", "Wen Sun" ], "title": "Flambe: Structural complexity and representation learning of low rank mdps", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ronen I. Brafman", "Moshe Tennenholtz" ], "title": "R-MAX - A general polynomial time algorithm for near-optimal reinforcement learning", "venue": "The Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Richard Y Chen", "Szymon Sidor", "Pieter Abbeel", "John Schulman" ], "title": "UCB exploration via QEnsembles", "venue": null, "year": 2017 }, { "authors": [ "Simon S Du", "Akshay Krishnamurthy", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudík", "John Langford" ], "title": "Provably efficient RL with rich observations via latent state decoding", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Fei Feng", "Ruosong Wang", "Wotao Yin", "Simon S Du", "Lin Yang" ], "title": "Provably efficient exploration for reinforcement learning using unsupervised learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Carlos Guestrin", "Relu Patrascu", "Dale Schuurmans" ], "title": "Algorithm-directed exploration for modelbased reinforcement learning in factored mdps", "venue": "In International Conference on Machine Learning,", "year": 2002 }, { "authors": [ "Carlos Guestrin", "Daphne Koller", "Ronald Parr", "Shobha Venkataraman" ], "title": "Efficient solution algorithms for factored mdps", "venue": "Journal of Artificial Intelligence Research,", "year": 2003 }, { "authors": [ "Ji He", "Mari Ostendorf", "Xiaodong He", "Jianshu Chen", "Jianfeng Gao", "Lihong Li", "Li Deng" ], "title": "Deep reinforcement learning with a combinatorial action space for predicting popular reddit threads", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Thomas Jaksch", "Ronald Ortner", "Peter Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with Gumbel-Softmax", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Nan Jiang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford", "Robert E Schapire" ], "title": "Contextual decision processes with low Bellman rank are PAC-learnable", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Chi Jin", "Zeyuan Allen-Zhu", "Sebastien Bubeck", "Michael I Jordan" ], "title": "Is q-learning provably efficient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sham Kakade" ], "title": "On the Sample Complexity of Reinforcement Learning", "venue": "PhD thesis, Gatsby Computational Neuroscience Unit,", "year": 2003 }, { "authors": [ 
"Michael Kearns", "Daphne Koller" ], "title": "Efficient reinforcement learning in factored mdps", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 1999 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Hyoungseok Kim", "Jaekyeom Kim", "Yeonwoo Jeong", "Sergey Levine", "Hyun Oh Song" ], "title": "Emi: Exploration with mutual information", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "arXiv preprint arXiv:1802.05983,", "year": 2018 }, { "authors": [ "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford" ], "title": "PAC reinforcement learning with rich observations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Adrien Laversanne-Finot", "Alexandre Péré", "Pierre-Yves Oudeyer" ], "title": "Curiosity driven exploration of learned disentangled goal spaces", "venue": "arXiv preprint arXiv:1807.01521,", "year": 2018 }, { "authors": [ "Lihong Li", "Michael L Littman", "Thomas J Walsh", "Alexander L Strehl" ], "title": "Knows what it knows: a framework for self-aware learning", "venue": "Machine learning,", "year": 2011 }, { "authors": [ "Andrew L. Maas", "Awni Y. Hannun", "Andrew Y. Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In ICML Workshop on Deep Learning for Audio, Speech and Language Processing,", "year": 2013 }, { "authors": [ "Andrey Kolobov Mausam" ], "title": "Planning with markov decision processes: an ai perspective", "venue": null, "year": 2012 }, { "authors": [ "Ðord̄e Miladinović", "Muhammad Waleed Gondal", "Bernhard Schölkopf", "Joachim M Buhmann", "Stefan Bauer" ], "title": "Disentangled state space representations", "venue": "arXiv preprint arXiv:1906.03255,", "year": 2019 }, { "authors": [ "Dipendra Misra", "Mikael Henaff", "Akshay Krishnamurthy", "John Langford" ], "title": "Kinematic state abstraction and provably efficient rich-observation reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Near-optimal representation learning for hierarchical reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ian Osband", "Benjamin Van Roy" ], "title": "Near-optimal reinforcement learning in factored mdps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Rajat Sen", "Ananda Theertha Suresh", "Karthikeyan Shanmugam", "Alexandros G Dimakis", "Sanjay Shakkottai" ], "title": "Model-powered conditional independence test", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sahil Sharma", "Aravind Suresh", "Rahul Ramesh", "Balaraman Ravindran" ], "title": "Learning to factor policies and action-value functions: Factored action space representations for deep reinforcement learning", "venue": "arXiv preprint arXiv:1705.07269,", "year": 2017 }, { "authors": [ "Aravind 
Srinivas", "Michael Laskin", "Pieter Abbeel" ], "title": "Curl: Contrastive unsupervised representations for reinforcement learning", "venue": "arXiv preprint arXiv:2004.04136,", "year": 2020 }, { "authors": [ "Alexander L. Strehl", "Lihong Li", "Eric Wiewiora", "John Langford", "Michael L. Littman" ], "title": "PAC model-free reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2006 }, { "authors": [ "Alexander L Strehl", "Carlos Diuk", "Michael L Littman" ], "title": "Efficient structure learning in factored-state mdps", "venue": "In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Wen Sun", "Nan Jiang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford" ], "title": "Model-based rl in contextual decision processes: Pac bounds and exponential improvements over model-free approaches", "venue": "In Conference on Learning", "year": 2019 }, { "authors": [ "Martin Sundermeyer", "Ralf Schlüter", "Hermann Ney" ], "title": "Lstm neural networks for language modeling", "venue": "In Thirteenth annual conference of the international speech communication association,", "year": 2012 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "Exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Valentin Thomas", "Emmanuel Bengio", "William Fedus", "Jules Pondard", "Philippe Beaudoin", "Hugo Larochelle", "Joelle Pineau", "Doina Precup", "Yoshua Bengio" ], "title": "Disentangling the independently controllable factors of variation by interacting with the world", "venue": "arXiv preprint arXiv:1802.09484,", "year": 2018 }, { "authors": [ "Tsachy Weissman", "Erik Ordentlich", "Gadiel Seroussi", "Sergio Verdu", "Marcelo J Weinberger" ], "title": "Inequalities for the l1 deviation of the empirical distribution", "venue": "Hewlett-Packard Labs, Tech. Rep,", "year": 2003 }, { "authors": [ "Van Roy" ], "title": "polynomial samples in the number of parameters that encode the factored MDP (Osband", "venue": "Laversanne-Finot et al", "year": 2014 }, { "authors": [ "Du" ], "title": "2020) provide computationally and sample efficient algorithms for Block MDP which is a rich-observation setting with a latent non-factored state space. Nevertheless, this line of results crucially relies on the number of latent states being relatively small", "venue": null, "year": 2020 }, { "authors": [ "Kim" ], "title": "2020)) without theoretical guarantee. C INDEPENDENCE TESTING USING NOISE CONTRASTIVE ESTIMATION In this section, we introduce the independence testing algorithm, Algorithm 5 and provide its theoretic guarantees. Algorithm 5 will be used in Algorithm 2 as a subroutine for determining if two atoms", "venue": null, "year": 2020 }, { "authors": [ "Misra" ], "title": "δabs and c is a universal constant. Proof. This is a standard regression guarantee derived using Bernstein’s inequality with realizability", "venue": null, "year": 2020 }, { "authors": [ "√ ∆(nabs", "δabs", "|G" ], "title": "Coupling Distribution We introduce a coupling distribution following", "venue": null, "year": 2020 }, { "authors": [ "Du et al. Du" ], "title": "Appendix E for statement). Finally, the last step uses Assumption 1. 
We bound the two multiplicative terms below: We have D(š", "venue": null, "year": 2019 }, { "authors": [], "title": "A be any policy on real state space and let φ̂ : Ŝ → A be the induced policy on learned state space given by φ̂(ŝ) = φ ◦ θ(ŝ) = φ(θ(ŝ)) for any ŝ ∈ Ŝ. We showed in Theorem 13 that T̂ and T have small L1 distance under the bijection θ", "venue": null, "year": 2019 }, { "authors": [ "Weissman" ], "title": "Let P be a probability distribution over a discrete set of size a", "venue": "Let X = X1,", "year": 2003 }, { "authors": [ "Du" ], "title": "For any a, b, c, d > 0 with a ≤ b and c ≤ d", "venue": null, "year": 2021 }, { "authors": [ "max{b" ], "title": "They state their Lemma for a specific event (E = α(ŝ) in their notation) but this choice of event is not important and their proof holds for any event", "venue": null, "year": 2019 }, { "authors": [ "Misra" ], "title": "We implement the model class G for learning state decoder following suggestion", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Most reinforcement learning (RL) algorithms scale polynomially with the size of the state space, which is inadequate for many real world applications. Consider for example a simple navigation task in a room with furniture where the set of furniture pieces and their locations change from episode to episode. If we crudely approximate the room as a 10× 10 grid and consider each element in the grid to contain a single bit of information about the presence of furniture, then we end up with a state space of size 2100, as each element of the grid can be filled independent of others. This is intractable for RL algorithms that depend polynomially on the size of state space.\nThe notion of factorization allows tractable solutions to be developed. For the above example, the room can be considered a state with 100 factors, where the next value of each factor is dependent on just a few other parent factors and the action taken by the agent. Learning in factored Markov Decision Processes (MDP) has been studied extensively (Kearns & Koller, 1999; Guestrin et al., 2003; Osband & Van Roy, 2014) with tractable solutions scaling linearly in the number of factors and exponentially in the number of parent factors whenever planning can be done efficiently.\nHowever, factorization alone is inadequate since the agent may not have access to the underlying factored state space, instead only receiving a rich-observation of the world. In our room example, the agent may have access to an image of the room taken from a megapixel camera instead of the grid representation. Naively, treating each pixel of the image as a factor suggests there are over a million factors and a prohibitively large number of parent factors for each pixel. Counterintuitively, thinking of the observation as the state in this way leads to the conclusion that problems become harder as the camera resolution increases or other sensors are added. It is entirely possible, that these pixels (or more generally, observation atoms) are generated by a small number of latent factors with a small number of parent factors. This motivates us to ask: can we achieve PAC RL guarantees that depend polynomially on the number of latent factors and very weakly (e.g., logarithmically) on the size of observation space? Recent work has addressed this for a rich-observation setting with a non-factored latent state space when certain supervised learning problems are tractable (Du et al., 2019; Misra et al., 2020; Agarwal et al., 2020). However, addressing the rich-observation setting with a latent factored state space has remained elusive. Specifically, ignoring the factored structure in the latent space or treating observation atoms as factors yields intractable solutions.\n∗Correspondence at: dimisra@microsoft.com\nContributions. We combine two threads of research on rich-observation RL and factored MDP by proposing a new problem setup called Factored Block MDP (Section 2). In this setup, observations are emitted by latent states that obey the dynamics of a factored MDP. We assume observations to be composed of atoms (which can be pixels for an image) that are emitted by the latent factors. A single factor can emit a large number of atoms but no two factors can control the same atom. Following existing rich-observation RL literature, we assume observations are rich enough to decode the current latent state. 
We introduce an algorithm FactoRL that achieves the desired guarantees for a large class of Factored Block MDPs under certain computational and realizability assumptions (Section 4). The main challenge that FactoRL handles is to map atoms to the parent factor that emits them. We achieve this by reducing the identification problem to solving a set of independence test problems with distributions satisfying certain properties. We perform independence tests in a domain-agnostic setting using noise-contrastive learning (Section 3). Once we have mapped atoms to their parent factors, FactoRL then decodes the factors, estimates the model, recovers the latent structure in the transition dynamics, and learns a set of exploration policies. Figure 1 shows the different steps of FactoRL. This provides us with enough tools to visualize the latent dynamics, and plan for any given reward function. Due to the space limit, we defer the discussion of related work to Appendix B." }, { "heading": "To the best of our knowledge, our work represents the first provable solution to rich-observation RL with a combinatorially large latent state space.", "text": "" }, { "heading": "2 THE FACTORED BLOCK MDP SETTING", "text": "There are many possible ways to add rich observations to a factored MDP resulting in inapplicability or intractability. Our goal here is to define a problem setting that is tractable to solve and covers potential real-world problems. We start with the definition of Factored MDP (Kearns & Koller, 1999), but first review some useful notation that we will be using:\nNotations: For any n ∈ N, we use [n] to denote the set {1, 2, · · · , n}. For any ordered set (or a vector) U of size n, and an ordered index set I ⊆ [n] and length k, we use the notation U [I] to denote the ordered set (U [I[1]],U [I[2]], · · · ,U [I[k]]). Definition 1. A Factored MDP (S,A, T,R,H) consists of a d-dimensional discrete state space S ⊆ {0, 1}d, a finite action space A, an unknown transition function T : S × A → ∆(S), an unknown reward function R : S × A → [0, 1] and a time horizon H . Each state s ∈ S consists of d factors with the ith factor denoted as s[i]. The transition function satisfies T (s′ | s, a) =∏d i=1 Ti(s\n′[i] | s[pt(i)], a) for every s, s′ ∈ S and a ∈ A, where Ti : {0, 1}|pt(i)| ×A → ∆({0, 1}) defines a factored transition distribution and a parent function pt : [d]→ 2[d] defines the set of parent factors that can influence a factor at the next timestep.\nWe assume a deterministic start state. We also assume, without loss of generality, that each state and observation is reachable at exactly one time step. This can be easily accomplished by concatenating the time step information to state and observations. This allows us to write the state space as S = (S1,S2, · · · ,SH) where Sh is the set of states reachable at time step h.\nA natural question to ask here is why we assume factored transition. In tabular MDPs, the lower bound for sample complexity scales linearly w.r.t. the size of the state set (Kakade, 2003). If we do not assume a factorized transition function then we can encode an arbitrary MDP with a state space of size 2d, which would yield a lower bound of Ω(2d) rendering the setting intractable. Instead, we will prove sample complexity guarantees for FactoRL that scales in number of factors as dO(κ) where κ := maxi∈[d] |pt(i)| is the size of the largest parent factor set. 
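As a concrete illustration of the factorization in Definition 1, the sketch below (plain Python; the parent function pt, the number of factors d, and the per-factor tables are hypothetical choices made for illustration, not taken from the paper) represents a factored MDP by d small conditional tables instead of one 2^d × 2^d table, and evaluates T(s′ | s, a) as the product of per-factor terms. Such a model is described by at most d · 2^κ · |A| numbers, which is the kind of object the d^O(κ) guarantees are measured against.

```python
import itertools
import random

d = 4                                            # number of binary state factors
actions = [0, 1]                                 # action space A
pt = {0: (0,), 1: (0, 1), 2: (2,), 3: (2, 3)}    # hypothetical parent function, |pt(i)| <= kappa = 2

random.seed(0)
# per-factor conditionals T_i(s'[i] = 1 | s[pt(i)], a), filled with arbitrary probabilities
T = {(i, parents, a): random.random()
     for i in range(d)
     for parents in itertools.product([0, 1], repeat=len(pt[i]))
     for a in actions}

def transition_prob(s, a, s_next):
    """T(s' | s, a) = prod_i T_i(s'[i] | s[pt(i)], a), as in Definition 1."""
    prob = 1.0
    for i in range(d):
        p_one = T[(i, tuple(s[j] for j in pt[i]), a)]
        prob *= p_one if s_next[i] == 1 else 1.0 - p_one
    return prob

def sample_next(s, a):
    """Sample each factor independently given only its parents and the action."""
    return [int(random.random() < T[(i, tuple(s[j] for j in pt[i]), a)]) for i in range(d)]

s = [0, 1, 0, 1]
print(transition_prob(s, 0, sample_next(s, 0)))
```

Nothing here enumerates the 2^d joint states; that is exactly the structure FactoRL has to exploit while only observing emissions of the latent factors.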
The dependence of κ in the exponent is unavoidable as we have to find the parent factors from all possible ( d κ ) combinations, as well as learn the model for all possible values of the parent factor. However, for real-world problems we expect κ to be a small constant such as 2. This yields significant improvement, for example, if κ = 2 and d = 100 then dκ = 100 while 2d ≈ 1030. Based on the definition of Factored MDP, we define the main problem setup of this paper, called Factored Block MDP, where the agent does not observe the state but instead receives an observation containing enough information to decode the latent state.\nDefinition 2. A Factored Block MDP consists of an observation space X = X m and a latent state space S ⊆ {0, 1}d. A single observation x ∈ X is made of m atoms with the kth denoted by x[k] ∈ X . Observations are generated stochastically given a latent state s ∈ S according to a factored emission function q(x | s) = ∏di=1 qi(x[ch(i)] | s[i]) where qi : {0, 1} → ∆(X |ch(i)|) and ch : [d]→ 2[m] is a child function satisfying ch(i)∩ch(j) = ∅ whenever i 6= j. The emission function satisfies the disjointness property: for every i ∈ [d], we have supp(qi(· | 0)) ∩ supp (qi(· | 1)) = ∅.1 The dynamics of the latent state space follows a Factored MDP (S,A, T,R,H), with parent function pt and a deterministic start state.\nThe notion of atoms generalizes commonly used abstractions. For example, if the observation is an image then atoms can be individual pixels or superpixels, and if the observation space is a natural language text then atoms can be individual letters or words. We make no assumption about the structure of the atom space X or its size, which can be infinite. An agent is responsible for mapping each observation x ∈ X to individual atoms (x[1], · · · , x[m]) ∈X m. For the two examples above, this mapping is routinely performed in practice. If observation is a text presented to the agent as a string, then it can use off-the-shelf tokenizer to map it to sequence of tokens (atoms). Similar to states, we assume the set of observations reachable at different time steps is disjoint. Additionally, we also allow the parent (pt) and child function (ch) to change across time steps. We denote these functions at time step h by pth and chh.\nThe disjointness property was introduced in Du et al. (2019) for Block MDPs—a class of richobservation non-factorized MDPs. This property removes partial observability concerns and enables tractable learning. We expect this property to hold in real world problems whenever sufficient sensor data is available to decode the state from observation. For example, disjointness holds true for the navigation task with an overhead camera in Figure 1. In this case, the image provides us with enough information to locate all objects in the room, which describes the agent’s state.. Disjointness allows us to define a decoder φ?i : X\n|ch(i)| → {0, 1} for every factor i ∈ [d], such that φ?i (x[ch(i)]) = s[i] if x[ch(i)] ∈ supp (qi(. | s[i])). We define a shorthand φ?i (x) = φ?i (x[ch(i)]) whenever ch is clear from the context. Lastly, we define the state decoder φ? : X → {0, 1}d where φ?(x)[i] = φ?i (x). The agent interacts with the environment by taking actions according to a policy π : X → ∆(A). These interactions consist of episodes {s1, x1, a1, r1, s2, x2, a2, r2, · · · , aH , sH} with s1 = ~0, xh ∼ q(. | sh), rh = R(xh, ah), and sh+1 ∼ T (. | sh, ah). The agent never observes {s1, · · · , sH}. Technical Assumptions. 
We make two assumptions that are specific to the FactoRL algorithm. The first is a margin assumption on the transition dynamics that enables us to identify different values of a factor. This assumption was introduced by Du et al. (2019), and we adapt it to our setting.\nAssumption 1 (Margin Assumption). For every h ∈ {2, 3, · · · , H}, i ∈ [d], let ui be the uniform distribution jointly over actions and all possible reachable values of sh−1[pt(i)]. Then we assume: ‖Pui(·, · | sh[i] = 1)− Pui(·, · | sh[i] = 0)‖TV ≥ σ where Pui(sh−1[pt(i)], a | sh[i]) is the backward dynamics denoting the probability over parent values and last action given sh[i] and roll-in distribution ui, and σ > 0 is the margin.\n1The notation supp(p) denotes the support of the distribution p. Formally, supp(p) = {z | p(z) > 0}.\nAssumption 1 captures a large set of problems, including all deterministic problems for which the value of σ is 1. Assumption 1 helps us identify the different values of a factor but it does not help with mapping atoms to the factors from which they are emitted. In order to identify if two atoms come from the same factor, we make the following additional assumption to measure their dependence. Assumption 2 (Atom Dependency Bound). For any h ∈ [H], u, v ∈ [m] and u 6= v, if ch−1(u) = ch−1(v), i.e., atoms xh[u] and xh[v] have the same factor. Then under any distribution D ∈ ∆(Sh) we have ‖PD(xh[u], xh[v])− PD(xh[u])PD(xh[v])‖TV ≥ βmin.\nDependence assumption states that atoms emitted from the same factor will be correlated. This is true for many real-world problems. For example, consider a toy grid-based navigation task. Each state factor s[i] represents a cell in the grid which can be empty (s[i] = 0) or occupied (s[i] = 1). In the latter case, a randomly sampled box from the set {red box, yellow box, black box}, occupies its place. We expect Assumption 2 to hold in this case as pixels emitted from the same factor come from the same object and hence will be correlated. More specifically, if one pixel is red in color, then another pixel from the same cell will also be red as the object occupying the cell is a red box. This assumption does not remove the key challenge in identifying factors. As atoms from different factors can still be dependent due to actions and state distributions from previous time steps.\nModel Class. We use two regressor classes F and G. The first regressor class F : X ×X → [0, 1] takes a pair of atoms and outputs a scalar in [0, 1]. To define the second class, we first define a decoder class Φ : X ∗ → {0, 1}. We allow this class to be defined on any set of atoms. This is motivated by empirical research where commonly used neural network models operate on inputs of arbitrary lengths. For example, the LSTM model can operate on a text of arbitrary length (Sundermeyer et al., 2012). However, this is without loss of generality as we can define a different model class for different numbers of atom. We also define a model class U : X × A × {0, 1} → [0, 1]. Finally, we define the regressor class G : X × A×X ∗ → [0, 1] as {(x, a, x̌) 7→ u(x, a, φ(x̌)) | u ∈ U , φ ∈ Φ}. We assume F and G are finite classes and derive sample complexity guarantees which scale as log |F| and log |G|. However, since we only use uniform convergence arguments extending the guarantees to other statistical complexity measures such as Rademacher complexity is straightforward. Let Πall : S → A denote the set of all non-stationary policies of this form. 
We then define the class of policies Π : X → A by {x 7→ ϕ(φ?(x)) | ∀ϕ ∈ Πall}, which we use later to define our task. We use Pπ[E ] to denote probability of an event E under the distribution over episodes induced by policy π. Computational Oracle. We assume access to two regression oracles REG for model classes F and G. Let D1 be a dataset of triplets (x[u], x[v], y) where u, v denote two different atoms and y ∈ {0, 1}. Similarly, let D2 be a dataset of quads (x, a, x′, y) where x ∈ X , a ∈ A, x̌ ∈ X ∗, and y ∈ {0, 1}. Lastly, let ÊD[·] denote the empirical mean over dataset D. The two computational oracles compute:\nREG(D1,F)=arg min f∈F\nÊD1 [ (f(x[u], x[v])− y)2 ] , REG(D2,G)=arg min g∈GN ÊD2 [ (g(x, a, x̌)− y)2 ] .\nWe also assume access to a ∆pl-optimal planning oracle planner. Let Ŝ = (Ŝ1, · · · , Ŝh) be a learned state space and T̂ = (T̂1, · · · , T̂H) with T̂h : Ŝh−1 ×A → ∆(Ŝh) be the learned dynamics, and R̂ : Ŝ × A → [0, 1] be a given reward function. Let ϕ : Ŝ → A be a policy and V (ϕ; T̂ , R̂) be the policy value. Then for any ∆pl > 0 the output of planner ϕ̂ = planner(T̂ , R̂,∆pl) satisfies V (ϕ̂; T̂ , R̂) ≥ supϕ V (ϕ; T̂ , R̂)−∆pl, where supremum is taken over policies of type Ŝ → A. Task Definition. We focus on a reward-free setting with the goal of learning a state decoder and estimating the latent dynamics T . Since the state space is exponentially large, we cannot visit every state. However, the factorization property allows us to estimate the model by reaching factor values. In fact, we show that controlling the value of at most 2κ factors is sufficient for learning the model. Let C≤k(U) denote the space of all sets containing at most k different elements selected from the set U including ∅. We define the reachability probability ηh(K,Z) for a given h ∈ [H], K ⊆ [d], and Z ∈ {0, 1}|K|, and the reachability parameter ηmin as:\nηh(K,Z) := sup π∈ΠNS Pπ(sh[K] = Z), ηmin := inf h∈[H] inf s∈Sh inf K∈C≤2κ([d]) ηh(K, s[K]).\nOur sample complexity scales polynomially with η−1min. Note that we only require that if sh[K] = Z is reachable, then it is reachable with at least ηmin probability, i.e., either ηh(K,Z) = 0 or it is at least ηmin. These requirements are similar to those made by earlier work for non-factored state\nspace (Du et al., 2019; Misra et al., 2020). The key difference being that instead of requiring every state to be reachable with ηmin probability, we only require a small set of factor values to be reachable. For reference, if every policy induces a uniform distribution over S = {0, 1}d, then probability of visiting any state is 2−d but the probability of two factors taking certain values is only 0.25. This gives us a more practical value for ηmin.\nBesides estimating the dynamics and learning a decoder, we also learn an α-policy cover to enable exploration of different reachable values of factors. We define this below:\nDefinition 3 (Policy Cover). A set of policies Ψ is an α-policy cover of Sh for any α > 0 and h if:\n∀s ∈ Sh,K ∈ C≤2κ([d]), sup π∈Ψ Pπ(sh[K] = s[K]) ≥ αηh(K, s[K])." }, { "heading": "3 DISCOVERING EMISSION STRUCTURE WITH CONTRASTIVE LEARNING", "text": "Directly applying the prior work (Du et al., 2019; Misra et al., 2020) to decode a factored state from observation results in failure, as the learned factored state need not obey the transition factorization. Instead, the key high-level idea of our approach is to first learn the latent emission structure ch, and then use it to decode each factor individually. 
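To make the object being learned concrete, the following minimal sketch shows a factored emission process in the sense of Definition 2: each factor s[i] controls only the atoms in ch(i), and distinct factors emit disjoint sets of atoms. The child function ch below is a hypothetical choice, and bounded noise is used so that the disjointness property holds exactly (the paper's own proof-of-concept experiment uses Gaussian noise). This ch is precisely the latent structure that the first stage of FactoRL must recover from observations alone.

```python
import random

d = 3                                    # number of latent factors
ch = {0: [0, 1], 1: [2, 3], 2: [4, 5]}   # hypothetical child function: factor -> atom indices
m = sum(len(atoms) for atoms in ch.values())

def emit(s, noise=0.1):
    """q(x | s) = prod_i q_i(x[ch(i)] | s[i]): the atoms of factor i depend on s[i] only."""
    x = [0.0] * m
    for i in range(d):
        for k in ch[i]:
            # bounded noise (< 0.5) keeps supp(q_i(.|0)) and supp(q_i(.|1)) disjoint
            x[k] = float(s[i]) + random.uniform(-noise, noise)
    return x

def decode(x):
    """A ground-truth decoder phi*: disjoint supports make thresholding each factor's atoms exact."""
    return [int(sum(x[k] for k in ch[i]) / len(ch[i]) > 0.5) for i in range(d)]

s = [1, 0, 1]
assert decode(emit(s)) == s
```

Atoms emitted by the same factor move together (Assumption 2), while atoms of different factors can be decoupled by fixing the factors' parents, which is what the independence tests of Section 3 exploit.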
We next discuss our approach for learning ch.\nReducing Identification of Latent Emission Structure to Independence Tests. Assume we are able to perfectly decode the latent state and estimate the transition model till time step h− 1. Our goal is to infer the latent emission structure chh at time step h, which is equivalent to: given an arbitrary pair of atoms u and v, determine if they are emitted from the same factor or not. This is challenging since we cannot observe or control the latent state factors at time step h.\nLet i = ch−1(u) and j = ch−1(v) be the factors that emit x[u] and x[v]. If i = j, then Assumption 2 implies that these atoms are dependent on each other for any roll-in distribution D ∈ ∆(Sh−1 ×A) over previous state and action. However, if i 6= j then deterministically setting the previous action and values of the parent factors pt(i) or pt(j), makes x[u] and x[v] independent. For the example in Figure 1, fixing the value of s[1], s[2] and a would make x[u] and x[u′] independent of each other.\nThis observation motivates us to reduce this identification problem to performing independence tests with different roll-in distributions D ∈ ∆(Sh−1 ×A). Naively, we can iterate over all subsets K ∈ C≤2κ([d]) where for each K we create a roll-in distribution such that the values of sh−1[K] and the action ah−1 are fixed, and then perform independence test under this distribution. If two atoms are independent then there must exist a K that makes them independent. Otherwise, they should always be dependent by Assumption 2.\nHowever, there are two problems with this approach. Firstly, we do not have access to the latent states but only a decoder at time step h− 1. Further, it may not even be possible to find a policy that can set the values of factors deterministically. We later show that our algorithm FactoRL can learn a decoder that induces a bijection between learned factors and values, and the real factors and values. Therefore, maximizing the probability of ÊK;Z = {φ̂h−1(xh−1)[K] = Z} for a set of learned factors K and their values Z , implicitly maximizes the probability of EK′;Z′ = {sh−1[K′] = Z ′} for corresponding real factors K′ and their values Z ′. Since the event ÊK;Z is observable we can use rejection sampling to increase its probability sufficiently close to 1 which makes the probability of EK′;Z′ close to 1. The second problem is to perform independence tests in a domain agnostic setting. Directly estimating mutual information I(x[u];x[v]) can be challenging. Instead, we propose an oraclized independence test that reduces the problem to binary classification using noise-contrastive learning.\nOraclized Independent Test. Here, we briefly sketch the main idea of our independence test scheme and defer the details to Appendix C. We comment that the high-level idea of our independence testing subroutine is similar to Sen et al. (2017). Suppose we want to test if two random variables Y and Z are independent. Firstly, we construct a dataset in the following way: sample a Bernoulli random variable w ∼ Bern(1/2), and two pairs of independent realizations (y(1), z(1)) and (y(2), z(2)); if w = 1, add (y(1), z(1), w) to the dataset, and (y(1), z(2), w) otherwise. We repeat the sampling procedure n times and obtain a dataset {(yi, zi, wi)}ni=1. Then we can fit a classifier that predicts the value of wi using (yi, zi). If Y and Z are independent, then (yi, zi) will provide no information about wi and thus no classifier can do better than random guess. 
However, if Y and Z are dependent, then\nthe Bayes optimal classifier would perform strictly better than random guess. As a result, by looking at the training loss of the learned classifier, we can determine whether Y and Z are dependent or not.\n4 FactoRL: REINFORCEMENT LEARNING IN FACTORED BLOCK MDPS\nIn this section, we present the main algorithm FactoRL (Algorithm 1). It takes as input the model classes F ,G, failure probability δ > 0, and five hyperparameters σ, ηmin, βmin ∈ (0, 1) and d, κ ∈ N.2 We use these hyperparamters to define three sample sizes nind, nabs, nest and rejection sample frequency k. For brevity, we defer the exact values of these constants to Appendix D.7. FactoRL returns a learned decoder φ̂h : X → {0, 1}dh for some dh ∈ [m], an estimated transition model T̂h, learned parent p̂th and child functions ĉhh, and a 1/2-policy cover Ψh of Sh for every time step h ∈ {2, 3, · · · , H}. We use ŝh to denote the learned state at time step h. Formally, ŝh = (φ̂h1(xh), · · · , φ̂hdh(xh)). In the analysis of FactoRL, we show that dh−1 = d, and ĉhh is equivalent to chh up to permutation with high probability. Further, we show that φ̂h and ĉhh together learn a bijection between learned factors and their values and real factors and their values.\nFactoRL operates inductively over the time steps (Algorithm 1, line 2-8). In the hth iteration, the algorithm performs four stages of learning: identifying the latent emission structure, decoding the factors, estimating the model, and learning a policy cover. We describe these below.\nAlgorithm 1 FactoRL(F ,G, δ, σ, ηmin, βmin, d, κ). RL in Factored Block MDPs. 1: Initialize Ψh = ∅ for every h ∈ [H] and φ̂1 = X → {0}. Set global constants nind, nabs, nest, k. 2: for h ∈ {2, 3, · · · , H} do 3: ĉhh = FactorizeEmission(Ψh−1, φ̂h−1,F) // stage 1: discover latent emission structure 4: φ̂h = LearnDecoder(G,Ψh−1, ĉhh) // stage 2: learn a decoder for factors 5: T̂h, p̂th = EstModel(Ψh−1, φ̂h−1, φ̂h) // stage 3: find latent pth and estimate model 6: for I ∈ C≤2κ([d]),Z ∈ {0, 1}|I| do 7: ϕ̂hIZ = planner(T̂ , RhIZ ,∆pl) where RhIZ := 1{ŝh[I] = Z} // stage 4: planning 8: If V (ϕ̂hIZ ; T̂ , RhIZ) ≥ 3ηmin/4 then Ψh ← Ψh ∪ {ϕ̂hIZ ◦ φ̂h}\nreturn {ĉhh, φ̂h, T̂h, p̂th,Ψh}Hh=2\nIdentifying Latent Emission Process. The FactorizeEmission collects a dataset of observations for every policy in Ψh−1 and action a ∈ A (Algorithm 2, line 1-4). Policies in Ψh−1 are of the type πI;Z where I ∈ C≤2κ([dh−1]) and Z ∈ {0, 1}|I|. We can inductively assume πI;Z to be maximizing the probability of EI;Z = {ŝh−1[I] = Z}. If our decoder is accurate enough, then we hope that maximizing the probability of this event in turn maximizes the probability of fixing the values of a set of real factors. However, it is possible that PπI;Z (ŝh−1[I] = Z) is only O(ηmin). Therefore, as explained earlier, we use rejection sampling to drive the probability of this event close to 1. Formally, we define a procedure RejectSamp(πI;Z , EI;Z , k) which rolls-in at time step h− 1 with πI;Z to observe xh−1 (line 3). If the event EI;Z holds for xh−1 then we return xh−1, otherwise, we repeat the procedure. If we fail to satisfy the event k times then we return the last sample. We use this to define our main sampling procedure xh ∼ DI,Z,a := RejectSamp(πI;Z , EI;Z , k) ◦ a which first samples xh−1 using the rejection sampling procedure and then takes action a to observe xh. 
We collect a dataset of observation pairs (x(1), x(2)) sampled independently from DI,Z,a.\nFor every pair of atoms u, v ∈ [m], we calculate if they are independent under the distribution induced by DI,Z,a using IndTest with dataset DI,Z,a (line 5-7). We share the dataset across atoms for sample efficiency. If there exists at least one (I,Z, a) triple such that we evaluate x[u], x[v] to be independent, then we mark these atoms as coming from different factors. Intuitively, such an I would contain parent factors of at least ch−1h (u) or ch −1 h (v). If no such I exists then we mark these atoms as being emitted from the same factor.\n2Our analysis can use any non-zero lower bound on ηmin, βmin, σ and an upper bound on d and κ.\nAlgorithm 2 FactorizeEmission(Ψh−1, φ̂h−1,F). 1: for (πI;Z , a) ∈ Ψh−1 ×A and i ∈ [nind] do 2: Define EI;Z := 1{φ̂h−1(xh−1)[I] = Z} 3: Sample x(1)h , x (2) h ∼ RejectSamp(πI;Z , EI;Z , k) ◦ a // rejection sampling procedure\n4: DI;Z;a ← DI;Z;a ∪ {(x(1)h , x (2) h )} // initialize DI;Z;a = ∅\n5: for u ∈ {1, 2, · · · ,m− 1} and v ∈ {u+ 1, · · · ,m} do 6: Mark u, v as coming from the same factor, i.e., ĉh −1 h (u) = ĉh −1 h (v) if ∀(I,Z, a) 7: the oraclized independence test finds xh[u], xh[v] as dependent using DI;Z;a and F return ĉhh // label ordering of parents does not matter.\nAlgorithm 3 LearnDecoder(G,Ψh−1, ĉhh). Child function has type ĉhh : [dh]→ 2[m]\n1: for i in [dh], define ω = ĉhh(i),D = ∅ do 2: for nabs times do // collect a dataset of real (y = 1) and imposter (y = 0) transitions 3: Sample (x(1), a(1), x′(1)), (x(2), a(2), x′(2)) ∼ Unf(Ψh−1) ◦ Unf(A) and y ∼ Bern( 12 ) 4: If y = 1 then D ← D ∪ (x(1), a(1), x′(1)[ω], y) else D ← D ∪ (x(1), a(1), x′(2)[ω], y) 5: ûi, φ̂i = REG(D,G) // train the decoder using noise-contrastive learning\nreturn φ̂ : X → {0, 1}dh where for any x ∈ X and i ∈ [dh] we have φ̂(x)[i] = φ̂i(x[ĉhh(i)]).\nDecoding Factors. LearnDecoder partitions the set of atoms into groups based on the learned child function ĉhh (Algorithm 3). For the ith group ω, we learn a decoder φ̂hi : X ? → {0, 1} by adapting the prediction problem of Misra et al. (2020) to Factored Block MDP setting. We define a sampling procedure (x, a, x′) ∼ Unf(Ψh−1) ◦ Unf(A) where x is observed after roll-in with a uniformly selected policy in Ψh−1 till time step h− 1, action a is taken uniformly, and x′ ∼ T (· | x, a) (line 3). We collect a dataset D of real and imposter transitions. A single datapoint in D is collected by sampling two independent transitions (x(1), a(1), x′(1)), (x(2), a(2), x′(2)) ∼ Unf(Ψh−1) ◦ Unf(A) and a Bernoulli random variable y ∼ Bern(1/2). If y = 1 then we add the real transition (x(1), a(1), x′(1)[ω], y) to D, otherwise we add the imposter transition (x(1), a(1), x′(2)[ω], y) (line 4). The key difference from Misra et al. (2020) is our use x′[ω] instead of x′ which allows us to decode a specific latent factor. We train a model to predict the probability that a given transition (x, a, x′[ω]) is real by solving a regression task with model class G (line 5). The bottleneck structure of G allows us to recover a decoder φ̂i from the learned model. The algorithm also checks for the special case where a factor takes a single value. If it does, then we return the decoder that always outputs 0, otherwise we stick with φ̂i. For brevity, we defer the details of this special case to Appendix D.2.2. The decoder for the hth timestep is given by composition of decoders for each group.\nAlgorithm 4 EstModel(Ψh−1, φ̂h−1, φ̂h). 
1: Collect dataset D of nest triplets (x, a, x′) ∼ Unf(Ψh−1) ◦ Unf(A)\n2: for I, J ∈ C≤κ([dh−1]) satisfying I ∩ J = ∅ do\n3: Estimate P̂(ŝh[k] | ŝh−1[I], ŝh−1[J ], a) from D using φ̂, for all a ∈ A, k ∈ [dh].\n4: For every k define p̂th(k) as the solution to the following (where we bind ŝ′ = ŝh and ŝ = ŝh−1):\n\nargmin_I max_{u, J1, J2, w1, w2, a} ‖ P̂(ŝ′[k] | ŝ[I] = u, ŝ[J1] = w1, a) − P̂(ŝ′[k] | ŝ[I] = u, ŝ[J2] = w2, a) ‖_TV\n\n5: Define T̂h(ŝ′ | ŝ, a) = ∏_k P̂(ŝ′[k] | ŝ[p̂th(k)], a) and return T̂h, p̂th\n\nEstimating the Model. The EstModel routine first collects a dataset D of nest independent transitions (x, a, x′) ∼ Unf(Ψh−1) ◦ Unf(A) (Algorithm 4, line 1). We iterate over two disjoint sets of factors I, J of size at most κ. We can view I as the control set and J as the variable set. For every learned factor k ∈ [dh], factor sets I, J and action a ∈ A, we estimate the model P̂(ŝh[k] | ŝh−1[I], ŝh−1[J ], a) using count-based statistics on dataset D (line 3).\n\nConsider the case where ĉht = cht for every t ∈ [h], and where we ignore the label permutation for brevity. If I contains the parent factors pt(k), then we expect the value of P̂(ŝ′[k] | ŝ[I], ŝ[J ], a) ≈ Tk(ŝ′[k] | ŝ[pt(k)], a) to not change significantly on varying either the set J or its values. This motivates us to define the learned parent set as the I which achieves the minimum value of this gap (line 4). When computing the gap, we take the max only over those values of ŝ[I] and ŝ[J ] which can be reached jointly using a policy in Ψh−1. This is important since we can only reliably estimate the model for reachable factor values. The learned parent function p̂th need not be identical to pth even up to relabeling. However, finding the exact parent factors is not necessary for learning an accurate model, and may even be impossible. For example, two factors may always take the same value, making it impossible to distinguish between them. We use the learned parent function p̂th to define T̂h similar to the structure of T (line 5).\n\nLearning a Policy Cover. We plan in the latent space using the estimated model {T̂t}_{t=1}^h to find a policy cover for time step h. Formally, for every I ∈ C≤2κ([dh]) and Z ∈ {0, 1}^{|I|}, we find a policy ϕ̂hIZ to reach {ŝh[I] = Z} using the planner (Algorithm 1, line 7). This policy acts on the learned state space and is easily lifted to act on observations by composition with the learned decoder. We add every policy that achieves a return of at least O(ηmin) to Ψh (line 8)." }, { "heading": "5 THEORETICAL ANALYSIS AND DISCUSSION", "text": "In this section, we present theoretical guarantees for FactoRL. For technical reasons, we make the following realizability assumption on the function classes F and G.\n\nAssumption 3 (Realizability). For any h ∈ [H], i ∈ [d] and distribution ρ ∈ ∆({0, 1}), there exists gihρ ∈ G such that for all (x, a, x′) ∈ Xh−1 × A × Xh and x̌ = x′[chh(i)] we have:\n\ngihρ(x, a, x̌) = Ti(φ⋆i(x̌) | φ⋆(x), a) / ( Ti(φ⋆i(x̌) | φ⋆(x), a) + ρ(φ⋆i(x̌)) ).\n\nFor any h ∈ [H], u, v ∈ [m] with u ≠ v, and any D ∈ ∆(Sh), there exists fuvD ∈ F satisfying:\n\n∀ s ∈ supp(D), x ∈ supp(q(· | s)):  fuvD(x[u], x[v]) = D(x[u], x[v]) / ( D(x[u], x[v]) + D(x[u]) D(x[v]) ).\n\nAssumption 3 requires the function classes to be expressive enough to represent optimal solutions for our regression tasks. Realizability assumptions are common in the literature and are in practice satisfied by using deep neural networks (Sen et al., 2017; Misra et al., 2020).\n\nTheorem 1 (Main Theorem). 
For any δ > 0, FactoRL returns a transition function T̂h, a parent function ĉhh, a decoder φ̂h, and a set of policies Ψh for every h ∈ {2, 3, · · · , H}, which with probability at least 1 − δ satisfy: (i) ĉhh is equal to chh up to permutation, (ii) Ψh is a 1/2-policy cover of Sh, (iii) for every h ∈ [H], there exists a permutation mapping θh : {0, 1}^d → {0, 1}^d such that for every s ∈ Sh−1, a ∈ A, s′ ∈ Sh and x′ ∈ Xh we have:\n\nP(φ̂h(x′) = θh⁻¹(s′) | s′) ≥ 1 − O( η²min / (κH) ),   ‖ T(s′ | s, a) − T̂h(θh⁻¹(s′) | θ⁻¹h−1(s), a) ‖_TV ≤ ηmin / (8H),\n\nand the sample complexity is poly(d^{16κ}, |A|, H, 1/ηmin, 1/δ, 1/βmin, 1/σ, ln m, ln |F|, ln |G|).\n\nDiscussion. The proof and the detailed analysis of Theorem 1 have been deferred to Appendices C-D. Our guarantees show that FactoRL is able to discover the latent emission structure, learn a decoder, estimate the model, and learn a policy cover for every timestep. We set the hyperparameters in order to learn a 1/2-policy cover; however, they can also be set to achieve a desired accuracy for the decoder or the transition model. This gives a polynomial sample complexity that depends on the desired accuracy. It is straightforward to plan a near-optimal policy for a given reward function in our learned latent space, using the estimated model and the learned decoder. This incurs zero sample cost apart from the samples needed to learn the reward function.\n\nOur results show that the sample complexity depends polynomially on the number of factors and only logarithmically on the number of atoms. This appeals to real-world problems where d and m can be quite large. We also depend logarithmically on the size of the function classes. This allows us to use exponentially large function classes; further, as stated before, our results can also be easily extended to Rademacher complexity. Our algorithm only makes a polynomial number of calls to the computational oracles. Hence, if these oracles can be implemented efficiently then our algorithm has polynomial computational complexity. The squared-loss oracles are routinely used in practice, but planning in a fully-observed factored MDP is EXPTIME-complete (see Theorem 2.24 of Mausam (2012)). However, various approximation strategies based on linear programming and dynamic programming have been employed successfully (Guestrin et al., 2003). These assumptions provide a black-box mechanism to leverage such efforts. Note that all computational oracles incur no additional sample cost and can be simply implemented by enumeration over the search space.\n\nComparison with Block MDP Algorithms. Our work is closely related to algorithms for Block MDPs, which can be viewed as a non-factorized version of our setting. Du et al. (2019) proposed a model-based approach for Block MDPs. They learn a decoder for a given time step by training a classifier to predict the decoded state and action at the last time step. In our case, this results in a classification problem over exponentially many classes, which can be practically undesirable. In contrast, Misra et al. (2020) proposed a model-free approach that learns a decoder by training a classifier to distinguish between real and imposter transitions. Since optimal policies for factored MDPs do not factorize, a model-free approach is unlikely to succeed (Sun et al., 2019). Feng et al. (2020) proposed another approach for solving Block MDPs. They assume access to a purely unsupervised learning oracle that can learn an accurate decoder using a set of observations.
This oracle assumption is significantly stronger than those made in Du et al. (2019) and Misra et al. (2020), and reduces the challenge of learning the decoder. Crucially, these three approaches have a sample complexity guarantee which depends polynomially on the size of latent state space. This yields an exponential dependence on d when applied to our setting. It is unclear if these approaches can be extended to give polynomial dependence on d. For general discussion of related work see Appendix B.\nProof of Concept Experiments. We empirically evaluate FactoRL to support our theoretical results, and to provide implementation details. We consider a problem with d factors each emitting 2 atoms. We generate atoms for factor s[i], by first defining a vector zi = [1, 0] if s[i] = 0, and zi = [0, 1] otherwise. We then sample a scalar Gaussian noise gi with 0 mean and 0.1 standard deviation, and add it to both component of zi. Atoms emitted from each factor are concatenated to generate a vector z ∈ R2d. The observation x is generated by applying a fixed time-dependent permutation to z to shuffle atoms from different factors. This ensures that an algorithm cannot figure out the children function by relying on the order in which atoms are presented. We consider an action space A = {a1, a2, · · · , ad} and non-stationary dynamics. For each time step t ∈ [H], we define σt as a fixed permutation of {1, 2, · · · , d}. Dynamics at time step t are given by: Tt(st+1 | st, a) = ∏d i=1 Tti(st+1[i] | st[i], a), where Tti(st+1[i] = st[i] | st[i], a) = 1 for all a 6= aσt(i), and Tti(st+1[i] = 1− st[i] | st[i], aσt(i)) = 1. We evaluate on the setting d = 10 and H = 10. We implement model classes F and G using feed-forward neural networks. Specifically, for G we apply the Gumbel-softmax trick to model the bottleneck following Misra et al. (2020). We train the models using cross-entropy loss instead of squared loss that we use for theoretical analysis.3 For the independence test task, we declare two atoms to be independent, if the best log-loss on the validation set is greater than c. We train the model using Adam optimization and perform model selection using a held-out set. We defer the full model and training details to Appendix F.\nFor each time step, we collect 20,000 samples and share them across all routines. This gives a sample complexity of 20, 000×H . We repeat the experiment 3 times and found that each time, the model was able to perfectly detect the latent child function, learn a 1/2-policy cover, and estimate the model with error < 0.01. This is in sync with our theoretical findings and demonstrates the empirical use of FactoRL. We will make the code available at: https://github.com/cereb-rl. Conclusion. We introduce Factored Block MDPs that model the real-world difficulties of richobservation environments with exponentially large latent state spaces. We also propose a provable RL algorithm called FactoRL for solving a large class of Factored Block MDPs. We hope the setting and ideas in FactoRL will stimulate both theoretical and empirical work in this important area.\n3We can also easily modify our proof to use cross-entropy loss by using generalization bounds for log-loss (see Appendix E in Agarwal et al. (2020))" }, { "heading": "APPENDIX ORGANIZATION", "text": "This appendix is organized as follows.\n• Appendix A provides a list of notations used in this paper. 
• Appendix B covers related work • Appendix C describes the independence test algorithm and its sample complexity guarantees • Appendix D provides sample complexity guarantees for FactoRL • Appendix E provides list of supporting results • Appendix F provides details of the experimental setup and optimization" }, { "heading": "A NOTATIONS", "text": "We present notations and their definition in Table 1. In general, calligraphic notations represent sets. All logarithms are with respect to base e." }, { "heading": "B RELATED WORK", "text": "There is a rich literature on sample-efficient reinforcement learning in tabular MDPs with a small number of observed states (Brafman & Tennenholtz, 2002; Strehl et al., 2006; Kearns & Singh, 2002; Jaksch et al., 2010; Azar et al., 2017; Jin et al., 2018). While recent state-of-the-art results along this line achieve near-optimal sample complexity, these algorithms do not exploit the latent structure in the environment, and therefore, cannot scale to many practical settings such as rich-observation environments with possibly a large number of factored latent states.\nIn order to overcome this challenge, one line of research has been focusing on factored MDPs (Kearns & Koller, 1999; Guestrin et al., 2002; 2003; Strehl et al., 2010), which allow a combinatorial number of observed states with a factorized structure. Planning in factored MDPs is EXPTIME-complete (Mausam, 2012) yet often tractable in practice, with factored MDPs statistically learnable with polynomial samples in the number of parameters that encode the factored MDP (Osband & Van Roy, 2014; Li et al., 2011). There has also been several empirical works that either focus on the factored state space setting (e.g., Kim & Mnih (2018); Thomas et al. (2018); Laversanne-Finot et al. (2018); Miladinović et al. (2019)), or the factored action space setting (e.g., He et al. (2016); Sharma et al. (2017)). However, these works do not directly address our problem and do not provide sample complexity guarantees.\nAnother line of work focuses on exploration in a rich observation environment. Empirical results (Tang et al., 2017; Chen et al., 2017; Bellemare et al., 2016; Pathak et al., 2017) have achieved inspiring performance on several RL benchmarks, while theoretical works (Krishnamurthy et al., 2016; Jiang et al., 2017) show that it is information-theoretically possible to explore these environments. As discussed before, recent works of Du et al. (2019), Misra et al. (2020) and Feng et al. (2020) provide computationally and sample efficient algorithms for Block MDP which is a rich-observation setting with a latent non-factored state space. Nevertheless, this line of results crucially relies on the number of latent states being relatively small.\nFinally, we comment that the contrastive learning technique used in this paper has been used by other reinforcement learning algorithms for learning feature representation (e.g., Kim et al. (2019); Nachum et al. (2019); Srinivas et al. (2020)) without theoretical guarantee.\nC INDEPENDENCE TESTING USING NOISE CONTRASTIVE ESTIMATION\nIn this section, we introduce the independence testing algorithm, Algorithm 5 and provide its theoretic guarantees. Algorithm 5 will be used in Algorithm 2 as a subroutine for determining if two atoms are emitted from the same latent factor. We comment that the high-level idea of Algorithm 5 is similar to Sen et al. (2017), which reduces independence testing to regression by adding imposter samples." 
}, { "heading": "C.1 ALGORITHM DESCRIPTION", "text": "Let D ∈ ∆(Sh−1 × A) be our roll-in distribution that induces a probability distribution PD ∈ ∆(Xh) over observations at time step h. Let u, v ∈ [m] be two different atoms, and PD(x[u], x[v]),PD(xh[u]) and PD(xh[v]) be the joint and marginal distributions over xh[u], xh[v] with respect to roll-in distribution D. The goal of our algorithm to determine if xh[u] and xh[v] are independent under PD.\nAlgorithm 5 IndTest(F ,D, u, v, β) Oraclized Independency Test. We initialize Dtrain = ∅. 1: Initialize Dtrain = ∅ and sample z1, z2, · · · , zn ∼ Bern( 12 ). 2: for i ∈ [n] do 3: if zi = 1 then 4: Dtrain ← Dtrain ∪ {(x(i,1)[u], x(i,1)[v], 1)}. 5: else 6: Dtrain ← Dtrain ∪ {(x(i,1)[u], x(i,2)[v], 0)}. 7: Compute f̂ := arg minf∈F L(Dtrain, f), where\nL(Dtrain, f) := 1\nn\n∑\n(x[u],x[v],z)∈Dtrain {f(x[u], x[v])− z}2 .\n8: return Independent if L(Dtrain, f̂) > 0.25− β2/103 else return Dependent.\nWe solve this task using the IndTest algorithm (Algorithm 5) which takes as input a dataset of observation pairs D = {(x(i,1), x(i,2))}ni=1 where x(i,1), x(i,2) ∼ PD(·, ·), and a scalar β ∈ (0, 1). We use D to create a dataset Dtrain of real and imposter atom pairs (x[u], x[v]). This is done by taking every datapoint in D and sampling a Bernoulli random variable zi ∼ Bern(1/2) (line 1). If zi = 1 then we add the real pair (x(i,1)[u], x(i,1)[v], 1) to Dtrain (line 4), otherwise we add the imposter pair (x(i,1)[u], x(i,2)[v], 0) (line 6). We train a classifier to predict if a given atom pair (x[u], x[v]) is real or an imposter (line 7). The Bayes optimal classifier for this problem is given by:\n∀x ∈ supp(PD), f?D(x[u], x[v]) := PD(z = 1 | x[u], x[v]) = PD(x[u], x[v])\nPD(x[u], x[v]) + PD(x[u])P(x[v]) .\nIf x[u] and x[v] are independent then we have PD(x[u], x[v]) = PD(x[u])PD(x[v]) everywhere on the support of PD. This implies f?D(x) = 12 and its training loss will concentrate around 0.25. Intuitively, this can be interpreted as the classifier having no information to tell real samples from imposter samples. However, if x[u], x[v] are dependent and ‖PD(x[u], x[v])− PD(x[u])PD(x[v])‖TV ≥ β then we can show the training loss of f? is less than 0.25 − O(β2) with high probability. The remainder of this section is devoted to a rigorous proof for this argument." }, { "heading": "C.2 ANALYSIS OF ALGORITHM 5", "text": "Before analyzing Algorithm 5, we want to slightly simplify the problem in terms of notations. We introduce two simple notations X and Y which represents the random variables x[u] and x[v], respectively. We will simply use D to denote the joint distribution of X and Y . Define Dtrain to be the distribution of the training data (Xtrain, Ytrain, z) produced in Algorithm 5. It’s easy to verify that\nDtrain(Xtrain, Ytrain, z) = 1\n2 [zD(Xtrain, Ytrain) + (1− z)D(Xtrain)D(Ytrain)] . (1)\nSuppose the distribution D is specially designed such that at least one of the following hypothesis holds (which can be guaranteed when we invoke Algorithm 5):\nH0 : ‖D(X,Y )−D(X)D(Y )‖1 ≥ β\n2\nv.s. H1 : ‖D(X,Y )−D(X)D(Y )‖1 ≤ β2\n103 .\nIn the remaining part, we will prove that Algorithm 5 can correctly distinguish between H0 and H1 with high probability." }, { "heading": "C.2.1 TWO PROPERTIES OF THE BAYES OPTIMAL CLASSIFIER", "text": "Our first lemma shows that the Bayes optimal classifier for the optimization problem in line 7 is a constant function equal to 1/2 if X and Y are independent.\nLemma 1 (Bayes Optimal Classifier for Independent Case). 
In Algorithm 5, if X and Y (atoms u and v) are independent under distribution D, then for the optimization problem in line 7 , the Bayes optimal classifier is given by:\n∀(Xtrain, Ytrain) f?(Xtrain, Ytrain) = 1\n2 .\nProof. From Bayes rule we have:\nf?(Xtrain, Ytrain)\n=Dtrain(z = 1 | Xtrain, Ytrain)\n= Dtrain(Xtrain, Ytrain | z = 1)Dtrain(z = 1)\nDtrain(Xtrain, Ytrain | z = 1)Dtrain(z = 1) +Dtrain(Xtrain, Ytrain | z = 0)Dtrain(z = 0)\n= Dtrain(Xtrain, Ytrain | z = 1)\nDtrain(Xtrain, Ytrain | z = 1) +Dtrain(Xtrain, Ytrain | z = 0) ,\nwhere the last identity uses Dtrain(z = 1) = Dtrain(z = 0) = 1/2.\nWhen z = 0, we collect Xtrain and Ytrain from two independent samples. Therefore, we have Dtrain(Xtrain, Ytrain | z = 0) = D(Xtrain)D(Ytrain). When z = 1, using the fact that Xtrain and Ytrain are independent under distribution D, we also have Dtrain(Xtrain, Ytrain | z = 1) = D(Xtrain, Ytrain) = D(Xtrain)D(Ytrain). Consequently, f?(Xtrain, Ytrain) ≡ 1/2.\nOur second lemma provides an upper bound for the expected training loss of the Bayes Optimal Classifier. Later we will use this lemma to show the training loss is less than 0.25−O(β2) with high probability when H0 holds.\nLemma 2 (Square Loss of the Bayes Optimal Classifier). In Algorithm 5 line 7, the Bayes optimal classifier has expected square loss\nEDtrainL(f?,Dtrain) ≤ 1 4 − ( 1 2 ED [ ∣∣∣∣ 1 2 − D(X,Y ) D(X)D(Y ) +D(X,Y ) ∣∣∣∣ ])2 ,\nProof. Recall the formula of the Bayes optimal classifier in Lemma 1,\nf?(X,Y ) = D(X,Y )\nD(X)D(Y ) +D(X,Y ) .\nPlugging it into the square loss, we obtain\nEDtrain [ (f?(X,Y )− y)2 ]\n=EDtrain [ f?(X,Y ) (f?(X,Y )− 1)2 + (1− f?(X,Y )) (f?(X,Y ))2 ]\n=EDtrain [f?(X,Y ) (1− f?(X,Y ))]\n= 1\n4 − EDtrain\n[( 1\n2 − D(X,Y ) D(X)D(Y ) +D(X,Y )\n)2 ]\n≤1 4 − ( EDtrain [ ∣∣∣∣ 1 2 − D(X,Y ) D(X)D(Y ) +D(X,Y ) ∣∣∣∣ ])2 ≤1 4 − ( 1 2 ED [ ∣∣∣∣ 1 2 − D(X,Y ) D(X)D(Y ) +D(X,Y ) ∣∣∣∣ ])2 ." }, { "heading": "C.2.2 THREE USEFUL LEMMAS", "text": "To proceed, we need to take a detour and prove three useful technical lemmas.\nLemma 3. Let µ and ν be two probability measures defined on a countable set X . If ‖µ− ν‖TV ≥ c, then\nEx∼µ ∣∣∣∣\nµ(x) µ(x) + ν(x) − 1 2 ∣∣∣∣ ≥ c 4 ." }, { "heading": "Proof.", "text": "Ex∼µ [∣∣∣∣\nµ(x) µ(x) + ν(x) − 1 2 ∣∣∣∣ ] = Ex∼µ [ 1 2 ∣∣∣∣ µ(x)− ν(x) µ(x) + ν(x) ∣∣∣∣ ]\n≥ 1 2 Ex∼µ\n[ 1{µ(x) > ν(x)} ∣∣∣∣ µ(x)− ν(x) µ(x) + ν(x) ∣∣∣∣ ]\n= 1\n2\n∑ x∈X µ(x)\n[ 1{µ(x) > ν(x)}µ(x)− ν(x)\nµ(x) + ν(x)\n]\n≥ 1 4\n∑ x∈X [1{µ(x) > ν(x)} (µ(x)− ν(x))]\n≥ c 4 .\nLemma 4. Fix δ ∈ (0, 1). Then with probability at least 1− δ, we have ∣∣∣L(f̂ ,Dtrain)− EDtrainL(f?,Dtrain) ∣∣∣ ≤ 10 √ C(F , δ)\nn ,\nwhere Dtrain is the training set consisting of n i.i.d. samples sampled from Dtrain, f̂ is the empirical minimizer of L(f,Dtrain) over F , f? is the population minimizer, and C(F , δ) := ln |F|δ is the complexity measure of function class F .\nProof. By Hoeffding’s inequality and union bound, with probability at least 1− δ, for every f ∈ F , we have\n|L(f,Dtrain)− EDtrainL(f,Dtrain)| ≤ 10 √ C(F , δ)\nn .\nBecause f̂ is the empirical optimizer,\nL(f̂ ,Dtrain) ≤ L(f?,Dtrain) ≤ EDtrainL(f?,Dtrain) + 10 √ C(F , δ)\nn .\nBecause f? is the population optimizer,\nL(f̂ ,Dtrain) ≥ EDtrainL(f̂ ,Dtrain)− √ C(F , δ)\nn ≥ EDtrainL(f?,Dtrain)− 10\n√ C(F , δ)\nn .\nCombining the two inequalities above, we finish the proof.\nFor notational convenience, we introduce the following factor distribution Dfactor defined on the same domain of (X,Y, z):\nDfactor(X,Y, z) = 1\n2 D(X)D(Y ).\nLemma 5. Suppose F contains the constant function f ≡ 1/2. 
Then with probability at least 1− δ, we have ∣∣∣∣L(f̂ ,Dtrain)− 1\n4\n∣∣∣∣ ≤ 10 √ C(F , δ)\nn + 2‖Dtrain −Dfactor‖TV.\nProof. By Lemma 4, with probability at least 1− δ, we have ∣∣∣L(f̂ ,Dtrain)− EDtrainL(f?,Dtrain) ∣∣∣ ≤ 10 √ C(F , δ)\nn . (2)\nNoticing that L is bounded by 1, we have for every f ∈ F |EDfactorL(f,Dtrain)− EDtrainL(f,Dtrain)| ≤ 2‖Dtrain −Dfactor‖TV, (3)\nwhere EDfactorL(f,Dtrain) defines the expected loss of f over Dtrain where Dtrain consists of samples i.i.d. sampled from Dfactor.\nSince y is a symmetric Bernoulli r.v. independent of (X,Y ) under distribution Dfactor, we have\nmin f∈F\nEDfactorL(f,Dtrain) = 1\n4 . (4)\nUsing the inequality |minf L1(f) − minf L2(f)| ≤ maxf |L1(f) − L2(f)| for any functionals L1, L2, along with (4) and (3) we bound the minimum loss under distribution Dtrain as:\n∣∣∣∣minf∈F EDtrainL(f,Dtrain)− 1 4 ∣∣∣∣ ≤ 2‖Dtrain −Dfactor‖TV. (5)\nCombining (5) with (2) completes the proof." }, { "heading": "C.2.3 MAIN THEOREM FOR ALGORITHM 5", "text": "Finally, we are ready to state and prove the main theorem for Algorithm 5.\nTheorem 2. Under the realizability assumption and n ≥ Ω(C(F,δ)β4 ), Algorithm 5 can distinguish\nH0 : ‖D(X,Y )−D(X)D(Y )‖1 ≥ β\n2\nv.s. H1 : ‖D(X,Y )−D(X)D(Y )‖1 ≤ β2\n103\ncorrectly with probability at least 1− δ.\nProof. If H1 holds, by Lemma 5, we have the following lower bound for the training loss of the empirical minimizer,\nL(f̂ ,D) ≥ 1 4 − 10\n√ C(F , δ)\nn − β\n2\n103 . (6)\nIn contrast, if H0 is true, applying Lemma 4, we obtain\nL(f̂ ,D) ≤ EDtrainL(f?,D) + 10 √ C(F , δ)\nn .\nInvoke Lemma 2 and Lemma 3,\nEDtrainL(f?,Dtrain) ≤ 1 4 − ( 1 2 ED [ ∣∣∣∣ 1 2 − D(X,Y ) D(X)D(Y ) +D(X,Y ) ∣∣∣∣ ])2\n≤ 1 4 − β\n2\n256 .\nTherefore, under H0, the training loss of the empirical minimizer is upper bounded as below\nL(f̂ ,D) ≤ 1 4 − β\n2\n256 + 10\n√ C(F , δ)\nn . (7)\nPlugging n ≥ O(C(F,δ)β4 ) into (6) and (7), we complete the proof.\nD THEORETICAL ANALYSIS OF FactoRL\nIn this section, we provide a detailed theoretical analysis of FactoRL. The structure of the algorithm is iterative making an inductive case argument appealing. We will, therefore, make an induction hypothesis for each time step that we will guarantee at the end of the time step.\nInduction Hypothesis. We make the following induction assumption for FactoRL under Assumption 1-3 and across all time steps. For all t ∈ {2, 3, · · · , H}, at the end of time step t (Algorithm 1, line 8), FactoRL finds a child function ĉht : [d] → 2[m], a decoder φ̂t : X → {0, 1}d, a transition function T̂t : {0, 1}d ×A → {0, 1}d, and a set of policies Ψt satisfying the following:\nIH.1 ĉht : [d] → 2m and cht : [d] → 2m are same upto relabeling, i.e, for all u, v ∈ [m] we have ĉh −1 t (u) = ĉh −1 t (v) if and only if ch −1 t (u) = ch −1 t (v). Note that a child function is\ninvertible by definition. We can ignore this label permutation and assume ĉht = cht for cleaner expressions. This can be done without any effect. We will assume ĉht = cht when stating the next three induction hypothesis.\nIH.2 There exists a permutation mapping θt : {0, 1}d → {0, 1}d and % ∈ (0, 12d ) such that for every i ∈ [d] and s ∈ St we have:\nP(φ̂t(xt)[i] = θ−1t (s)[i] | s[i]) ≥ 1− %,\nP(φ̂t(xt) = θ−1t (s) | s) ≥ 1− d% ≥ 1\n2\nThe two distributions are independent of the roll-in distribution at time step t. The first one holds as φ̂t(xt)[i] only depends upon the value xt[ĉht(i)] = xt[cht(i)] which only depends on s[i]. The second one holds as xt is independent of everything else given st. 
The form of % will become clear the end of analysis.\nIH.3 For every s ∈ St−1, s′ ∈ St and a ∈ A we have: ∥∥∥T̂t(θ−1t (s′) | θ−1t−1(s), a)− T (s′ | s, a) ∥∥∥ TV ≤ 3d(∆est + ∆app),\nwhere ∆est,∆app > 0 denote estimation and approximation errors whose form will become clear at the end of analysis.\nIH.4 For every s ∈ St and K ∈ C≤2κ([d]), let Z = s[K] and Ẑ = θ−1t (s)[K], then there exists a policy πKẐ ∈ Ψt such that:\nPπKẐ (st[K] = Z) ≥ αηt(K,Z) ≥ αηmin.\nBase Case. In the base case (t = 1), we have a deterministic start state. Therefore, we can without loss of generality assume a single factor and define ĉh1[1] = m. As we can also define ch1[1] = [m] without loss of generality, therefore, this trivially satisfies the induction hypothesis 1. We define φ̂1 : X → [0]d (Algorithm 1, line 1). This satisfies induction hypothesis 2 with θ1 being the identity map. The induction hypothesis 3 is vacuous since there is no transition function before time step 1. For the last condition, we have for any K, Z = [0]|K| and Ẑ = [0]|K|. For any policy π we have Pπ(s1[K] = Z) = Pπ(φ̂1(x1)[K] = Ẑ) = 1 ≥ ηmin2 . Note that we never take any action from this policy, therefore, we can simply define Ψ1 = ∅." }, { "heading": "D.1 GRAPH STRUCTURE IDENTIFICATION", "text": "In this section, we analyze the performance of Algorithm 2, given as input Ψh−1, φ̂h−1,F , β and n. We will analyze the performance a fixed pair of atoms u, v ∈ [m] and then apply the full result using union bound. We first state the result for the rejection sampling.\nLemma 6. For policy πI;Ẑ ∈ Ψh−1, event EI;Ẑ = {ŝh−1[I] = Ẑ} and k ∈ N, let D reject I;Ẑ := RejectSamp(πI;Ẑ , EI;Ẑ , k) be the distribution induced by our rejection sampling procedure. Let\nZ = θ(Ẑ) denote the real factor values corresponding Ẑ . Then we have:\nPDreject I;Ẑ\n(sh−1[I] = Z) ≥ 1− %− (\n1− ηmin 4\n)k . (8)\nProof. From IH.4 we have PπI;Ẑ (sh−1[I] = Z) ≥ ηmin 2 . This implies:\nPπI;Ẑ (ŝh−1[I] = Ẑ) ≥ PπI;Ẑ (ŝh−1[I] = Ẑ | sh−1[I] = Z)PπI;Ẑ (sh−1[I] = Z)\n≥ (1− d%)ηmin 2 ≥ ηmin 4 , (using IH.2 and IH.4).\nLet a = PπI;Ẑ (ŝh−1[I] = Ẑ) be the acceptance probability of event EI;Ẑ . then it is easy to see that the probability of the event occurring under DrejectI;Ẑ is:\nPDreject I;Ẑ\n( EI;Ẑ ) = a+ (1−a)a+ (1−a)2a+ · · · (1−a)k−1a = 1− (1−a)k ≥ 1− ( 1− ηmin\n4\n)k .\nWe express the desired failure probability as shown:\nPDreject I;Ẑ (sh−1[I] 6= Z) = PDreject I;Ẑ\n( sh−1[I] 6= Z, ŝh−1[I] 6= Ẑ ) +PDreject\nI;Ẑ\n( sh−1[I] 6= Z, ŝh−1[I] = Ẑ )\n(9)\nWe bound the two terms below:\nPDreject I;Ẑ\n( sh−1[I] 6= Z, ŝh−1[I] 6= Ẑ ) ≤ PDreject\nI;Ẑ\n( ŝh−1[I] 6= Ẑ ) ≤ (\n1− ηmin 4\n)k , (10)\nPDreject I;Ẑ\n( sh−1[I] 6= Z, ŝh−1[I] = Ẑ ) ≤ PDreject\nI;Ẑ\n( ŝh−1[I] = Ẑ | sh−1[I] 6= Z, ) ≤ % (11)\nCombining Equation 9, Equation 10 and Equation 11 we get:\nPDreject I;Ẑ (sh−1[I] = Z) = 1− PDreject I;Ẑ\n(sh−1[I] 6= Z) ≥ 1− %− (\n1− ηmin 4\n)k . (12)\nWe now analyze the situation for a given pair of atoms. Recall for any distribution D ∈ ∆(Sh−1) and a ∈ A, we denote D ◦ a as the distribution over Sh where s′ ∼ D ◦ a is sampled by sampling s ∼ D and then s′ ∼ T (. | s, a). We want to derive roll-in distributions at time step h, such that atoms coming from the same parent satisfy hypothesis H0 and atoms coming from different parents satisfy hypothesis H1 under this roll-in distribution. This will allow us to use independence test to identify the parent structure in the emission process. Specifically, we consider the roll-in distributions induced by DrejectI;Ẑ ◦ a for some sets I,J and action a. 
Instantiating the definition of these hypothesis from Appendix C, with these roll-in distributions and setting β = βmin gives us:\nH0 : ‖PDreject I;Ẑ ◦a(x[u], x[v])− PDreject I;Ẑ ◦a(x[u])PDreject I;Ẑ ◦a(x[v])‖1 ≥ βmin 2\nv.s. H1 : ‖PDreject I;Ẑ ◦a(x[u], x[v])− PDreject I;Ẑ ◦a(x[u])PDreject I;Ẑ ◦a(x[v])‖1 ≤ β2min 100\nLemma 7 (Same Factors). If for two atoms u, v we have ch−1h (u) = ch −1 h (v), i.e., they are from the same factor then the hypothesis H0 is true for D ◦ a for any D ∈ ∆(Sh−1) and a ∈ A. In particular, this is true for DrejectI;Ẑ ◦ a for any choice of sets I,J and action a.\nProof. Follows trivially from Assumption 2.\nLemma 8 (Different Factors). If for two atoms u, v we have ch−1h (u) = i and ch −1 h (v) = j and i 6= j, then if I contains pth(i) ∪ pth(j), then for % ≤ β 2 min 1200 and k ≥ 8ηmin ln ( 30 βmin ) , the hypothesis H1 holds for D reject I;Ẑ ◦ a for any Ẑ such that πI;Ẑ ∈ Ψh−1 and a ∈ A.\nProof. Let D′ ∈ ∆(Sh−1) be a distribution that deterministically sets sh−1[I] = Z . Then it is easy to verify that PD′◦a(xh[u], xh[v]) = PD′◦a(xh[u])PD′◦a(xh[v]) for any a ∈ A and xh ∈ Xh. Then for any Ẑ and action a ∈ A we have using triangle inequality:\n∣∣∣∣PDreject I;Ẑ ◦a(xh[u], xh[v])− PDreject I;Ẑ ◦a(xh[u])PDreject I;Ẑ ◦a(xh[v]) ∣∣∣∣ 1\n≤ ∣∣∣∣PDreject\nI;Ẑ ◦a(xh[u], xh[v])− PD′◦a(xh[u], xh[v]) ∣∣∣∣ 1 +\n∣∣∣∣PD′◦a(xh[u])− PDreject I;Ẑ ◦a(xh[u]) ∣∣∣∣ 1 + ∣∣∣∣PD′◦a(xh[v])− PDreject I;Ẑ ◦a(xh[v]) ∣∣∣∣ 1\nAs xh[u] and xh[v] come from different factors, therefore, we have\nP(xh[u], xh[v] | sh−1, a) = P(xh[u] | sh−1[I], a)P(xh[v] | sh−1[I], a).\nWe use this to bound the three terms in the summation above.∣∣∣∣PDreject I;Ẑ ◦a(xh[u], xh[v])− PD′◦a(xh[u], xh[v]) ∣∣∣∣ 1\n= ∑\nxh[u],xh[v]\n∣∣∣∣∣∣ ∑ sh−1[I] P(xh[u] | sh−1[I], a)P(xh[v] | sh−1[I], a) { PDreject I;Ẑ (sh−1[I])− PD′(sh−1[I]) }∣∣∣∣∣∣\n≤ ∑\nsh−1[I]\n∑\nxh[u],xh[v]\nP(xh[u] | sh−1[I], a)P(xh[v] | sh−1[I], a) ∣∣∣∣PDreject\nI;Ẑ (sh−1[I])− PD′(sh−1[I])\n∣∣∣∣\n≤ ∑\nsh−1[I]\n∣∣∣∣PDreject I;Ẑ (sh−1[I])− PD′(sh−1[I]) ∣∣∣∣\n= ∣∣∣∣1− PDreject I;Ẑ (sh−1[I] = Z) ∣∣∣∣+ ∑\nsh−1[I] 6=Z PDreject I;Ẑ (sh−1[I])\n= 2 ( 1− PDreject\nI;Ẑ (sh−1[I] = Z)\n) ≤ 2%+ 2 ( 1− ηmin\n4\n)k .\nThe other two terms are bounded similarly which gives us: ∣∣∣∣PDreject\nI;Ẑ ◦a(xh[u], xh[v])− PDreject I;Ẑ ◦a(xh[u])PD′◦a(xh[v]) ∣∣∣∣ 1 ≤ 6%+ 6 ( 1− ηmin 4 )k .\nWe want this quantity to be less than β 2 min\n100 to satisfy hypothesis H1. We distribute the errors equally and use ln(1 + a) ≤ a for all a > −1 to get:\n% ≤ β 2 min 1200 , k ≥ 8 ηmin ln\n( 30\nβmin\n) . (13)\nTheorem 3 (Learning ĉhh). Fix δind ∈ (0, 1). If % ≤ β 2 min 1200 and k ≥ 8ηmin ln ( 30 βmin ) and nind ≥\nO (\n1 β4min\nln m 2|A||F|(2ed)2κ+1\nδind\n) , then learned ĉhh is equivalent to chh upto label permutation with\nprobability at least 1− δind.\nProof. For any pair of atom u, v, if they are from the same factor then H0 holds from Lemma 7 and IndTest mark them dependent with probability at least 1− δ. This holds for every triplet of I,Z, a and there are at most |A|(2ed)2κ+1, of them. Hence, from union bound we mark u, v correctly as coming from different factors with probability at least 1− |A|(2ed)2κ+1δ. If u and v have different factors then for any I containing the parents of both of them, and any value of Z and a, H1 always holds from Lemma 8 and IndTest marks them as independent. Note that\nsuch an I will exists since the we iterate over all possible sets of size upto 2κ. Hence, with probability at least 1 − |A|2κδ, we find u and v to be independent for every Z and a. 
Hence, our algorithm correctly will mark them as coming from different factors.\nFor a given u and v, we correctly predict their output with probability at least 1− |A|(2ed)2κ+1δ. Therefore, using union bound we correctly output right result for each u and v with probability at least 1−|A|m2(2ed)2κ+1δ. From Theorem 2, we require nind ≥ O ( 1 β4min ln |F|δ ) . Binding |A|(2ed)2κ+1δ to δind then gives us the required value of nind to achieve a success probability of at least 1−δind. If we correctly assess the dependence for every pair of atoms correctly, then trivially partitioning them using the dependence equivalence relation gives us ĉhh which is same as chh upto label permutation." }, { "heading": "D.2 LEARNING A STATE DECODER", "text": "We focus on the task of learning an abstraction at time step h using Algorithm 3. We have access to ĉhh which is same as chh upto label permutation. We showed how to do this in Appendix C. We will ignore the label permutation to avoid having to complicate our notations. This would essentially mean that we will recover a backward decoder φ̂h = ( φ̂h1, · · · , φ̂hd ) , where there is a bijection between { φ̂hi\n} i and { φ?j } j .\nAs we learn each decoder {φ̂hi} independently of each other, therefore, we will focus on learning the decoder φ̂hi for a fixed i. The same analysis will hold for other decoder and with application of union bound, we will establish guarantees for all decoders. Further, since we are learning the decoder at a fixed time step h, therefore, we will drop the h from the subscript for brevity. We use additional shorthand described below and visualize some of them in Figure 2.\n• s and x denote a state sh−1 and an observation xh−1 at time step h− 1 • s′ and x′ denotes state sh and observation xh at time step h • s′[i] denotes ith factor of state at time step h • š denotes s[pth(i)] which is the set of parent factors of s′[i]. Recall that from the factoriza-\ntion assumption, we have T (s′[i] | s, a) = Ti(s′[i] | š, a) for any s, a. • φ̂i denotes φ̂hi decoder for ith factor at time step h • ω denotes pt(i) which is the set of indices of atoms emitted by s′[i]. • x̌ denotes x′[chh(i)] which is the collection of atoms generated by s′[i]. • N = |Ψh−1| is the size of policy cover for previous time step.\nLet D = {(x(k), a(k), x̌(k), y(k))}nabsk=1 be a dataset of nabs real transitions (y = 1) and imposter transitions (y = 0) collected in Algorithm 3, line 2-4. We define the expected risk minimizer (ERM) solution as:\nĝi = arg min g∈G\n1\nnabs\nnabs∑\nk=1\n{ g(x(k), a(k), x̌(k))− y(k) }2 (14)\nRecall that by the structure of G, we have ĝi = (ûi, φ̂i) where ŵi ∈ W2 and φ̂i ∈ Φ : X ∗ → {0, 1} is the learned decoder. Our algorithm only cares about the properties of the decoder and we throw away the regressor ûi.\nLet D(x, a, x̌) be the marginal distribution over transitions. We get the marginal distribution by marginalizing out the real (y = 1) and imposter transition (y = 0). We also define D(x, a) as the marginal distribution over x, a. We have D(x, a) = µh−1(x) 1|A| as both real and imposter transitions involve sampling x ∼ µh−1 and taking action uniformly. Recall that µh−1 is generated by roll-in with a uniformly selected policy in Ψh−1 till time step h− 1. Let P (x, a, x̌ | y = 1) be the probability of a transition being real and P (x, a, x̌ | y = 0) be the probability of the transition being imposter. 
We can express these probabilities as:\nP (x, a, x̌ | y = 1) = D(x, a)T (x̌ | x, a), P (x, a, x̌ | y = 0) = D(x, a)ρ(x̌), (15) where ρ(x̌) = E(x,a)∼D[T (x̌ | x, a)] is the marginal distribution over x̌. We will overload the notation ρ to also define ρ(x′) = E(x,a)∼D[T (x′ | x, a)]. Lastly, we can express the marginal distribution over transition as:\nD(x, a, x̌) = P (x, a, x̌ | y = 1)P (y = 1) + P (x, a, x̌ | y = 0)P (y = 0)\n= µh−1(x)\n2|A| {T (x̌ | x, a) + ρ(x̌)}\nWe start by expressing the Bayes optimal classifier for problem in Equation 14.\nLemma 9 (Bayes Optimal Classifier). Bayes optimal classifier g? for problem in Equation 14 is given by:\n∀(x, a, x̌) ∈ suppD, g?(x, a, x̌) = Ti(φ ? i (x̌) | φ?(x)[pt(i)], a)\nTi(φ?i (x̌) | φ?(x)[pt(i)], a) + ρ(φ?i (x̌)) (16)\nProof. The Bayes optimal classifier is given by g?(x, a, x̌) = P (y = 1 | x, a, x̌) which can be expressed using Bayes rule as:\nP (y = 1 | x, a, x̌) = P (x, a, x̌ | y = 1)P (y = 1) P (x, a, x̌ | y = 1)P (y = 1) + P (x, a, x̌ | y = 0)P (y = 0)\n= P (x, a, x̌ | y = 1)\nP (x, a, x̌ | y = 1) + P (x, a, x̌ | y = 0) , using p(y) = Bern( 1/2)\n= D(x, a)T (x̌ | x, a)\nD(x, a)T (x̌ | x, a) +D(x, a)ρ(x̌)\n= T (x̌ | x, a)\nT (x̌ | x, a) + ρ(x̌)\n= qi(x̌ | φ?i (x̌))Ti(φ?i (x̌) | x, a)\nqi(x̌ | φ?i (x̌))Ti(φ?i (x̌) | x, a) + qi(x̌ | φ?i (x̌))ρ(φ?i (x̌))\n= Ti(φ\n? i (x̌) | x, a)\nTi(φ?i (x̌) | x, a) + ρ(φ?i (x̌)) =\nTi(φ ? i (x̌) | φ?(x)[pt(i)], a)\nTi(φ?i (x̌) | φ?(x)[pt(i)], a) + ρ(φ?i (x̌)) .\nTheorem 4 (Decoder Regression Guarantees). For any given δabs ∈ (0, 1) and nabs ∈ N we have the following with probability at least 1− δabs:\nEx,a,x̌∼D [ (ĝi(x, a, x̌)− g?(x, a, x̌))2 ] ≤ ∆(nabs, δabs, |G|),\nwhere ∆(nabs, δabs, |G|) := cnabs ln |G| δabs and c is a universal constant.\nProof. This is a standard regression guarantee derived using Bernstein’s inequality with realizability (3). For example, see Proposition 11 in Misra et al. (2020) for proof.\nCorollary 5. For any given δabs ∈ (0, 1) and nabs ∈ N we have the following with probability at least 1− δabs:\nEx,a,x̌∼D [|ĝi(x, a, x̌)− g?(x, a, x̌)|] ≤ √ ∆(nabs, δabs, |G|) (17)\nProof. Applying Jensen’s inequality (E[ √ Z] ≤ √ E[Z]) to Theorem 4 gives us:\nEx,a,x̌∼D [|ĝ(x, a, x̌)− g?(x, a, x̌)|] = Ex,a,x̌∼D [√ |ĝ(x, a, x̌)− g?(x, a, x̌)|2 ]\n≤ √ Ex,a,x̌∼D [ (ĝ(x, a, x̌)− g?(x, a, x̌))2 ] ≤ √ ∆(nabs, δabs, |G|).\nCoupling Distribution We introduce a coupling distribution following Misra et al. (2020).\nDcoup(x, a, x̌1, x̌2) = D(x, a)ρ(x̌1)ρ(x̌2). (18)\nWe also define the following quantity which will be useful for stating our results:\nξ(x̌1, x̌2, x, a) = T (x̌1 | x, a) ρ(x̌1) − T (x̌2 | x, a) ρ(x̌2) . (19)\nLemma 10. For any fixed δabs ∈ (0, 1) we have the following with probability at least 1− δabs:\nEx,a,x̌1,x̌2∼Dcoup [ 1{φ̂i(x̌1) = φ̂i(x̌2)} |ξ(x̌1, x̌2, x, a)| ] ≤ 8 √ ∆(nabs, δabs, |G|).\nProof. We define a shorthand notation E = 1{φ̂i(x̌1) = φ̂i(x̌2)} for brevity. We also define a different coupled distribution D′coup given below:\nD′coup(x, a, x̌1, x̌2) = D(x, a)D(x̌1 | x, a)D(x̌2 | x, a) (20)\nwhere D(x̌ | x, a) = 12 {T (x̌ | x, a) + ρ(x̌)}. It is easy to see that marginal distribution of D′coup over x, a, x̌1 is same as D(x, a, x̌1).\nWe first use the definition of ξ (Equation 19) and g? (Equation 9) to express their relation:\n|g?(x, a, x̌1)− g?(x, a, x̌2)| = ρ(x̌1)ρ(x̌2) |ξ(x̌1, x̌2, x, a)|\n(T (x̌1 | x, a) + ρ(x̌1))(T (x̌1 | x, a) + ρ(x̌2))\n= ρ(x̌1)ρ(x̌2)\n4D(x̌1 | x, a)D(x̌2 |, x, a) |ξ(x̌1, x̌2, x, a)| . (21)\nThe second line uses the definition ofD(x̌ | x, a). 
We can view ρ(x̌1)D(x̌1|x,a) and ρ(x̌2) D(x̌2|x,a) as importance weight terms. Multiplying both sides by E and taking expectation with respect to D′coup then gives us:\nED′coup [E |g?(x, a, x̌1)− g?(x, a, x̌2)|] = 1\n4 EDcoup [E|ξ(x̌1, x̌2, x, a)|] (22)\nWe bound the left hand side of Equation 22 as shown below:\nED′coup [E |g?(x, a, x̌1)− g?(x, a, x̌2)|] ≤ ED′coup [E |g?(x, a, x̌1)− ĝi(x, a, x̌1)|] + ED′coup [E |ĝi(x, a, x̌1)− g?(x, a, x̌2)|] = ED′coup [E |g?(x, a, x̌1)− ĝi(x, a, x̌1)|] + ED′coup [E |ĝi(x, a, x̌2)− g?(x, a, x̌2)|] = 2ED′coup [E |g?(x, a, x̌1)− ĝi(x, a, x̌1)|] = 2ED [E |g?(x, a, x̌)− ĝ(x, a, x̌)|] ≤ 2 √ ∆(nabs, δabs,G)\nHere the first inequality follows from triangle inequality. The second step is key where we use ĝi(x, a, x̌1) = ĝi(x, a, x̌2) whenever E = 1. This itself follows from the bottleneck structure of G where ĝi(x, a, x̌i) = ŵi(x, a, φ̂i(x̌)). The third step uses the symmetry of x̌1 and x̌2 inD′coup whereas the fourth step uses the fact that marginal distribution of D′coup is same as D. Lastly, final inequality uses E ≤ 1 and the result of Corollary 5. Combining the derived inequality with Equation 22 proves our result.\nWe define the quantity P(s′[i] = z | D′) := E(s,a)∼D′ [1{s′[i] = z}] for any distribution D′ ∈ ∆(Sh−1 ×A). From the definition of ρ, we have ρ(s′[i] = z) = P(s′[i] = z | D). Intuitively, as we have policy cover at time step h − 1 and we take actions uniformly, therefore, we expect to have good lower bound on P(s′[i] = z | D) for every i ∈ [d] and reachable z ∈ {0, 1}. Note that if z = 0 (z = 1) is not reachable then it means we always have sh[i] = 1 (sh[i] = 0) from our reachability assumption (see Section 2). We formally prove this next which will be useful later. Lemma 11. For any z ∈ {0, 1} such that s′[i] = z is reachable, we have:\nρ(s′[i] = z) = P(s′[i] = z | D) ≥ αηmin N |A|\nProof. Fix z in {0, 1}. As s′[i] = z is reachable, therefore, from the definition of ηmin we have:\nηmin ≤ sup π∈Π Pπ(s′[i] = z) ≤ sup π\n∑\nš,a\nPπ(š)T (s′[i] = z | š, a)\n≤ ∑\nš,a\nsup π∈Π\nPπ(š)T (s′[i] | š, a) = ∑\nš,a\nη(š)T (s′[i] | š, a)\nWe use the derived inequality to bound P(s′[i] = z | D) as shown:\nP(s′[i] = z | D) = ∑\nš,a\nµh−1(š) |A| T (s ′[i] = z | š, a) ≥ α N |A|\n∑\nš,a\nη(š)T (s′[i] = z | š, a) ≥ αηmin N |A| .\nThe first inequality uses the fact that µh−1 is created by roll-in with a uniformly selected policy in Ψh−1 which is an α policy cover. Recall that N = |Ψh−1|. The second inequality uses the derived result above.\nLemma 12. For any x̌1, x̌2 such that φ?i (x̌1) and φ?i (x̌2) is reachable, we have:\nEx,a∼D [|ξ(x̌1, x̌2, x, a)|] ≥ 1{φ?i (x̌1) 6= φ?i (x̌2)} αηminσ\n2N .\nProof. For any x, a, x̌1, x̌2 we can express ξ (Equation 19) as:\n|ξ(x̌1, x̌2, x, a)| = ∣∣∣∣ T (φ?i (x̌1) | φ?(x)[pt(i)], a)\nρ(φ?i (x̌1)) − T (φ\n? i (x̌2) | φ?(x)[pt(i)], a)\nρ(φ?(x̌2))\n∣∣∣∣\nwhere we use the factorization assumption and decodability assumption. Note that we are implicitly assuming φ?i (x̌1) and φ ? i (x̌2) are reachable, for the quantity ξ(x̌1, x̌2, x, a) to be well defined.\nWe define Di to be the marginal distribution over S[pt(i)] × A. Taking expectation on both side gives us:\nEx,a∼D [|ξ(x̌1, x̌2, x, a)|] = Eš,a∼Di [∣∣∣∣ T (φ?i (x̌1) | š, a) ρ(φ?i (x̌1)) − T (φ ? i (x̌2) | š, a) ρ(φ?i (x̌2)) ∣∣∣∣ ]\n= ∑\nš,a\n|PDi(š, a | φ?i (x̌1))− PDi(š, a | φ?i (x̌2))|\n= 2 ‖PDi(., . | φ?i (x̌1))− PDi(., . | φ?i (x̌2))‖TV\nThe second equality uses the definition of backward dynamics PDi over S[pt(i)] × A and the identity ρ(s′[i]) = P(s′[i] | D). 
If φ?i (x̌1) = φ?i (x̌2) then the quantity on the right is 0. Otherwise, this quantity is given by 2 ‖PDi(., . | s′[i] = 1)− PDi(., . | s′[i] = 0‖TV. In the later case, both\ns′[i] = 1 and s′[i] = 0 configurations are reachable, and without loss of generality we can assume P(s′[i] = 0 | D) ≥ 1/2. Our goal is to bound this term using the margin assumption (Assumption 1). We do so using importance weight as shown below:\n2 ‖PDi(., . | s′[i] = 1)− PDi(., . | s′[i] = 0)‖TV = ∑\nš,a\n∣∣∣∣Pui(š, a | s′[i] = 1) PDi(š, a | s′[i] = 1) Pui(š, a | s′[i] = 1) − Pui(š, a | s′[i] = 0) PDi(š, a | s′[i] = 0) Pui(š, a | s′[i] = 0) ∣∣∣∣\n= ∑\nš,a\nDi(š, a)\nui(š, a) ∣∣∣∣Pui(š, a | s′[i] = 1) P(s′[i] = 1 | ui) P(s′[i] = 1 | Di) − Pui(š, a | s′[i] = 0) P(s′[i] = 0 | ui) P(s′[i] = 0 | Di) ∣∣∣∣\n≥ min š,a\nDi(š, a)\nui(š, a) P(s′[i] = 0 | Di) P(s′[i] = 0 | ui) ‖Pui(., . | s′[i] = 1)− Pui(., . | s′[i] = 0)‖TV\n≥ min š,a\nDi(š, a)\nui(š, a) P(s′[i] = 0 | Di) P(s′[i] = 0 | ui) σ\nThe first step applies importance weight. As ui has support over all reachable configurations š and actions a ∈ A, hence, we can apply importance weight. The second step uses the definition of backward dynamics (PD,Pui). The third step uses Lemma H.1 of Du et al. Du et al. (2019) (see Lemma 24 in Appendix E for statement). Finally, the last step uses Assumption 1. We bound the two multiplicative terms below:\nWe have D(š, a) = µh−1(š) 1|A| ≥ αηmin N |A| . The first equality uses the fact that actions are taken uniformly and second inequality uses the fact that µh−1 is an α-policy cover. As ui is the uniform distribution over S[pt(i)] × A, therefore, we have ui(š, a) = 12|pt(i)||A| . This gives us D(š,a) ui(š,a)\n≥ αηmin N 2 |pt(i)|. We bound the other multiplicative term as shown below:\nP(s′[i] = 0 | Di) P(s′[i] = 0 | ui) ≥ P(s′[i] = 0 | Di) ≥ 1 2 .\nCombining the lower bounds for the two multiplicative terms and using 2|pt(i)| ≥ 1 we get: 2 ‖PDi(., . | s′[i] = 1)− PDi(., . | s′[i] = 0)‖TV ≥ αηminσ\n2N . (23)\nLastly, recall that our desired result is given by 2 ‖PDi(., . | s′[i] = 1)− PDi(., . | s′[i] = 0)‖TV whenever φ?i (x̌1) 6= φ?i (x̌2) and 0 otherwise. Therefore, using the derived lower bound multiplied by 1{φ?i (x̌1) 6= φ?i (x̌2)} gives us the desired result. Corollary 6. We have the following with probability at least 1− δabs:\nEx̌1,x̌2∼ρ [ 1{φ̂i(x̌1) = φ̂i(x̌2)}1{φ?i (x̌1) 6= φ?i (x̌2)} ] ≤ 16N αηminσ √ ∆(nabs, δabs, |G|).\nProof. The proof trivially follows from applying the bound in Lemma 12 to Lemma 10 as shown below:\nEx,a,x̌1,x̌2∼Dcoup [ 1{φ̂i(x̌1) = φ̂i(x̌2)} |ξ(x̌1, x̌2, x, a)| ]\n= Ex̌1,x̌2∼ρ [ 1{φ̂i(x̌1) = φ̂i(x̌2)}Ex,a,∼D [|ξ(x̌1, x̌2, x, a)|] ]\n≥ αηminσ 2N\nEx̌1,x̌2∼ρ [ [1{φ̂i(x̌1) = φ̂i(x̌2)}1{φ?i (x̌1) 6= φ?i (x̌2)} ]\nThe inequality here uses Lemma 12. The left hand side is bounded by 8 √\n∆(nabs, δabs, |G|) using Lemma 10. Combining the two bounds and rearranging the terms proves the result.\nAt this point we analyze the two cases separately. In the first case, s′[i] can be set to both 0 and 1. In the second case, s′[i] can only be set to one of the values and we will call s′[i] as degenerate at time step h. We will show how we can detect the second case, at which point we just output a decoder that always outputs 0. We analyze the first case below.\nD.2.1 CASE A: WHEN s′[i] CAN TAKE ALL VALUES\nCorollary 6 allows us to define a correspondence between learned state (i.e., output of φ̂i) and the actual state (i.e., output of φ?i ). We show this correspondence in the result. Theorem 7 (Correspondence Theorem). 
For any state factor s′[i], there exists exists û0 ∈ {0, 1} and û1 = 1− û0 with probability at least 1− δabs such that:\nP(φ̂i(x̌) = û0 | s′[i] = 0) ≥ 1− %, P(φ̂i(x̌) = û1 | s′[i] = 1) ≥ 1− %,\nwhere % := 16N 2|A|\nα2η2minσ\n√ ∆(nabs, δabs, |G|) and x̌ ∼ ρ, provided % ∈ ( 0, 12 ) .\nProof. For any u, z ∈ {0, 1} we define the following quantities:\nPz := Ex̌∼ρ[1{φ?i (x̌) = z}], Puz := Ex̌∼ρ[1{φ̂i(x̌) = u}1{φ?i (x̌) = z}]. It is easy to see that these quantities are related by: Pz = Puz + P(1−u)z . We define û0 = arg maxu∈{0,1} Pu0 and û1 = 1− û0. This can be viewed as the learned bit value which is in most correspondence with s[i] = 0. We will derive lower bound on Pû00/P0 and Pû11/P1 which gives us the desired result. We first derive the following lower bound on Pû00:\nPû00 ≥ Pû00 + Pû10 2 ≥ P0 2 , (24)\nwhere we use the fact that max is greater than average. Further, for any u, z ∈ {0, 1} we have:\nEx̌2,x̌2∼ρ [ 1{φ̂i(x̌1) = φ̂i(x̌2)}1{φ?i (x̌1) 6= φ?i (x̌2)} ]\n≥ Ex̌1,x̌2∼ρ [ 1{φ̂i(x̌1) = u}1{φ̂i(x̌2) = u}1{φ?i (x̌1) = z}1{φ?i (x̌2) = 1− z} ]\n= PuzPu(1−z)\nWe define a shorthand notation ∆′ := 16Nαηminσ √\n∆(nabs, δabs, |G|). Then from Corollary 6 we have proven that PuzPu(1−z) ≤ ∆′ for any u, z ∈ {0, 1}. This allows us to write:\nPû11 = P1 − Pû01 ≥ P1 − ∆′\nPû00 ⇒ Pû11 P1 ≥ 1− ∆\n′\nP1Pû00 ≥ 1− 2∆\n′\nP0P1\nwhere the last inequality uses Equation 24. We will derive the same result for Pû00/P0.\nPû00 = P0 − Pû10 ≥ P0 − ∆′\nPû11 ⇒ Pû00 P0 ≥ 1− ∆\n′\nP0Pû11 ≥ 1− ∆\n′\nP0P1 − 2∆′ ,\nwhere the last inequality uses derived bound for Pû11/P1. If we assume ∆′ ≤ P0P14 then we get Pû00 P0 ≥ 1− 2∆′P0P1 .\nAs P0 + P1 = 1, therefore, we get P0P1 = P0 − P 20 = P1 − P 21 . If P0 ≤ 12 then P0 − P 20 ≥ P02 . Otherwise, P0 > 12 which implies P1 ≤ 12 and P1 − P 21 ≥ P12 . This gives us P0P1 ≥ min{P02 , P12 }. Using lower bounds for P0 and P1 from Lemma 11 gives us P0P1 ≥ αηmin2N |A| , and allows us to write:\nPû11 P1 ≥ 1− 4N |A|∆\n′\nαηmin ,\nPû00 P0 ≥ 1− 4N |A|∆\n′\nαηmin .\nIt is easy to verify that % = 4N |A|∆ ′ αηmin . As Pû00P0 = P(φ̂i(x ′) = û0 | s[i] = 0) and Pû11P1 = P(φ̂i(x ′) = û1 | s[i] = 1), therefore, we prove our result. The only requirement we used is that ∆′ ≤ P0P14 which is ensured if % ∈ ( 0, 12 ) .\nD.2.2 CASE B: WHEN s′[i] TAKES A SINGLE VALUE\nWe want to be able to detect this case with high probability so that we can learn a degenerate decoder that only takes value 0. This would trivially give us a correspondence result similar to Theorem 7.\nWe describe the general form of the LearnDecoder subroutine in Algorithm 6. The key difference from the case we covered earlier is line 6-10. For a given factor i, we first learn the model ĝi containing the decoder φ̂i, as before using noise contrastive learning. We then sample ndeg iid triplets Ddeg = {(xj , aj , x̌j)}ndegj=1 where (xj , aj) ∼ D and x̌j ∼ ρ (line 6-8). Recall x̌ = x[chh(i)]. Next, we compute the width of prediction values over Ddeg as defined below:\nmax j,k∈[ndeg]\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| (25)\nIf the width is smaller than a certain value then we determine the factor to be degenerate and output a degenerate decoder φ̂i := 0, otherwise, we stick to the decoder learned by our regressor task. The form of sample size ndeg will become clear at the end of analysis, and we will determine the reason for the choice of threshold for width in line 9. Intuitively, if the latent factor only takes one value then the optimal classifier will always output 1/2 and so our prediction values should be close to one another. 
However, if the latent factor takes two values then the model prediction should be distinct.\nAlgorithm 6 LearnDecoder(G,Ψh−1, ĉhh). Child function has type ĉhh : [dh]→ 2[m]\n1: for i in [dh], define ω = ĉhh(i),D = ∅,Ddeg = ∅ do 2: for nabs times do // collect a dataset of real (y = 1) and imposter (y = 0) transitions 3: Sample (x(1), a(1), x′(1)), (x(2), a(2), x′(2)) ∼ Unf(Ψh−1) ◦ Unf(A) and y ∼ Bern( 12 ) 4: If y = 1 then D ← D ∪ (x(1), a(1), x′(1)[ω], y) else D ← D ∪ (x(1), a(1), x′(2)[ω], y) 5: ĝi := ûi, φ̂i = REG(D,G) // train the decoder using noise-contrastive learning 6: for ndeg times do // detect degenerate factors 7: Sample (x(1), a(1), x′(1)), (x(2), a(2), x′(2)) ∼ Unf(Ψh−1) ◦ Unf(A) 8: Ddeg ← Ddeg ∪ {(x(1), a(1), x′(2))} 9: if maxj,k∈[ndeg] ∣∣ĝi(xj , aj , x′j [ω])− ĝi(xk, ak, x′k[ω]) ∣∣≤ α 2η2minσ 40|Ψh−1|2|A| then // max over Ddeg\n10: φ̂i := 0 // output a decoder that always returns 0 return φ̂ : X → {0, 1}dh where for any x ∈ X and i ∈ [dh] we have φ̂(x)[i] = φ̂i(x[ĉhh(i)]).\nFor convenience, we define D′(x, a, x̌) = D(x, a)ρ(x̌), and so (xj , aj , x̌j) ∼ D′. For brevity reasons, we do not add additional qualifiers to differentiate xj , aj , x̌j from the dataset of real and imposter transitions, we used in the previous section for the regression task. In this part alone, we will use xj , aj , x̌j to refer to the transitions collected for the purpose of detecting degenerate factors. Lemma 13 (Markov Bound). Let {(xj , aj , x̌j)}ndegj=1 be a dataset of iid transitions sampled from D′. Fix a > 0. Then with probability at least 1− δabs − 2ndeg √ ∆(nabs,δabs,|G|)\na we have: ∀j ∈ [ndeg], |ĝ(xj , aj , x̌j)− g?(xj , aj , x̌j | ≤ a.\nProof. It is straightforward to verify that for any (xj , aj , x̌j) we have D(xj , aj , x̌j) ≥ D′(xj ,aj ,x̌j)/2. Using Corollary 5 we get:\nEx,a,x̌∼D′ [|ĝ(x, a, x̌)− g?(x, a, x̌)|] ≤ 2 √\n∆(nabs, δabs, |G|) Let Ej denote the event {|ĝ(xj , aj , x̌j)− g?(xj , aj , x̌j)| ≤ a} and Ej be its negation, then:\nP(∩ndegj=1Ej) ≥ 1− ndeg∑\nj=1\nP(Ej) ≥ 1− 2ndeg\n√ ∆(nabs, δabs, |G|)\na ,\nwhere the first inequality uses union bound and the second inequality uses Markov’s inequality. As Corollary 5 holds with probability δabs, our overall failure probability is at most\nδabs + 2ndeg √ ∆(nabs,δabs,|G|) a .\nLemma 14. For any reachable parent factor values š, action a ∈ A and reachable s′[i] ∈ {0, 1}, we have D′(š, a, s′[i]) ≥ α\n2η2min N2|A|2 .\nProof. We have D′(š, a, s′[i]) = µh−1(š)|A| ρ(s ′[i]) ≥ α 2η2min N2|A|2 , where used the induction hypothesis IH.4 that Ψ is an α-policy cover of Sh−1 and Lemma 11. Lemma 15 (Degenerate Factors). Fix a > 0. If s′[i] only takes a single value then with probability at least 1− δabs − 2ndeg √ ∆(nabs,δabs,|G|) a we have:\nmax j,k∈[ndeg]\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| ≤ 2a\nProof. When s′[i] takes a single value then g? is the constant function 12 . For any j and k we get the following using Lemma 13 and triangle inequality.\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| ≤ |ĝ(xj , aj , x̌j)− g?(xj , aj , x̌j)|+ |g?(xk, ak, x̌k)− ĝ(xk, ak, x̌k)| ≤ 2a.\nLemma 16 (Non Degenerate Factors). Fix a > 0 and assume ndeg ≥ N 2|A|2\nα2η2min , then we have:\nmax j,k∈[ndeg]\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| ≥ α2η2minσ\n16N2|A| − 2a\nwith probability at least 1− δabs − 2ndeg √ ∆(nabs,δabs,|G|) a − 4 exp ( −α 2η2minndeg 3N2|A|2 ) .\nProof. 
Equation 23 implies that there exists š, a such that ∣∣∣∣ T (s′[i] = 1 | š, a) ρ(s′[i] = 1) − T (s ′[i] = 0 | š, a) ρ(s′[i] = 0) ∣∣∣∣ ≥ αηminσ 2N\nCombining this with Equation 21 we get:\n|g?(š, a, s′[i] = 1)− g?(š, a, s′[i] = 0)| ≥ ρ(s ′[i] = 1)ρ(s′[i] = 0)\n4D(s′[i] = 1 | š, a)D(s′[i] = 0 | š, a) αηminσ 2N\n≥ α 2η2minσ\n16N2|A| (26)\nwhere Equation 26 uses ρ(s′[i] = 1)ρ(s′[i] = 0) ≥ αηmin2N |A| , as one of the terms is at least 1/2 and other can be bounded using Lemma 11.\nSay we have two examples in our dataset, say {(x1, a1, x̌1), (x2, a2, x̌2)} without loss of generality, such that φ?(x1)[ω] = φ?(x2)[ω] = š, action a1 = a2 = a, φ?i (x̌1) = 1, and φ ? i (x̌2) = 0. Then we have:\nmax j,k∈[ndeg]\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| ≥ |ĝ(x1, a1, x̌1)− ĝ(x2, a2, x̌2)|\n≥ |g?(š, a, 1)− g?(š, a, 0)| − |ĝ(x1, a1, x̌1)− g?(x1, a1, x̌1)| − |ĝ(x2, a2, x̌2)− g?(x2, a2, x̌2)|\n≥ α 2η2minσ\n16N2|A| − 2a (using Equation 26 and Lemma 13)\nWe use Lemma 13 which has a failure probability of δabs + 2ndeg √ ∆(nabs,δabs,|G|) a . Further, we also assume that our dataset contains both (š, a, s′[i] = 1) and (š, a, s′[i] = 0). Probability of one of these\nevents is given by Lemma 14. Therefore, if ndeg ≥ N 2|A|2\nα2η2min then from Lemma 25 and union bound,\nthe probability that at least one of these transitions does not occur is given by 4 exp ( −α\n2η2minndeg 3N2|A|2\n) .\nThe total failure probability is given by union bound and computes to:\nδabs + 2ndeg\n√ ∆(nabs, δabs, |G|)\na + 4 exp\n( −α\n2η2minndeg 3N2|A|2\n) .\nIf we fix a = α 2η2minσ\n80N2|A| then in the two case we have:\n(Degenerate Factor) max j,k∈[ndeg]\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| ≤ α2η2minσ\n40N2|A|\n(Non-Degenerate Factor) max j,k∈[ndeg]\n|ĝ(xj , aj , x̌j)− ĝ(xk, ak, x̌k)| ≥ 3α2η2minσ\n80N2|A| Theorem 8 (Detecting Degenerate Case). We correctly predict if s′[i] is a degenerate factor or not when using ndeg =\n3N2|A|2 α2η2min\nlog (\n4 δabs\n) and % ≤ α\n2η2minδabs 30N2|A|2 log\n−1 (\n4 δabs\n) , with probability at least\n1− 3δabs.\nProof. The result follows by combining Lemma 16 and Lemma 15, and using the value of a described above. These two results hold with probability at least:\n1− δabs − 2ndeg\n√ ∆(nabs, δabs, |G|)\na − 4 exp\n( −α\n2η2minndeg 3N2|A|2\n)\nSetting the hyperparameters to satisfy the following:\nndeg = 3N2|A|2 α2η2min log\n( 4\nδabs\n) , ∆(nabs, δabs, |G|)−1/2 ≥\n480N4|A|3 α4η4minδabsσ log\n( 4\nδabs\n) ,\ngives a failure probability of at most 3δabs. The later condition can be expressed in terms of % which gives us the desired bounds (see Theorem 7 for definition of %). Lastly, note that setting ndeg this way also satisfies the requirement in Lemma 16. Lastly, note that the resultant bound on % is much stronger than required for Theorem 7. Therefore, we can significantly improve the complexity bounds in the setting where there are no degenerate state factors." }, { "heading": "D.2.3 COMBINING CASE A AND CASE B", "text": "Theorem 8 shows that we can detect degenerate state factors with high probability. If we have a degenerate state factor and we detect it, then correspondence theorem holds trivially. However, if we don’t have degeneracy and we correctly predict it, then we stick our learned decoder and Theorem 7 holds true. These two results allows us to define a bijection between real states and learned states that we explain below.\nBijective Mapping between real and learned states For a given time step h and state bit s[i] = z, we will define ûhiz as the corresponding learned state bit. 
When h and i will be clear from the context then we will express this as ûz . We will use the notation ŝ to denote a learned state at time step h − 1 and ŝ′ to denote learned state at time step h. Let pt(i) = (i1, · · · , il) and s[pt(i)] = w := (w1, w2, · · ·wl), then we define ŵ = (û(h−1)i1w1 · · · û(h−1)ilwl) as the learned state bits corresponding to w. More generally, for a given set K ∈ 2d, we denote the real state factors as s[K] (or s′[K]) and the corresponding learned state factors as ŝ[K] (or ŝ′[K]). We define a mapping θh : {0, 1}d → {0, 1}d from learned state to real state. We will drop the subscript h when the time step is clear from the context. We denote the domain of θh by Ŝh which is a subset of {0, 1}d. Note that every real state may not be reachable at time step h. E.g., maybe our decoder outputs ŝ′ = (0, 0) but that the corresponding real state is not reachable at time step h. Figure 3 visualizes the mapping.\nFor a learned state ŝ we have s = θ(ŝ) if s = (z1, · · · , zd) and ŝ = (uh1z1 , · · · , uhdzd). We would also overload our notation to write w = θ(ŵ) for a given ŝ[K] = ŵ where w = θ(ŝ)[K], whenever factor set K is clear from the context. We call s′ (or s) as reachable if s′ ∈ Sh (or s ∈ Sh−1). Similarly, we call ŝ′ (or ŝ) as reachable if θh(ŝ\n′) (or θh−1(ŝ)) are reachable. For a given set of factorsK, we call w ∈ {0, 1}|K| as reachable for K if there exists a reachable state with factorsK taking on valuew. Similarly, we define ŵ ∈ {0, 1}|K| as reachable for a given K if θ(ŵ) is reachable for K. We use the mapping to state the correspondence theorem for the whole state.\nCorollary 9 (General Case). If ndeg = 3N 2|A|2 α2η2min log ( 4 δabs ) and % ≤ α 2η2minδabs 30N2|A|2 log −1 ( 4 δabs ) holds, then with probability at least 1− 3dδabs, we have: ∀s′ ∈ Sh, P(ŝ′[i] = θ−1(s′)[i] | s′[i]) ≥ 1− %, P(ŝ′ = θ−1(s′) | s′) ≥ 1− d%.\nProof. The first result directly follows from being able to detect if we are in degenerate setting or not and if not, then applying Theorem 7, and if yes then result holds trivially. This holds with probability at least 1− 3δabs from Theorem 8. Applying union bound over all d learned factors gives us success probability across all factors of at least 1− 3dδabs. The second one follows from union bound.\nP(ŝ 6= θ−1(s) | s) = P(∃i : ŝ[i] 6= θ−1(s)[i] | s[i]) ≤ d∑\ni=1\nP(ŝ[i] 6= θ−1(s)[i] | s[i]) ≤ d%.\nAs we noticed before, our bounds can be significantly improved in the case of no degenerate factors. This prevents application of expensive Markov inequality. Therefore, we also state bounds for the special case below. Corollary 10 (Degenerate Factors Absent). If % ∈ ( 0, 12 ) , then with probability at least 1 − dδabs, we have:\n∀s′ ∈ Sh, P(ŝ′[i] = θ−1(s′)[i] | s′[i]) ≥ 1− %, P(ŝ′ = θ−1(s′) | s′) ≥ 1− d%.\nProof. Same as Corollary 9 except we can directly apply Theorem 7 as we don’t need to do any expensive check for a degenerate factor." }, { "heading": "D.3 MODEL ESTIMATION", "text": "Our next goal is to estimate a model T̂h : Ŝh−1 × A → ∆(Ŝh) and latent parent structure p̂th, given roll-in distribution D ∈ ∆(Sh−1), and the learned decoders {φ̂t}t≤h. Recall that our approach estimates the model by count-based estimation. Let D = {(x(k), a(k), x′(k))}nestk=1 be the sample collected for model estimation.\nRecall that we estimate p̂th(i) be using a set of learned factors I that we believe is p̂th(i) and varying a disjoint set of learned factors J . 
If p̂t(i) ⊆ I then we expect the learned model to behave the same\nirrespective of how we vary the factors J . However, if p̂t(i) 6⊆ I then there exists a parent factor that on varying will different values for the learned dynamics. Let K = I ∪ J be the set of all factors in the control group (I) and variable group (J ). We will first analyze the case for a fixed i ∈ Ŝh and control group I and variable group J . For a given v̂ ∈ {0, 1}, ŵ ∈ {0, 1}|K| and a ∈ A, we have P̂D(ŝ′[i] = v̂ | ŝ[K] = ŵ, a) denoting the estimate probability derived from our count based estimation (Algorithm 4, line 3). Let PD(ŝ′[i] = v̂ | ŝ[K] = ŵ, a) be the probabilities that are being estimated. It is important to note that we use subscript D for these notations as the learned states ŝ are not Markovian and K may not contain pt(i), therefore, the estimated probabilities P̂ and expected probabilities P will be dependent on the roll-in distribution D.\nIn order to estimate P̂D(. | ŝh−1[K] = ŵ, a) we want good lower bounds on PD(ŝh−1[K] = ŵ, a) for every a ∈ A and ŵ reachable for K. Our roll-in distribution D only guarantees lower bound on PD(sh−1[K], a). However, we can use IH.2 to bound the desired quantity below. Lemma 17 (Model Estimation Coverage). If % ≤ 12 then for all K ∈ C≤2κ([d]), a ∈ A and ŵ ∈ {0, 1}|K| reachable for K, we have:\nPD(ŝh−1[K] = ŵ, a) ≥ αηmin\n4κN |A|\nProof. We can express PD(ŝh−1[K] = ŵ, a) = 1|A|PD(ŝh−1[K] = ŵ) as actions are taken uniformly. Let sh−1 = θh−1(ŝh−1) and w = sh−1[K]. We bound PD(ŝh−1[K] = ŵ) as shown:\nPD(ŝh−1[K] = ŵ) ≥ PD (ŝh−1[K] = ŵ, sh−1[K] = w) = PD (ŝh−1[K] = ŵ | sh−1[K] = w)PD (sh−1[K] = w) = ∏\nk∈K PD (ŝh−1[k] = ŵk | sh−1[k] = wk)PD (sh−1[K] = w)\n≥ (1− %)2καηmin N ≥ αηmin 4κN ,\nwhere the third step uses the fact that value of learned state ŝh−1[k] is independent of other decoders given the real state bit sh−1[k]. The fourth step uses IH.2 and the fact that we have good coverage over all sets of state factors of size at most 2κ. Last inequality uses |K| ≤ 2κ and % ≤ 12 .\nWe now show that our count-based estimator P̂D converges to PD and derive the rate of convergence. Lemma 18 (Model Estimation Error). Fix δest ∈ (0, 1). Then with probability at least 1− δest for every K ∈ C≤2κ([d]), ŵ ∈ {0, 1}|K| reachable for K, and a ∈ A we have the following: ∑\nv̂∈{0,1}\n∣∣∣P̂D(ŝh[i] = v̂ | ŝh−1[K] = ŵ, a)− PD(ŝh[i] = v̂ | ŝh−1[K] = ŵ, a) ∣∣∣ ≤ 2∆est(nest, δest),\nwhere ∆est(nest, δest) := 12 ( 2κ+5N |A| αηminnest )1/2 ln ( 4e|A|(ed)2κ δest ) .\nProof. We sample nest samples by roll-in at time step h− 1 with distribution D and taking actions uniformly. We first analyze the failure probability for a given K, ŵ, a. Let E(K, ŵ, a) denote the event {ŝh−1[K] = ŵ, ah−1 = a}. If E(K, ŵ, a) occurs in our dataset at least m times for some m ≥ 16 2 ln(1/δ) then from Corollary 15 we have∑\nv̂∈{0,1}\n∣∣∣P̂D(ŝh[i] = v̂ | ŝh−1[K] = ŵ, a)− PD(ŝh[i] = v̂ | ŝh−1[K] = ŵ, a) ∣∣∣ ≤ ,\nwith probability at least 1 − δ. Lemma 17 shows that probability of E(K, ŵ, a) is at least αηmin4κN |A| . Therefore, from Lemma 28 if nest ≥ 2 2κ+1mN |A| αηmin ln ( e δ ) then we get at least m samples of event E(K, ŵ, a) with probability at least 1− δ. Therefore, the total failure probability is at most 2δ: δ due to not getting at least m samples and δ due to Corollary 15 on getting m samples. This holds for every triplet (K, ŵ, a) and Lemma 23 shows that there are at most 2(ed)2κ|A| such triplets. Hence, with application of union bound we get the desired result.\nLemma 19 (Model Approximation Error). 
For any i ∈ [d],K ∈ C≤2κ([d]), s ∈ Sh−1, a ∈ A, s′ ∈ Sh, let ŝ = θ−1h−1(s) and ŝ′ = θ−1h (s′). Then we have:\n|PD(ŝ′[i] | ŝ[K], a)− PD(s′[i] | s[K], a)| ≤ ∆app := 5κ%N\nαηmin .\nProof. We will first bound |PD(s′[i] | ŝ[K], a)− PD(s′[i] | s[K], a)| and then use correspondence result (Corollary 9) to prove the desired result. We start by expressing our conditional probabilities as ratio of joint probabilities.\nPD(s′[i] | ŝ[K], a) = PD(s′[i], ŝ[K], a)\nPD(ŝ[K], a) , PD(s′[i] | s[K], a) = PD(s′[i], s[K], a) PD(s[K], a) .\nFrom Lemma 29 we have:\n|PD(s′[i] | ŝ[K], a)− PD(s′[i] | s[K], a)| ≤ ε+ ε2\nPD(s[K], a) , where (27)\nε1 := |PD(s′[i], ŝ[K], a) − PD(s′[i], s[K], a)| and ε2 := |PD(ŝ[K], a) − PD(s[K], a)|. We bound these two quantities below:\nε1 = ∣∣∣∣∣∣ ∑ sh−1[K] PD(s′[i], ŝ[K], sh−1[K], a)− ∑ ŝh−1[K] PD(s′[i], ŝh−1[K], s[K], a) ∣∣∣∣∣∣\n= ∣∣∣∣∣∣ ∑ sh−1[K] 6=s[K] PD(s′[i], ŝ[K], sh−1[K], a)−\n∑\nŝh−1[K] 6=ŝ[K] PD(s′[i], ŝh−1[K], s[K], a) ∣∣∣∣∣∣\n≤ max \n∑\nsh−1[K]6=s[K] PD(s′[i], ŝ[K], sh−1[K], a) ︸ ︷︷ ︸ Term 1\n, ∑\nŝh−1[K] 6=ŝ[K] PD(s′[i], ŝh−1[K], s[K], a) ︸ ︷︷ ︸ Term 2\n ,\nWhere the first inequality uses |a− b| ≤ max{a, b} for a, b > 0. We bound Term 1 below:\nTerm 1: 1 |A| ∑\nsh−1[K]6=s[K] PD(s′[i] | ŝ[K], sh−1[K], a)P(ŝ[K] | sh−1[K])PD(sh−1[K])\n≤ %|A| ∑\nsh−1[K]6=s[K] PD(s′[i] | ŝ[K], sh−1[K], a)PD(sh−1[K])\n≤ %|A| ∑\nsh−1[K]6=s[K] PD(sh−1[K]) ≤\n%\n|A|\nThe key inequality here is P(ŝ[K] | sh−1[K]) = ∏ k∈K P(ŝ[k] | sh−1[k]) ≤ %, as there exist at least one j ∈ K such that sh−1[j] 6= s[j] and for this j we have P(ŝ[j] | sh−1[j]) ≤ %. We bound Term 2 similarly:\nTerm 2: 1 |A| ∑\nŝh−1[K] 6=ŝ[K] PD(s′[i] | ŝh−1[K], s[K], a)PD(ŝh−1[K] | s[K])PD(s[K])\n≤ 1|A| ∑\nŝh−1[K]6=ŝ[K] PD(ŝh−1[K] | s[K]) =\n1\n|A| {1− PD(ŝ[K] | s[K])}\n≤ 1|A| (1− (1− %) |K|) ≤ 2κ%|A|\nwhere we use PD(ŝ[K] | s[K]) = ∏ k∈K P(ŝ[k] | s[k]) ≥ (1− %)|K| and |K| ≤ 2κ.\nThis gives us ε1 ≤ 2κ%|A| . The proof for ε2 is similar.\nε2 = ∣∣∣∣∣∣ ∑ sh−1[K] PD(ŝ[K], sh−1[K], a)− ∑ ŝh−1[K] PD(ŝh−1[K], s[K], a) ∣∣∣∣∣∣\n= ∣∣∣∣∣∣ ∑ sh−1[K] 6=s[K] PD(ŝ[K], sh−1[K], a)−\n∑\nŝh−1[K] 6=ŝ[K] PD(ŝh−1[K], s[K], a) ∣∣∣∣∣∣\nmax \n∑\nsh−1[K] 6=s[K] PD(ŝ[K], sh−1[K], a) ︸ ︷︷ ︸ Term 3\n, ∑\nŝh−1[K]6=ŝ[K] PD(ŝh−1[K], s[K], a) ︸ ︷︷ ︸ Term 4\n \nWe bound Term 3 below similar to Term 1:\nTerm 3: 1 |A| ∑\nsh−1[K] 6=s[K] PD(ŝ[K]|sh−1[K])PD(sh−1[K])\n≤ %|A| ∑\nsh−1[K]6=s[K] PD(sh−1[K]) ≤\n%\n|A|\nand Term 4 is bounded similar to Term 2 below:\nTerm 4: 1 |A| ∑\nŝh−1[K] 6=ŝ[K] PD(ŝh−1[K] | s[K])PD(s[K])\n≤ 1|A| {1− PD(ŝ[K] | s[K])} ≤ 1|A| { 1− (1− %)|K| } ≤ 2κ%|A|\nThis gives us ε2 ≤ 2κ%|A| . Plugging bounds for ε1 and ε2 in Equation 27 and using PD(s[K], a) = PD(s[K]) |A| ≥ αηmin N |A| gives us:\n|PD(s′[i] | ŝ[K], a)− PD(s′[i] | s[K], a| ≤ 4κ% |A|PD(s[K], a) ≤ 4κ%N αηmin . (28)\nWe can use correspondence result to derive a lower bound:\nPD(ŝ′[i] | ŝ[K], a) ≥ PD(ŝ′[i] | s′[i])PD(s′[i] | ŝ[K], a) ≥ (1− %)PD(s′[i] | ŝ[K], a) ≥ PD(s′[i] | ŝ[K], a)− %\nand an upper bound:\nPD(ŝ′[i] | ŝ[K], a) = PD(ŝ′[i] | s′[i])PD(s′[i] | ŝ[K], a)+ P(ŝ′[i] | 1− s′[i])PD(1− s′[i] | ŝ[K], a)\n≤ PD(s′[i] | ŝ[K], a) + % Combing the lower and upper bounds with Equation 28 gives us:\n|PD(ŝ′[i] | ŝ[K], a)− PD(s′[i] | s[K], a)| ≤ 4κ%N αηmin + % ≤ 5κ%N αηmin .\nwhich is the desired result.\nWe can merge the estimation error and approximation error to generate the total error.\nLemma 20 (K-Model Error). For any i ∈ [d], K ∈ C≤2κ([d]), s ∈ Sh−1, a ∈ A, s′ ∈ Sh, let ŝ = θ−1h−1(s) and ŝ ′ = θ−1h (s ′). 
Then we have:\n∣∣∣P̂D(ŝ′[i] | ŝ[K], a)− PD(s′[i] | s[K], a) ∣∣∣ ≤ ∆est(nest, δest) + ∆app.\nwith probability at least 1− δest.\nProof. Follows trivially by combining the estimation error (Lemma 18) and approximation error (Lemma 19) with application of triangle inequality.\nD.4 DETECTING LATENT PARENT STRUCTURE IN TRANSITION pth\nWe are now ready to analyze the performance of learned parent function p̂th. Let K1,K2 ∈ C≤2κ([2d]) and ŵ1 ∈ {0, 1}|K1|, ŵ2 ∈ {0, 1}|K2|. We will assume ŵ1 is reachable for K1 and ŵ2 is reachable for K2. For convenience we will define the following quantity Ω to measure total variation distance between distributions P̂D(s′[i] | ·, ·) conditioned on setting ŝ[K1] = ŵ1 and ŝ[K2] = ŵ2, and for a fixed action a ∈ A:\nΩ̂ia(K1, ŵ1,K2, ŵ2) := 1\n2\n∑\nv̂∈{0,1}\n∣∣∣P̂D(ŝ′[i] = v̂ | ŝ[K1] = ŵ1, a)− P̂D(ŝ′[i] = v̂ | ŝ[K2] = ŵ2, a) ∣∣∣ .\nWe can compute Ω̂ for every value of i,K1, ŵ1,K2, ŵ2, a in computational time ofO ( (2ed)3κ+3|A| ) . We also define a similar metric for the true distribution for any K1,K2 and v ∈ {0, 1}, w1 ∈ {0, 1}|K1|, w2 ∈ {0, 1}|K2| and a ∈ A:\nΩia(K1, w1,K2, w2) := 1\n2\n∑\nv∈{0,1} |PD(s′[i] = v | s[K1] = w1, a)− PD(s′[i] = v | s[K2] = w2, a)| .\nRecall that [I;J ] denotes concatenation of two ordered sets I and J . We use this notation to state our next result. Lemma 21 (Inclusive Case). Fix i ∈ [d] and I ∈ C≤κ([d]). If pt(i) ⊆ I then for all a ∈ A and û ∈ {0, 1}|I| we get:\nmax J1,J2,ŵ1,ŵ2\nΩ̂ia([I;J1], [û; ŵ1], [I;J2], [û; ŵ2]) ≤ 2∆est(nest, δest) + 2∆app,\nwhere max is taken over J1,J2 ∈ C≤κ([d]), ŵ1 ∈ {0, 1}|J1|, ŵ2 ∈ {0, 1}|J2| such that [û; ŵ1] is reachable for [I;J1], [û; ŵ2] is reachable for [I;J2], and I ∩ J1 = I ∩ J2 = ∅.\nProof. We fix J1,J2, û, ŵ1, ŵ2, a and let K1 = [I;J1], K2 = [I;J2], v = θ(v̂), u = θ(û), w1 = θ(ŵ1), and w2 = θ(ŵ2). As pt(i) ⊆ I, therefore, we have: PD(s′[i] = v | s[K1] = [u;w1], a) = Ti(s′[i] = v | s[I] = u, a) = PD(s′[i] = v | s[K2] = [u;w2], a) Using this result along with Lemma 20 and application of triangle inequality we get: ∣∣∣P̂D(ŝ′[i] = v̂ | ŝ[K1] = [û; ŵ1], a)− P̂D(ŝ′[i] = v̂ | ŝ[K2] = [û; ŵ2], a) ∣∣∣ ≤ 2∆est(nest, δest)+2∆app.\nSumming over v̂, dividing by 2, and using the definition of Ω̂ proves the result.\nThe following is a straightforward corollary of Lemma 21.\nCorollary 11. Fix i ∈ [d] then there exists an I such that for all a ∈ A and û ∈ {0, 1}|I|: max\nJ1,J2,ŵ1,ŵ2 Ω̂ia([I;J1], [û; ŵ1], [I;J2], [û; ŵ2]) ≤ 2∆est(nest, δest) + 2∆app,\nwhere max is taken over J1,J2, ŵ1, ŵ2 satisfy the restrictions stated in Lemma 21.\nProof. Take any I such that pt(i) ⊆ I and apply Lemma 21. Note that we are allowed to pick such an I as |pt(i)| ≤ κ by our assumption.\nRecall that we define p̂t(i) as the solution of the following problem:\np̂t(i) := argmin I max a,û,J1,J2,ŵ1,ŵ2\nΩ̂ia([I;J1], [û; ŵ1], [I;J2], [û; ŵ2]), (29)\nwhere I ∈ C≤κ([d]), a ∈ A, and û,J1,J2, ŵ1, ŵ2 satisfy the restrictions stated in Lemma 21. We are now ready to state our main result for p̂t.\nTheorem 12 (Property of p̂t). For any s ∈ Sh−1, a ∈ A, s′ ∈ Sh, let ŝ = θ−1h−1(s) and ŝ′ = θ−1h (s′). Then the learned parent function p̂t satisfies:\n∀ ∈ [d], ∣∣∣P̂D(ŝ′[i] | ŝ[p̂t(i)], a)− Ti(s′[i] | s[pt(i)], a) ∣∣∣ ≤ 3∆est(nest, δest) + 3∆app.\nProof. Fix i ∈ [d]. Let J = pt(i)− p̂t(i) and K = p̂t(i) ∪ J . 
As pt(i) ⊆ K, therefore, we have: PD(s′[i] | s[K], a) = Ti(s′[i] | s[pt(i)], a)\nCombining this result with Lemma 20 we get: ∣∣∣P̂D(ŝ′[i] | ŝ[K], a)− Ti(s′[i] | s[pt(i)], a) ∣∣∣ ≤ ∆est(nest, δest) + ∆app.\nFrom the definition of p̂t(i) (Equation 29) and Corollary 11 we have: ∣∣∣P̂D(ŝ′[i] | ŝ[p̂t(i); ∅], a)− P̂D(ŝ′[i] | ŝ[p̂t(i);J ], a) ∣∣∣ ≤ 2∆est(nest, δest) + 2∆app.\nNote that we are allowed to use Corollary 11 as ŝ[p̂t(i); ∅] and ŝ[p̂t(i);J ] are both reachable since they are derived from a reachable real state s, |p̂t(i)| ≤ κ, |[p̂t(i);J ]| ≤ |[p̂t(i); pt(i)]| ≤ 2κ, and p̂t(i) ∩ ∅ = ∅ = p̂t(i) ∩ J . Combining the previous two inequalities using triangle inequality completes the proof." }, { "heading": "D.5 BOUND TOTAL VARIATION BETWEEN ESTIMATED MODEL AND TRUE MODEL", "text": "Given the learned transition parent function p̂th we define the transition model as:\nT̂hi ( ŝ′[i] | ŝ[p̂th(i)], a ) = P̂D ( ŝ′[i] | ŝ[p̂th(i)], a ) , T̂h (ŝ ′ | ŝ, a) = d∏\ni=1\nT̂hi ( ŝ′[i] | ŝ[p̂th(i)], a ) .\nFrom Theorem 12 we have for any i ∈ [d], ŝ ∈ Sh−1, a ∈ A, s′ ∈ Sh, and ŝ = θ−1(s), ŝ′ = θ−1(s′): ∣∣∣T̂hi(ŝ′[i] | ŝ[p̂t(i)], a)− T (s′[i] | s[pt(i)], a) ∣∣∣ ≤ 3∆est(nest, δest) + 3∆app.\nTransition Closure. A subtle point remains before we prove the model error between T̂h and T . Theorem 12 only states guarantee for those ŝ that are inverse of a reachable state s. However, as stated before, due to decoder error we can reach a state ŝ which does not have a corresponding reachable state, i.e. θ(ŝ) 6∈ Sh (see Figure 3). We cannot get model guarantees for these unreachable states ŝ since we may reach them with arbitrarily small probability. However, we can still derive model error if we can simply define the real transition probabilities in terms of the learned probabilities for these states. This will not cause a problem since the real model will never reach these states. We start by defining the closure of the transition model T ◦ for time step h as:\n∀ŝ ∈ Ŝh−1, a ∈ A, ŝ′ ∈ Sh, T ◦h (θ(ŝ′) | θ(ŝ), a) = { T (θ(ŝ′) | θ(ŝ), a), if θ(ŝ) ∈ Sh−1 T̂h(ŝ ′ | ŝ, a), otherwise\nWe also define the state space domain of T ◦h as S◦h−1 = {θh−1(ŝ) | ∀ŝ ∈ Ŝh−1}. It is easy to see that θh−1 represents a bijection between Ŝh−1 and S◦h−1. We will derive our guarantees with respect to T ◦ which will allow us to define a bijection between the domain of T̂ and T ◦, and use important lemmas from the literature. The next result shows that our use of T ◦ is harmless as it assigns the same probability as T to any event.\nLemma 22 (Closure Result). Let T ◦ be the closure of transition model with respect to some learned transition model. Then for any policy π ∈ Π and any event E which is a function of an episode sampled using π, we have Pπ(E ;T ) = Pπ(E ;T ◦), where Pπ(E ;T ′) denotes the probability of event E when sampling from π and using transition model T ′.\nProof. The proof follows form observing that when using T ◦ we will never reach a state s 6∈ Sh−1 for any h − 1 by definition of Sh−1. From definition of T ◦ this means that both T ◦ and T will generate the same range of episodes sampled from π and will assign the same probabilities to them. As E is a function of an episode, therefore, its probability remains unchanged.\nWith the definition of closure, we are now ready to state our last result in this section, which bounds the total variation between the estimated model and the transition closure under the bijection map θ.\nTheorem 13 (Model Error). 
For any ŝ ∈ Ŝh−1 and a ∈ A we have: ∑\nŝ′∈Ŝh\n∣∣∣T̂h(ŝ′ | ŝ, a)− T ◦h (θ(ŝ′) | θ(ŝ), a) ∣∣∣ ≤ 6d (∆est(nest, δest) + ∆app) .\nProof. If θ(ŝ) 6∈ Sh−1 then by definition T ◦ the bound holds trivially. Therefore, we focus on θ(ŝ) ∈ Sh−1 for which T ◦ = T . We define the quantity for every j ∈ [d]:\nSj = ∑\nŝ′[j]···ŝ′[d]∈{0,1}\n∣∣∣∣∣∣ d∏\ni=j\nT̂hi(ŝ ′[i] | ŝ[p̂t(i)], a)−\nd∏\ni=j\nTi(θ(ŝ ′)[i] | θ(ŝ)[pt(i)], a) ∣∣∣∣∣∣ (30)\nWe claim that Sj ≤ 6(d− j + 1)(∆est + ∆app) for every j ∈ [d]. For base case we have:\nSd = ∑\nŝ′[d]∈{0,1}\n∣∣∣T̂hd(ŝ′[d] | ŝ[p̂t(d)], a)− T (θ(ŝ′)[d] | θ(ŝ)[pt(d)], a) ∣∣∣ ≤ 6(∆est + ∆app),\nfrom Theorem 12. We will assume the induction hypothesis to be true for Sk for all k > j. We handle the inductive below with triangle inequality:\nSj ≤ ∑\nŝ′[j]···ŝ′[d]∈{0,1}\nd∏\ni=j+1\nT̂hi(ŝ ′[i] | ŝ[p̂t(i)], a)|T̂hj(ŝ′[j] | ŝ[p̂t(j)], a)−\nT (θ(ŝ′)[j] | θ(ŝ)[pt(j)], a)|+ ∑\nŝ′[j]···ŝ′[d]∈{0,1} T (θ(ŝ′)[j] | θ(ŝ)[pt(j)], a)|\nd∏\ni=j+1\nT̂hi(ŝ ′[i] | ŝ[p̂t(i)], a)−\nd∏\ni=j+1\nT (θ(ŝ′)[i] | θ(ŝ)[pt(i)], a)|\nThe first term is equivalent to ∑ ŝ′[j]∈{0,1} |T̂hj(ŝ′[j] | ŝ[p̂t(j)], a) − T (θ(ŝ′)[j] | θ(ŝ)[pt(j)], a)| which is bounded by 6(∆est + ∆app) following base case analysis. The second term is equivalent to Sj+1 which is bounded by 6(d − j)(∆est + ∆app) by induction hypothesis. Combining these two bounds proves the induction hypothesis and the result then follows from bound for S1." }, { "heading": "D.6 LEARNING A POLICY COVER", "text": "In this section we show how we learn the policy cover. We start by defining some notation.\nTwo MDPs. After time step h, we can define two Markov Decision Processes (MDPs) at this time Mh and M̂h. Mh is the true MDP consists of state space (S◦1 , · · · ,S◦h), action space A, horizon h, a deterministic start state s1 = {0}d, and transition function T ◦t : S◦t−1 × A → ∆(St) for all t ∈ [h]. Recall that the set Sh ⊆ {0, 1}d denote states which are reachable at time step h, and the set\nSt ⊆ S◦t ⊆ {0, 1}d represents the closure of state space Sh based on the learned state space Ŝt. For any t ∈ [h], s ∈ St and K ∈ C≤2κ([d]) we know supπ∈ΠNS Pπ(st[K] = s[K]) ≥ ηmin.\nThe second MDP M̂h consists of the learned state space (Ŝ1, · · · , Ŝh), action space A, horizon h, a deterministic start state ŝ1 = {0}d, and transition function T̂t : Ŝt−1 ×A → ∆(Ŝt). For every t ∈ [h], we have θt : Ŝt → S◦t represent a bijection from the learned state space to the closure of the set of reachable states at time step t. The learned decoder φ̂t predict θt(s) given s ∈ St with high probability for all t < h by IH.2 and for t = h due to Corollary 9.\nLastly, the transition model T ◦t and Tt are close in L1 distance for t < h due to IH.3 and for t = h due to Theorem 13.\nThese results enable us to utilize the analysis of Du et al. (2019) for learning to learn a policy cover.\nLet ϕ̂ : Ŝ → A denote a non-stationary deterministic policy that operates on the learned state space. Similarly, ϕ : S◦ → A denote a non-stationary deterministic policy that operates on the real state. We denote ϕ̂ = ϕ ◦ θ if for every ŝ ∈ Ŝ, ϕ̂(ŝ) = ϕ(θ(ŝ)). Similarly, we denote ϕ = ϕ̂ ◦ θ−1 if for every s ∈ S◦, ϕ(s) = ϕ̂(θ−1(s)). Let π : X → A be a non-stationary deterministic policy operating on the observation space. We say π = ϕ̂ ◦ φ̂ if for every x ∈ X we have π(x) = ϕ̂(φ̂(x)). Similarly, we define π = ϕ ◦ φ? if for every x ∈ X we have π(x) = ϕ(φ?(x)). We will use Pπ[E ] to denote probability of an event E when actions are taken according to policy π : X → A. 
We will use Pϕ[E ] to denote the probability of event E when we operate directly on the real state and take actions using ϕ. Similarly, we define Pϕ̂[E ] to denote the probability of event Ê when we operate on the learned state space. Lastly, let P̂ϕ̂[E ] denote probability of an event E when actions are taken according to policy ϕ̂ operating directly over the latent state and following our estimated transition dynamics T̂ : Ŝ × A → ∆(Ŝ). Recall that our planner will be optimizing with respect to P̂ϕ̂[E ]. Theorem 14 (Planner Guarantee). Fix ∆pl ≥ 0, h ∈ [H]. Let I ∈ C≤2κ([d]) and ŵ ∈ {0, 1}|I|. We define a reward function R : Ŝ → [0, 1] where R(ŝ) := 1{τ(ŝ) = h ∧ ŝ[I] = ŵ}. Let ϕ̂R = planner(T̂ , R, h,∆pl) be the policy learned by the planner. Let π̂ := ϕ̂R ◦ φ̂ then:\nPπ̂ (sh[I] = θ(ŵ)) ≥ η(sh[I] = θ(w))− 2d%H − 12dH∆est − 12dH∆app −∆pl, (31) further, we have:\nP̂ϕ̂R({ŝh[I] = ŵ}) ≥ η(sh[I] = θ(w))− 6dH∆est − 6dH∆app −∆pl, (32) and if {sh[I] = θ(ŵ)} is unreachable, then\nP̂ϕ̂R({ŝh[I] = ŵ}) ≤ 6dH∆est + 6dH∆app. (33)\nProof. We define two events E := {sh[I] = θ(ŵ)} and Ê := {ŝh[I] = ŵ}. We define a policy ϕR = ϕ̂R ◦ θ−1 where for every s ∈ S we have ϕR(s) = ϕ̂R(θ−1(s)). We also define π̄ : X → A as π̄(x) = ϕR ◦ φ?(x). If for a given x ∈ X and φ?(x) = s we have φ̂(x) = θ−1(s) then π̄(x) = ϕ̂R(φ̂(x)) = π̂(x). Hence, every time our decoder outputs the correct mapped state θ(s), policies π̄ and π̂ take the same action. We use the result of Du et al. (2019) stated in Lemma 30 (setting ε set to d% using Corollary 9) to write:\n|Pπ̂(E)− Pπ̄(E)| = |Pπ̂(E)− PϕR(E)| ≤ 2d%H (34)\nLet ϕ : S◦ → A be any policy on real state space and let ϕ̂ : Ŝ → A be the induced policy on learned state space given by ϕ̂(ŝ) = ϕ ◦ θ(ŝ) = ϕ(θ(ŝ)) for any ŝ ∈ Ŝ. We showed in Theorem 13 that T̂ and T have small L1 distance under the bijection θ. Therefore, from the perturbation result of Du et al. (2019) stated in Lemma 31 we have:\n∑\nsh∈S◦h\n∣∣∣P̂ϕ̂(θ−1(ŝh))− Pϕ(sh) ∣∣∣ ≤ hε ≤ Hε,\nwhere ε := 6d (∆est(nest, δest) + ∆app) due to Theorem 13. As {sh[I] = θ(ŵ)} ⇔ {ŝh[I] = ŵ}, therefore, we can derive the following bound:\n∣∣∣Pϕ(E)− P̂ϕ̂(Ê) ∣∣∣ = ∣∣∣∣∣∣ ∑\nsh∈S◦h;sh[I]=θ(ŵ) Pϕ(sh)−\n∑\nsh∈S◦h;sh[I]=θ(ŵ) P̂ϕ̂(θ−1(sh))\n∣∣∣∣∣∣\n≤ ∑\nsh∈S◦h\n∣∣∣Pϕ(sh)− P̂ϕ̂(θ−1(sh)) ∣∣∣ ≤ Hε (35)\nLet ϕ? = arg maxPϕ[E ] be the optimal policy to satisfy {sh[I] = θ(w)}. Note that ϕ? is also the latent policy that optimizes the reward function R on the real dynamics. Let ϕ̂? = ϕ? ◦ θ−1 be the induced policy on learned states. We now bound the desired quantity as shown:\nPπ̂(E) ≥ PϕR [E ]− 2d%H (using Equation 34) ≥ P̂ϕ̂R(Ê)− 2d%H −Hε (using Equation 35) ≥ P̂ϕ̂?(Ê)− 2d%H −Hε−∆pl (ϕR is ∆pl-optimal on T̂ ) ≥ Pϕ?(E)− 2d%H − 2Hε−∆pl (using Equation 35) = η(sh[I] = θ(w))− 2d%H − 2Hε−∆pl.\nThis proves Equation 31 and Equation 32. Note that our calculations above show:\nP̂ϕ̂R(Ê) ≤ PϕR [E ] +Hε If {sh[I] = θ(ŵ)} is unreachable then PϕR [E ] = 0. Plugging this in the above equation proves Equation 33 and completes the proof.\nD.7 WRAPPING UP THE PROOF FOR FactoRL\nWe are almost done. All we need to do is to make sure is to set the hyperparameters and verify each induction hypothesis. We first set hyperparameters.\nSetting Hyperparameters. Let {sh[I] = θ(ŵ)} be reachable for some I ∈ C≤2κ([d]) and ŵ ∈ {0, 1}|I|. Then applying Theorem 14 and using the definition of ηmin we have: Pπ̂ (sh[I] = θ(w)) ≥ ηmin − 2d%H − 12dH∆est − 12dH∆app −∆pl As we want the right hand side to be at least αηmin we divide the error equally between the three terms. 
This gives us:\n(Planning Error) ∆pl ≤ (1−α)ηmin/4 (36)\n(Model Approximation Error) ∆app ≤ (1−α)ηmin/48dH ⇒ % ≤ α(1− α)η2min\n240κdHN (37)\n(Model Estimation Error) ∆est ≤ (1−α)ηmin/48dH\n⇒ nest ≥ 18432 2κd2H2N |A| α(1− α)2η3min\nln2 (\n4e|A|(ed)2κ δest\n) (38)\n(Decoding Error) % ≤ (1−α)ηmin/8dH (39)\nThe model approximation error places a more stringent requirement on % than the decoding error for planning. However, throughout the proof for FactoRL in this section, we made other requirements on our hyperparameters. For % this is given by min{β2min/1200, 1/2} = β2min/1200 by combining constraints in Lemma 8 and Theorem 7, and an additional constraint for detecting non-degenerate factors stated in Corollary 9. Due to the inefficiency of the non-degenerate factors detection, we state results separately for the two cases:\n% ≤ min { β2min 1200 , α(1− α)η2min 240κdHN } (no non-degenerate factor) % ≤ min { β2min 1200 , α(1− α)η2min 240κdHN , α2η2minδabs 30N2|A|2 log −1 ( 4\nδabs\n)} (general case)\nUsing the definition of % from Theorem 7, we get a value of nabs for non-degenerate factor (Equation 40) and general case (Equation 41) given below:\nnabs ≥ 38402N4|A|2 α4η4minσ 2 ln ( |G| δabs ) max { κ2d2H2N2 α2(1− α)2η4min , 25 β4min } (40) nabs ≥ 38402N4|A|2 α4η4minσ 2 ln ( |G| δabs ) max { κ2d2H2N2 α2(1− α)2η4min , 25 β4min , N4|A|4 α4η4minδ 2 abs ln2 ( 4 δabs )} (41)\nRecall that for detecting degenerate factors we collect ndeg samples. Corollary 9 gives value of this hyperparameter as\nndeg = 3N2|A|2 α2η2min log\n( 4\nδabs\n) ,\nwhich also satisfies the condition in Lemma 16. Lastly, Theorem 3 gives number of samples for independence testing nind and rejection sampling frequency k as:\nnind ≥ O ( 1\nβ4min ln m2|A||F|(2ed)2κ+1 δind\n) , k ≥ 8\nηmin ln\n( 30\nβmin\n)\nFailure probabilities for a single timestep are bounded by δind due to identification of emission structure (Theorem 3), 3dδabs due to decoding (Corollary 9), and δest due to model estimation (Lemma 17). The total failure probability using union bound for a single step is given by δind + 3dδabs + δest, and for the whole algorithm is given by δindH + 3dδabsH + δestH . Binding δindH 7→ δ/3, 3dδabsH 7→ δ/3, δestH 7→ δ/3, gives us total failure probability of δ and the right value of hyperparameters.\nSample complexity of FactoRL is at most kHnind +Hnabs +Hndeg +Hnest episodes which is order of:\npoly { d16κ, |A|, H, 1\nηmin ,\n1 δ , 1 βmin , 1 σ , lnm, ln |F|, ln |G|)\n} ,\nwhere use the fact that N = |Ψh−1| can be at most 2(ed)2κ from Lemma 23. Note that if we did not have to apply the expensive degeneracy detection step, then we would get logarithmic dependence on 1/δabs. Cheaper ways of detecting degeneracy can, therefore, significantly improve the sample complexity.\nWe have not attempted to optimize the degree and exponent in the sample complexity above.\nFor our choice of two hyperparameters nest and nabs, we can bound the model error and decoding failure by:\n(Model Error) 6d(∆mod + ∆app) ≤ (1− α)ηmin 4H , (Decoding Failure) % ≤ α(1− α)η 2 min 240κdHN .\nVerifying Induction Hypothesis. Finally, we verify the different induction hypothesis below.\n1. We already verified IH.1 with Theorem 3. We learn a ĉhh that is equivalent to chh upto label permutation.\n2. We already verified IH.2 with Corollary 9. Given a real state s ∈ Sh, our decoder outputs the corresponding learned state with high probability. We also derived the form of %.\n3. We already verified IH.3 with Theorem 13. We also derived the form of ∆est and ∆app. 4. 
Lastly, Theorem 14 and our subsequent calculations for hyperparameter show that Ψh is\nan α-policy cover of Sh and that the size of Ψh is at most 2(ed)2κ from Lemma 23. Lastly, for all reachable factor values we get the value of learned policy as at least (1+α)ηmin/2 using Equation 32 and our choice of hyperparameter values. Similarly, from Equation 33 we get the value of learned policy for all unreachable factor values as at most (1−α)ηmin/4. This allows us to filter all unreachable factor values. In the main paper, we focus on the value of α = 1/2, which explains why on Algorithm 1, line 8 we only keep those policies with value at least (1+α)ηmin/2 = 3ηmin/4. This verifies IH.4.\nThis completes the analysis for FactoRL." }, { "heading": "E SUPPORTING RESULT", "text": "Lemma 23 (Assignment Counting Lemma). For a given k, d ∈ N and k ≤ d, the cardinality of the set {(K, u) | K ∈ C≤k([d]), u ∈ {0, 1}|K|} is bounded by 2(ed)k.\nProof. Assume k ≥ 2. The cardinality of this set is given by∑ki=0 ( d i ) 2i which can be bounded as shown below: k∑\ni=0\n( d\ni\n) 2i = 1 + 2d+ k∑\ni=2\n( d\ni\n) 2i ≤ 1 + 2d+ k∑\ni=2\n( ed\ni\n)i 2i ≤ 1 + 2d+ k∑\ni=2\n(ed) i <\nk∑\ni=0\n(ed) i .\nThe first inequality here uses the well-known bound for binomial coefficients ( n i ) ≤ ( ed i )i for any n, i ∈ N and i ≤ n. Further bounding the above result using ed− 1 ≥ ed/2 gives us: k∑\ni=0\n(ed) i ≤ (ed)\nk+1\ned− 1 ≤ 2(ed) k.\nThe proof is completed by checking that inequality holds for k < 2.\nLemma 24 (Lemma H.1 in Du et al. (2019)). Let u, v ∈ Rd+ with ‖u‖1 = ‖v‖1 = 1 and ‖u−v‖1 ≥ ε. Then for any α > 0 we have ‖αu− u‖1 ≥ ε2 . Lemma 25 (Chernoff Bound). Let q be the probability of an event occurring. Then given n iid samples with n ≥ 1q , the probability that the event occurred at least once is at least 1− 2 exp( −qn 3 ).\nProof. Let Xi be a 0-1 indicator denoting if the event occurred or not, and let X = ∑n i=1Xi. We have E[Xi] = q and E[X] = qn. Let t = 1− 1/qn. We will assume that qn > 1 and so t ∈ (0, 1). Then the probability that the event never occurs is bounded by:\nP(X < 1) = P(X < (1− t)qn) ≤ exp (−qnt2\n3\n) < 2 exp { −qn\n3\n} .\nLemma 26 (Hoeffding’s Inequality). LetX1, X2, · · · , Xn be independent random variables bounded by the interval [0, 1]. Let empirical mean of these random variables be X = 1n ∑n i=1Xn, then for any t > 0 we have: P( ∣∣X − E[X]\n∣∣ ≥ t) ≤ 2 exp(−2nt2). Lemma 27 (Theorem 2.1 of Weissman et al. (2003)). Let P be a probability distribution over a discrete set of size a. Let Xn = X1, X2, · · · , Xn be independent identical distributed random variables distributed according to P . Let P̂Xn be the empirical probability distribution estimated from sample set Xn. Then for all > 0:\nP(‖P − PXn‖1 ≥ ) ≤ (2a − 2) exp ( −n 2\n8\n) .\nThe next result is a direct corollary of Lemma 27. Corollary 15. For any m ≥ 8a 2 ln (1/δ) samples, we have ‖P − PXn‖1 < with probability at least 1− δ. Lemma 28. Let X1, X2, · · · , Xn be 0-1 independent identically distributed random variables with mean µ. Let X = ∑n i=1Xi. Fix m ∈ N and δ ∈ (0, 1). If n ≥ 2mµ ln ( e δ ) then P(X < m) ≤ δ.\nProof. This is a standard Chernoff bound argument. We have E[X] = nµ. Assuming n ≥ m/µ then from multiplicative Chernoff bound we have: P(X < m) = P ( X ≤ { 1− { 1− m\nnµ\n}} nµ ) ≤ exp ( −nµ\n2\n{ 1− m\nnµ\n}2) ≤ exp ( m− nµ\n2\n)\nSetting right hand side equal to δ and solving gives us n ≥ 2µ ( m+ ln( 1δ ) ) which is satisfied\nwhenever n ≥ 2mµ ln ( e δ ) .\nLemma 29 (Lemma H.3 of Du et al. (2019)). 
For any a, b, c, d > 0 with a ≤ b and c ≤ d we have: ∣∣∣a b − c d ∣∣∣ ≤ |a− c|+ |b− d| max{b, d}\nThe next Lemma is borrowed from Du et al. (2019). They state their Lemma for a specific event (E = α(ŝ) in their notation) but this choice of event is not important and their proof holds for any event.\nLemma 30 (Lemma G.5 of Du et al. (2019)). Let S = (S1, · · · ,SH) and Ŝ = (Ŝ1, · · · , ŜH) be the real and learned state space. We assume access to a decoder φ̂ : X → Ŝ and let φ? : X → S be the oracle decoder. Let θh : Ŝh → Sh be a bijection for every h ∈ [H] and θ : Ŝ → S where θ(ŝ) = θh(ŝ) for ŝ ∈ Ŝh. For any h ∈ [H] and sh ∈ Sh, we assume P(ŝh = θ−1h (sh) | sh) ≥ 1− ε, i.e., given the real state sh, our decoder (φ̂) will map it to θ−1h (sh) with probability at least 1− ε.\nLet ϕ : S → A be a deterministic policy on the real state space and ϕ̂ : Ŝ → A be the induced policy given by ϕ̂(ŝ) = ϕ(θ(ŝ)) for every ŝ ∈ Ŝ. Let π, π̂ : X → A where π(x) = ϕ(φ?(x)) and π̂(x) = ϕ̂(φ̂(x)) for every x ∈ X . For every random event E we have:\n|Pπ(E)− Pπ̂(E)| ≤ 2εH\nLemma 31 (Lemma H.2. of Du et al. (2019)). Let there be two tabular MDPsM and M̂. Let S = (S1, · · · ,SH) be the state space ofM and Ŝ = (Ŝ1, · · · , ŜH) be the state space of M̂. Both MDPs have a A be the action space of both MDPs and horizon of H . For every h ∈ [H], there exists a bijection θh : Ŝh → Sh. Let T : S ×A → ∆(S) and T̂ : Ŝ × A → ∆(Ŝ) be transition dynamics forM and M̂ satisfying:\n∀h ∈ [H], a ∈ A, ŝ ∈ Ŝ, ∑\nŝ∼Ŝh\n∣∣∣Th(θ(ŝ′) | θ(ŝ), a)− T̂h(ŝ′ | ŝ, a) ∣∣∣ ≤ ε\nLet ϕ : S → A be a policy forM. Let ϕ̂ : Ŝ → A be the induced policy on M̂ such that for any ŝ ∈ Ŝ we have ϕ̂(ŝ) = ϕ(θ(ŝ)). Then for any h ∈ [H] we have:\n∑\nsh∈Sh\n∣∣∣P̂ϕ̂(θ−1(ŝh))− Pϕ(sh) ∣∣∣ ≤ hε" }, { "heading": "F EXPERIMENT DETAILS", "text": "We provide details for our proof of concept experiment below.\nModeling Details. We model F , used for performing independence test, using a single-layer feedforward network θF with Leaky ReLu non-linearity (Maas et al., 2013) and a softmax output layer. Give a pair of atoms x[u] and x[v], we concatenate these atoms and map it to a probability distribution over {0, 1} by applying θF . We implement the model class G for learning state decoder following suggestion of Misra et al. (2020). Recall that a function in G maps a transition (x, a, x̌) ∈ X ×A×X ? to a value in [0, 1]. We first map x and x̌ to vectors v1 and v2 respectively, using two separate linear layers. We map the action a to its one-hot vector representation 1a. We map the vector v2 to a probability distribution using the Gumbel-softmax trick (Jang et al., 2016), by computing qi ∝ exp(v2[i] + ϑi) for all i ∈ {1, 2}, where ϑi is sampled independently from the Gumbel distribution. We concatenate the vectors v1,1a and q and map it to a probability distribution over {0, 1}, through a single layer feed-forward network θG with Leaky ReLu non-linearity. We recover a decoder φ from the model that maps a set of atoms x̌ to φ(x̌) = arg maxi∈{0,1} qi+1.\nLearning Details. We train the two models using cross-entropy loss. 
Formally, given a dataset Dind = {(xi[u], xi[v], yi)}nindi=1 for performing independence testing, and a dataset Dabs = {(xi, ai, x̌i, yi)}nabsi=1 for learning a decoder, we optimize the model by minimizing the cross-entropy loss as shown below:\nf̂ = arg max f∈F\n1\nnind\nnind∑\ni=1\nln f(y | xi[u], xi[v]), ĝ = arg max g∈G\n1\nnabs\nnabs∑\ni=1\nln g(y | xi, ai, x̌i).\nHere we overload our notation to allow the output of models to be distribution over {0, 1} rather than a scalar value in [0, 1] as we assumed before. This is in sync with how we implement these model class and allows us to conveniently perform cross-entropy loss minimization.\nPlanner Details. We use a simple planner based on approximate dynamic programming. Given model estimate, reward function and a set of visited learned states, we perform dynamic programming to compute optimal Q-values for the set of visited states. We assume the Q-values for non-visited states to be 0. This allows us to compute Q-values in a computationally-efficient manner. Later, if the agent visits an unvisited state, then it simply takes random action.\nHyperparameters. We set the hidden dimension of θF to 10 and that of θG to 56. We set the threshold c on held-out log-loss value, when performing independence test to be 0.65. For reference, if one uses a random uniform classifier then its performance is − ln(0.5) ≈ 0.69. We train both models for 10 epochs using Adam optimization with learning rate of 0.001, and a batch size of 32. We remove 0.2% of the training data and use it as a validation set. We evaluate on the validation set after every epoch, and use the model with the best performance on the validation set. We used PyTorch 1.6 to develop the code and used default initialization scheme for all parameters." } ]
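As a concrete illustration of the modeling and learning details above, here is a minimal PyTorch sketch of the decoder model class G and one cross-entropy training step. The hidden width of 56, the Adam learning rate of 0.001, and the batch size of 32 follow the hyperparameters stated above; the input dimensions, the Gumbel-softmax temperature, and the deterministic argmax used at decode time are assumptions rather than the authors' released code.

```python
# Sketch of the model class G from Appendix F: linear encoders for x and the child atoms,
# a one-hot action, a 2-way Gumbel-softmax bottleneck, and a LeakyReLU head over {0, 1}.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderModelG(nn.Module):
    def __init__(self, x_dim, xc_dim, num_actions, hidden_dim=56):
        super().__init__()
        self.enc_x = nn.Linear(x_dim, hidden_dim)   # x -> v1
        self.enc_xc = nn.Linear(xc_dim, 2)          # x-check -> v2 (two logits)
        self.num_actions = num_actions
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + num_actions + 2, hidden_dim),
            nn.LeakyReLU(),
            nn.Linear(hidden_dim, 2),               # distribution over y in {0, 1}
        )

    def forward(self, x, a, xc, tau=1.0):
        v1 = self.enc_x(x)
        one_hot_a = F.one_hot(a, num_classes=self.num_actions).float()
        # q_i proportional to exp(v2[i] + Gumbel noise), as in the Gumbel-softmax trick
        q = F.gumbel_softmax(self.enc_xc(xc), tau=tau, hard=False)
        return self.head(torch.cat([v1, one_hot_a, q], dim=-1)), q

    @torch.no_grad()
    def decode(self, xc):
        # phi(x-check): deterministic argmax over the two components of v2
        # (Gumbel noise omitted at decode time -- an assumption).
        return self.enc_xc(xc).argmax(dim=-1)

# One cross-entropy training step on a toy batch from D_abs = {(x_i, a_i, xc_i, y_i)}.
model = DecoderModelG(x_dim=20, xc_dim=4, num_actions=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, a = torch.randn(32, 20), torch.randint(0, 3, (32,))
xc, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
logits, _ = model(x, a, xc)
loss = F.cross_entropy(logits, y)  # maximizes sum_i log g(y_i | x_i, a_i, xc_i)
opt.zero_grad(); loss.backward(); opt.step()
```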
2021
PROVABLE RICH OBSERVATION REINFORCEMENT LEARNING WITH COMBINATORIAL LATENT STATES
SP:5908636440ae0162f1bf98b6e7b8969cc163f9a6
[ "Motivated by the observation that prevalent metrics (Inception Score, Frechet Inception Distance) used to assess the quality of samples obtained from generative models are gameable (due to either the metric not correlating well with visually assessed sample quality or the metric being susceptible to training sample memorization), the authors conduct a large scale “controlled” study to assess the gameability of said metrics. The authors conducted a competition and subsequently analyzed how approaches tend to cheat so as to obtain higher FID scores. Furthermore, to assess the extent of memorization w.r.t. the FID score, the authors propose a new metric — Memorization-Informed Frechet Inception Distance (MiFID) — which takes into account sample memorization w.r.t. a reference set. The authors conclude on a few notable observations — (1) unintentional memorization in generative models is a serious and prevalent issue; (2) the choice of latent space used to compute FID based scores can make a significant difference." ]
Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric. In this work, we critically evaluate the gameability of such metrics by running a competition that ultimately resulted in participants attempting to cheat. Our competition received over 11000 submitted models and allowed us to investigate both intentional and unintentional memorization. To stop intentional memorization, we propose the “Memorization-Informed Fréchet Inception Distance” (MiFID) as a new memorization-aware metric and design benchmark procedures to ensure that winning submissions made genuine improvements in perceptual quality. Furthermore, we manually inspect the code for the 1000 top-performing models to understand and label different forms of memorization. The inspection reveals that unintentional memorization is a serious and common issue in popular generative models. The generated images and our memorization labels of those models as well as code to compute MiFID are released to facilitate future studies on benchmarking generative models.
[]
[ { "authors": [ "Shane Barratt", "Rishi Sharma" ], "title": "A note on the inception score", "venue": "arXiv preprint arXiv:1801.01973,", "year": 2018 }, { "authors": [ "Ali Borji" ], "title": "Pros and cons of gan evaluation measures", "venue": "Computer Vision and Image Understanding,", "year": 2019 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "arXiv preprint arXiv:1605.09782,", "year": 2016 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Olivier Mastropietro", "Alex Lamb", "Martin Arjovsky", "Aaron Courville" ], "title": "Adversarially learned inference", "venue": "arXiv preprint arXiv:1606.00704,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Colin Raffel", "Luke Metz" ], "title": "Towards gan benchmarks which require generalization", "venue": null, "year": 2018 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A. 
Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Pavel Izmailov", "Polina Kirichenko", "Marc Finzi", "Andrew Gordon Wilson" ], "title": "Semi-supervised learning with normalizing flows", "venue": "arXiv preprint arXiv:1912.13025,", "year": 2019 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "arXiv preprint arXiv:1912.04958,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang", "Wenzhe Shi" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Mario Lucic", "Karol Kurach", "Marcin Michalski", "Sylvain Gelly", "Olivier Bousquet" ], "title": "Are gans created equal? 
a large-scale study, 2017", "venue": null, "year": 2017 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Valentin Liévin", "Ole Winther" ], "title": "Biva: A very deep hierarchy of latent variables for generative modeling", "venue": null, "year": 1902 }, { "authors": [ "Jacob Menick", "Nal Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "arXiv preprint arXiv:1812.01608,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Augustus Odena" ], "title": "Semi-supervised learning with generative adversarial networks", "venue": "arXiv preprint arXiv:1606.01583,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": null, "year": 2015 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "arXiv preprint arXiv:1906.00446,", "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S. 
Huang" ], "title": "Free-form image inpainting with gated convolution", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A. Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Shizhan Zhu", "Raquel Urtasun", "Sanja Fidler", "Dahua Lin", "Chen Change Loy" ], "title": "Be your own prada: Fashion synthesis with structural coherence", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": null, "text": "Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric. In this work, we critically evaluate the gameability of such metrics by running a competition that ultimately resulted in participants attempting to cheat. Our competition received over 11000 submitted models and allowed us to investigate both intentional and unintentional memorization. To stop intentional memorization, we propose the “Memorization-Informed Fréchet Inception Distance” (MiFID) as a new memorization-aware metric and design benchmark procedures to ensure that winning submissions made genuine improvements in perceptual quality. Furthermore, we manually inspect the code for the 1000 top-performing models to understand and label different forms of memorization. The inspection reveals that unintentional memorization is a serious and common issue in popular generative models. The generated images and our memorization labels of those models as well as code to compute MiFID are released to facilitate future studies on benchmarking generative models." }, { "heading": "1 INTRODUCTION", "text": "Recent work on generative models for natural images has produced huge improvements in image quality, with some models producing samples that can be indistinguishable from real images (Karras et al., 2017; 2019a;b; Brock et al., 2018; Kingma & Dhariwal, 2018; Maaløe et al., 2019; Menick & Kalchbrenner, 2018; Razavi et al., 2019). Improved sample quality is important for tasks like super-resolution (Ledig et al., 2017) and inpainting (Yu et al., 2019), as well as creative applications (Park et al., 2019; Isola et al., 2017; Zhu et al., 2017a;b). These developments have also led to useful algorithmic advances on other downstream tasks such as semi-supervised learning (Kingma et al., 2014; Odena, 2016; Salimans et al., 2016; Izmailov et al., 2019) or representation learning (Dumoulin et al., 2016; Donahue et al., 2016; Donahue & Simonyan, 2019).\nModern generative models utilize a variety of underlying frameworks, including autoregressive models (Oord et al., 2016), Generative Adversarial Networks (GANs; Goodfellow et al., 2014), flow-based models (Dinh et al., 2014; Rezende & Mohamed, 2015), and Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014). This diversity of approaches, combined with the philosophical nature of evaluating generative performance, has prompted the development of heuristically-motivated metrics designed to measure the perceptual quality of generated samples such as the Inception Score (IS; Salimans et al., 2016) or the Fréchet Inception Distance (FID; Heusel et al., 2017). These metrics are used in a benchmarking procedure where “state-of-the-art” results are claimed based on a better score on standard datasets.\nIndeed, much recent progress in the field of machine learning as a whole has relied on useful benchmarks on which researchers can compare results. Specifically, improvements on the benchmark metric should reflect improvements towards a useful and nontrivial goal. Evaluation of the metric should be a straightforward and well-defined procedure so that results can be reliably compared. 
For example, the ImageNet Large-Scale Visual Recognition Challenge (Deng et al., 2009; Russakovsky et al., 2015) has a useful goal (classify objects in natural images) and a well-defined evaluation procedure (top-1 and top-5 accuracy of the model’s predictions). Sure enough, the ImageNet\nbenchmark has facilitated the development of dramatically better image classification models which have proven to be extremely impactful across a wide variety of applications.\nUnfortunately, some of the commonly-used benchmark metrics for generative models of natural images do not satisfy the aforementioned properties. For instance, although the IS is demonstrated to correlate well with human perceived image quality (Salimans et al., 2016), Barratt & Sharma (2018) points out several flaws of the IS when used as a single metric for evaluating generative modeling performance, including its sensitivity to pretrained model weights which undermines generalization capability. Seperately, directly optimizing a model to improve the IS can result in extremely unrealistic-looking images (Barratt & Sharma, 2018) despite resulting in a better score. It is also well-known that if a generative model memorizes images from the training set (i.e. producing non-novel images), it will achieve a good IS (Gulrajani et al., 2018). On the other hand, the FID is widely accepted as an improvement over IS due to its better consistency under perturbation (Heusel et al., 2017). However, there is no clear evidence of the FID resolving any of the flaws of the IS. A large-scale empirical study is necessary to provide robust support for understanding quantitatively how flawed the FID is.\nMotivated by these issues, we want to benchmark generative models in the “real world”, i.e. outside of the research community by holding a public machine learning competition. To the extent of our knowledge, no large-scale generative modeling competitions have ever been held, possibly due to the immense difficulty of identifying training sample memorization in a efficient and scalable manner. We designed a more rigorous procedure for evaluating competition submissions, including a memorization-aware variant of FID for autonomously detecting cheating via intentional memorization. We also manually inspected the code for the top 1000 submissions to reveal different forms of intentional or unintentional cheating, to ensure that the winning submissions reflect meaningful improvements, and to confirm efficacy of our proposed metric. We hope that the success of the first-ever generative modeling competition can serve as future reference and stimulate more research in developing better generative modeling benchmarks.\nOur main goal in this paper is to conduct an empirical study on issues of relying on the FID as a benchmark metric to guide the progression of generative modeling. In Section 2, we briefly review the metrics and challenges of evaluating generative models. In Section 3, we explain in detail the competition design choices and propose a novel benchmarking metric, the Memorization-Informed Fréchet Inception Distance (MiFID). We show that MiFID enables fast profiling of participants that intentionally memorize the training dataset. In Section 4, we introduce a dataset released along with this paper that includes over one hundred million generated images and manual labels obtained by painstaking code review. 
In Section 5, we connect phenomena observed in large-scale benchmarking of generative models in the real world back to the research community and point out crucial but neglected flaws in the FID." }, { "heading": "2 BACKGROUND", "text": "In generative modeling, our goal is to produce a model pθ(x) (parameterized by θ) of some true distribution p(x). We are not given direct access to p(x); instead, we are provided only with samples drawn from it, x ∼ p(x). In this paper, we will assume that samples x from p(x) are 64-by-64 pixel natural images, i.e. x ∈ R^{64×64×3}. A common approach is to optimize θ so that pθ(x) assigns high likelihood to samples from p(x). This provides a natural evaluation procedure which measures the likelihood assigned by pθ(x) to samples from p(x) that were held out during the optimization of θ. However, not all models facilitate exact computation of likelihoods. Notably, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) learn an “implicit” model of p(x) from which we can draw samples but which does not provide an exact likelihood (or even an estimate of it) for a given sample. The GAN framework has proven particularly successful at learning models which can generate extremely realistic and high-resolution images, which leads to a natural question: How should we evaluate the quality of a generative model if we can’t compute the likelihood assigned to held-out samples?\nThis question has led to the development of many alternative ways to evaluate generative models (Borji, 2019). A historically popular metric, proposed in (Salimans et al., 2016), is the Inception Score (IS), which computes\nIS(pθ) = E_{x∼pθ(x)}[ D_{KL}( IN(y|x) ‖ IN(y) ) ]\nwhere IN(y|x) is the conditional probability of a class label y assigned to a datapoint x by a pretrained Inception Network (Szegedy et al., 2015). More recently, (Heusel et al., 2017) proposed the Fréchet Inception Distance (FID), which better correlates with perceptual quality. The FID uses the estimated mean and covariance of the Inception Network feature space distribution to calculate the distance between the real and fake distributions up to second order. The FID between the real images r and generated images g is computed as:\nFID(r, g) = ‖µ_r − µ_g‖_2^2 + Tr( Σ_r + Σ_g − 2 (Σ_r Σ_g)^{1/2} )\nwhere µ_r and µ_g are the means of the real and generated images in the latent space, and Σ_r and Σ_g are the covariance matrices of the real and generated feature vectors. A drawback of both the IS and FID is that they assign a very good score to a model which simply memorizes a small and finite sample from p(x) (Gulrajani et al., 2018), an issue we address in Section 3.1." }, { "heading": "3 GENERATIVE MODELING COMPETITION DESIGN", "text": "We designed the first generative model competition where participants were invited to generate realistic dog images given 20,579 images of dogs from ImageNet (Russakovsky et al., 2015). Participants were required to implement their generative model in a constrained computation environment to prevent them from obtaining unfair advantages. The computation environment was designed with:\n• Limited computation resources (9 hours on a NVIDIA P100 GPU for each submission), since generative model performance is known to be highly related to the amount of computational resources used (Brock et al., 2018)\n• Isolated containerization, to avoid continuous training by reloading model checkpoints from previous sessions\n• No access to external resources (i.e.
the internet), to avoid usage of pre-trained models or additional data.\nEach submission is required to provide 10,000 generated images of dimension 64 × 64 × 3 and receives a public score in return. Participants are allowed to submit any number of submissions during the two-month competition. Before the end of the competition, each team is required to choose two submissions, and the final ranking is determined by the better private score (described below) out of the two selected submissions.\nIn the following sections, we discuss how the final decisions were made regarding pre-trained model selection (for FID feature projection) and how we enforced penalties to ensure the fairness of the competition." }, { "heading": "3.1 MEMORIZATION-INFORMED FRÉCHET INCEPTION DISTANCE (MIFID)", "text": "The most crucial part of the competition is the performance evaluation metric used to score the submissions. To assess the quality of generated images, we adopted the Fréchet Inception Distance (Heusel et al., 2017), which is a widely used metric for benchmarking generative tasks. Compared to the Inception Score (Salimans et al., 2016), the FID has the benefits of better robustness against noise and distortion and more efficient computation (Borji, 2019).\nFor a generative modeling competition, a good metric not only needs to reflect the quality of generated samples but must also allow easy identification of cheating with as little manual intervention as possible. Many forms of cheating were prevented by setting up the aforementioned computation environment, but even with these safeguards it would be possible to “game” the FID score. Specifically, we predicted that memorization of training data would be a major issue, since current generative model evaluation metrics such as the IS or FID are prone to rewarding memorized instances with high scores (Gulrajani et al., 2018). This motivated the addition of a “memorization-aware” metric that penalizes models producing images too similar to the training set.\nCombining the memorization-aware and generation-quality components, we introduced the Memorization-Informed Fréchet Inception Distance (MiFID) as the metric used for the competition:\nMiFID(S_g, S_t) = m_τ(S_g, S_t) · FID(S_g, S_t)\nwhere S_g is the generated set, S_t is the original training set, FID is the Fréchet Inception Distance, and m_τ is the memorization penalty, which we discuss in the following section." }, { "heading": "3.1.1 MEMORIZATION PENALTY", "text": "To capture the similarity between two sets of data – in our case, generated images and original training images – we started by measuring similarity between individual images. Cosine similarity, the normalized inner product of two vectors, is a commonly used similarity measure. It is easy to implement with high computational efficiency (with existing optimized BLAS libraries), which is ideal when running a competition with hundreds of submissions each day. The value is also bounded, making it possible to intuitively understand and compare the degree of similarity.\nWe define the memorization distance s of a target projected generated set S_g ⊆ R^d with respect to a reference projected training set S_t ⊆ R^d as 1 subtracted by the mean of the minimum (signed cosine) similarity between the elements of S_g and S_t. Intuitively, a lower memorization distance is associated with more severe training sample memorization. Note that the distance is asymmetric, i.e.
s(S_g, S_t) ≠ s(S_t, S_g), but this is irrelevant for our use-case.\ns(S_g, S_t) := 1 − (1/|S_g|) ∑_{x_g ∈ S_g} min_{x_t ∈ S_t} |⟨x_g, x_t⟩| / (|x_g| · |x_t|)\nWe hypothesize that cheating submissions with intentional memorization would generate images with a significantly lower memorization distance. To leverage this idea, only submissions with a distance lower than a specific threshold τ are penalized. Thus, the memorization penalty m_τ is defined as\nm_τ(S_g, S_t) = 1 / (s(S_g, S_t) + ε), if s(S_g, S_t) < τ; and 1, otherwise,\nwhere ε ≪ 1 is a small constant. More memorization (falling below the predefined threshold τ) will result in a larger penalty. Dealing with false positives and negatives under this penalty scheme is further discussed in Section 3.2." }, { "heading": "3.1.2 PREVENTING OVERFITTING", "text": "In order to prevent participants of the competition from overfitting to the public leaderboard, we used different data for calculating the public and private scores, and we generalized the FID to use any visually-relevant latent space for feature projection. Specifically, we selected different pre-trained ImageNet classification models for the public and private score calculations. For the same score, the same pre-trained model is used both for the feature projection in the memorization penalty and for the standard FID calculation. Inception V3 was used for the public score, following past literature, while the private score used NASNet (Zoph et al., 2018). We will discuss how NASNet was selected in Section 3.2.1." }, { "heading": "3.2 DETERMINING FINAL RANKS", "text": "After the competition was closed to submissions, there was a two-week window to re-process all the submissions and remove the ones violating the competition rules (e.g. by intentionally memorizing the training set) before the final private leaderboard was announced. The memorization penalty term in MiFID was efficiently configured for re-running with a change of the parameter τ, allowing the results to be finalized within a short time frame." }, { "heading": "3.2.1 SELECTING PRE-TRAINED MODEL FOR THE PRIVATE SCORE", "text": "As it is commonly assumed that FID is generally invariant to the projection space, the pre-trained model for the private score was selected to best combat cheating via training set memorization. The goal is to separate cheating and non-cheating submissions as cleanly as possible. We calculate the memorization distance for a subset of submissions projected with the chosen pre-trained model and coarsely label whether each submission intentionally memorized training samples. Coarse labeling of submissions was achieved by exploiting competition-related clues to obtain noisy labels.\nThere exists a threshold τ∗ that best separates memorized versus non-memorized submissions via the memorization distance (see Figure 1). Here we define the memorization margin d of a pre-trained model M as\nd(M) = min_τ ∑_{∀ S_g} (s(S_g, S_t) − τ)^2\nThe pre-trained model with the largest memorization margin was then selected for the calculation of the private score, in this case NASNet (Zoph et al., 2018), together with the corresponding optimal memorization penalty m_τ where τ = τ∗." }, { "heading": "3.2.2 HANDLING FALSE PENALIZATION", "text": "While MiFID was designed to handle penalization automatically, in practice we observed minor mixing of cheating and non-cheating submissions between the well-separated peaks (Figure 1). While it is well accepted that no model can be perfect, it was necessary to ensure that the competition was fair. Therefore, different strategies were adopted to resolve false positives and negatives.
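To make the scoring pipeline above concrete, the following is a minimal NumPy/SciPy sketch that computes FID from projected feature statistics (Section 2), the memorization distance s and thresholded penalty m_τ (Section 3.1.1), and their product MiFID (Section 3.1). It assumes real and generated images have already been projected to feature matrices by a pre-trained network; the feature dimension and the values of tau and eps are illustrative placeholders rather than the competition's settings.

```python
# Minimal sketch of FID, memorization distance s, penalty m_tau, and MiFID as defined above.
# Inputs are pre-extracted feature matrices (rows = images, columns = latent features).
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_r, feat_g):
    mu_r, mu_g = feat_r.mean(0), feat_g.mean(0)
    sigma_r = np.cov(feat_r, rowvar=False)
    sigma_g = np.cov(feat_g, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g).real  # drop tiny imaginary parts from numerical error
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2.0 * covmean))

def memorization_distance(feat_g, feat_t):
    # s(S_g, S_t): 1 minus the mean, over generated samples, of the minimum |cosine similarity|
    # to the training set, following the formula in Section 3.1.1.
    g = feat_g / np.linalg.norm(feat_g, axis=1, keepdims=True)
    t = feat_t / np.linalg.norm(feat_t, axis=1, keepdims=True)
    cos = np.abs(g @ t.T)  # |<x_g, x_t>| / (|x_g| * |x_t|)
    return float(1.0 - cos.min(axis=1).mean())

def mifid(feat_g, feat_t, tau=0.1, eps=1e-6):
    s = memorization_distance(feat_g, feat_t)
    m = 1.0 / (s + eps) if s < tau else 1.0  # penalize only below the threshold tau
    return m * fid(feat_t, feat_g)

# Toy usage with random stand-in "features".
rng = np.random.default_rng(0)
feat_train = rng.normal(size=(200, 16))
feat_gen = rng.normal(size=(150, 16))
print(fid(feat_train, feat_gen), memorization_distance(feat_gen, feat_train), mifid(feat_gen, feat_train))
```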
For legitimate submissions that are falsely penalized (false positives), participants are allowed to actively submit rebuttals for the result. For cheating submissions that are dissimilar enough to the training set to dodge penalization (false negatives), the code was manually reviewed to determine if intentional memorization was present. This manual reviewing process of code submissions was labor intensive, as it required expert knowledge of generative modeling. The goal was to review enough submissions such that the top 100 teams on the leaderboard would be free of cheaters, since we reward the top 100 ranked teams. Thanks to our design of MiFID, it is possible to set the penalty threshold τ such that we were comfortable that most users ranked lower than 100 on the leaderboard who cheated with memorization were penalized by MiFID. This configuration of MiFID significantly reduced the time needed to finish the review, approximately by 5x. The results of the manual review is presented in Section 4.2." }, { "heading": "4 RESULTS AND DATA RELEASE", "text": "A total of 924 teams joined the competition, producing over 11,192 submissions. Visual samples from submitted images are shown in the appendix." }, { "heading": "4.1 DATA RELEASE", "text": "The complete dataset will be released with the publication of this paper to facilitate future work on benchmarking generative modeling. It includes:\n• A total of 11,192 submissions, each containing 10,000 generated dog images with dimension 64× 64× 3.\n• Manual labels for the top 1000 ranked submissions of whether the code is a legitimate generative method and the type of illegitimacy involved if it is not. This was extremely labor-intensive to obtain.\n• Crowd-labeled image quality: 50,000 human labeled quality and diversity of images generated from the top 100 teams (non-memorized submissions).\nWe will also release the code to reproduce results in the paper." }, { "heading": "4.2 MEMORIZATION METHODS SUMMARY", "text": "The 1000 top submissions are manually labeled as to whether or not (and how) they cheated. As we previously predicted, the most pronounced way of cheating was training sample memorization. We observed different levels of sophistication in these methods - from very naive (submitting the training images) to highly complex (designing a GAN to memorize). The labeling results are summarized in Table 1." }, { "heading": "4.3 COMPETITION RESULTS SUMMARY", "text": "In Figure 2 (left), we observe that non-generative methods score extremely good (low) FID scores on both the public and private leaderboard. Specifically, memorization GAN achieved top tier performance and it was a highly-debated topic for a long time whether it should be allowed in the competition. Ultimately, memorization GAN was banned, but it serves as a good reminder that generative-looking models may not actually be generative. In Figure 2 (right), we observe that the range of memorization calculated by NASNet (private) spans twice as wide as Inception (public), allowing easier profiling of cheating submissions by memorization penalty. It reflects the effectiveness of our strategy selecting the model for calculating private score.\nParticipants generally started with basic generative models such as DCGAN (Radford et al., 2015) and moved to more complex ones as they grow familiar with the framework. Most notably BigGAN (Brock et al., 2018), SAGAN (Zhang et al., 2018) and StyleGAN (Karras et al., 2019a) achieved the most success. 
Interestingly, one submission using DCGAN (Radford et al., 2015) with spectral normalization (Miyato et al., 2018) made it into the top 10 of the private leaderboard, suggesting that different variations of GANs with proper tuning might all be able to achieve good FID scores (Lucic et al., 2017)." }, { "heading": "5 INSIGHTS", "text": "" }, { "heading": "5.1 UNINTENTIONAL CHEATING: MODELS WITH BETTER FID MEMORIZE MORE", "text": "In our observation, almost all cheating submissions attempted to cheat by memorizing the training set. This is likely because it is well-known that memorization achieves a good FID score. The research community has long been aware that memorization can be an issue for the FID metric. However, there have been no formal studies on how memorization affects IS or FID scores. This can pose a serious problem when researchers continue to claim state-of-the-art results based on improvements to the FID score if there is not a systematic way to measure and address training set memorization. Given the disturbing findings from our study, we caution against ignoring memorization in research benchmark metrics, especially unintentional memorization of training data.\nIn Figure 3 (right) we plot the relationship between FID and memorization distance for all 500 non-cheating models in the public and private leaderboards, respectively. Note that these models are non-cheating, and most of them are popular variants of state-of-the-art generative models such as DCGAN and SAGAN recently published in top machine learning conferences. Disturbingly, the Pearson correlation between FID and memorization distance is above 0.95 for both leaderboards. High correlation does not imply that memorization alone enables good model performance as evaluated by FID, but it is reasonable to suspect that generating images close to the training set can result in a good (low) FID score.\nIt is important for us to take memorization more seriously, given how easy it is for memorization to occur unintentionally. The research community needs to better study and understand the limitations of current generative model benchmark metrics. When proposing new generative techniques, it is crucial to adopt rigorous inspections of model quality, especially regarding training sample memorization. Existing methods such as visualizing pairs of generated images and their nearest neighbors in the training dataset should be mandatory in benchmarks. Furthermore, other analyses, such as the correlation between FID and memorization distance (Figure 3) across different model parameters, can also be helpful to include in publications." }, { "heading": "5.2 DEBUNKING FID: CHOICE OF LATENT SPACE FOR FEATURE PROJECTION IS NON-TRIVIAL", "text": "In the original paper where FID is proposed (Heusel et al., 2017), features from the coding layer of an Inception model are used as the projected latent space to obtain “vision-relevant” features. It is generally assumed that the Fréchet Distance is invariant to the chosen latent space for projection as long as the space is “information-rich”, which is why the arbitrary choice of the Inception model has been widely accepted.
Interestingly, there has not been much study of whether this assumption actually holds, even though a relatively large number of new generative model architectures are being proposed (many of which rely heavily on FID for performance benchmarking).\nIn our competition, we used different models for the public and private leaderboards in an attempt to avoid models which “overfit” to some particular feature space.\nIn Figure 3 (left), we examine the relationship between the Fréchet Distances calculated with two different pre-trained image models that achieved close to state-of-the-art performance on ImageNet classification (specifically, Inception (Szegedy et al., 2016) and NASNet (Zoph & Le, 2016)). At first glance, a Spearman correlation of 0.93 seems to support the assumption of FID being invariant to the projection space. However, on closer inspection we noticed that the mean absolute rank difference between the public and private leaderboards is 124.6 across all 1675 effective submissions. If we remove the rank consistency contributed by intentional memorization, by considering only the top 500 labeled, non-memorized submissions, the mean absolute rank difference is still as large as 94.7 (18.9%). To put this into perspective, only the top 5 places receive monetary awards, and there is only 1 common member between the top 5 as evaluated by FID projected with the two models.\nIt’s common to see publications claiming state-of-the-art performance with less than a 5% improvement over prior methods. As summarized in the Introduction section of this paper, generative model evaluation, compared to other well-studied tasks such as classification, is extremely difficult. Observing that model performance measured by FID fluctuates with such great amplitude relative to the improvements of many newly proposed generation techniques, we suggest taking progress on the FID metric with a grain of salt." }, { "heading": "6 CONCLUSIONS", "text": "We summarized our design of the first-ever generative modeling competition and shared insights obtained regarding FID as a generative modeling benchmark metric. By running a public generative modeling competition, we observed how participants attempted to game the FID, specifically with memorization, when incentivized with monetary awards. Our proposed Memorization-Informed Fréchet Inception Distance (MiFID) effectively punished models that intentionally memorize the training set, a behavior which current popular generative modeling metrics do not take into consideration.\nWe shared two main insights from analyzing the 11,000+ submissions. First, unintentional training sample memorization is a serious and possibly widespread issue. Careful inspection of the models and analysis of memorization should be mandatory when proposing new generative model techniques. Second, contrary to popular belief, the choice of the pre-trained model latent space when calculating FID is non-trivial. For the top 500 labeled, non-memorized submissions, the mean absolute rank difference percentage between our two models is 18.9%, suggesting that FID is too unstable to serve as the benchmark metric for new studies claiming minor improvements over past methods." }, { "heading": "A APPENDIX", "text": "" } ]
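The two stability checks used in Sections 5.1 and 5.2 are straightforward to reproduce once per-submission scores are available. Below is a small sketch, assuming arrays of FID values computed in two latent spaces and an array of memorization distances; the random inputs are placeholders for real leaderboard data.

```python
# Diagnostics from Sections 5.1 and 5.2: Pearson(FID, memorization distance), plus Spearman
# correlation and mean absolute rank difference between FID in two latent spaces
# (e.g. public/Inception vs. private/NASNet).
import numpy as np
from scipy.stats import pearsonr, spearmanr, rankdata

rng = np.random.default_rng(0)
fid_public = rng.uniform(20, 150, size=500)
fid_private = fid_public + rng.normal(0, 15, size=500)  # stand-in for a second latent space
mem_distance = rng.uniform(0.1, 0.3, size=500)

# Section 5.1: how strongly does a better FID co-occur with a lower memorization distance?
print("Pearson(FID, memorization distance):", pearsonr(fid_public, mem_distance)[0])

# Section 5.2: agreement between the two leaderboards induced by the two latent spaces.
ranks_public, ranks_private = rankdata(fid_public), rankdata(fid_private)
print("Spearman rank correlation:", spearmanr(fid_public, fid_private)[0])
print("Mean absolute rank difference:", np.abs(ranks_public - ranks_private).mean())
print("As % of submissions:", 100 * np.abs(ranks_public - ranks_private).mean() / len(fid_public))
```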
2020
null
SP:9ce7a60c5f2e40f7d59e98c90171a7b49621c67c
[ "Observing that existing ER-based sampling methods may introduce bias or redundancy into the sampled transitions, the paper proposes a new sampling method in the ER learning setting. The idea is to take the context into consideration, i.e., many visited transitions rather than a single one, based on which one can measure the relative importance of each transition. Specifically, the weights of transitions are also learned, through a REINFORCE agent, and hence the sampling distribution is learned to directly improve sample efficiency. " ]
Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL). To utilize the experience replay efficiently, the existing sampling methods allow selecting out more meaningful experiences by imposing priorities on them based on certain metrics (e.g. TD-error). However, they may result in sampling highly biased, redundant transitions since they compute the sampling rate for each transition independently, without consideration of its importance in relation to other transitions. In this paper, we aim to address the issue by proposing a new learning-based sampling method that can compute the relative importance of transition. To this end, we design a novel permutation-equivariant neural architecture that takes contexts from not only features of each transition (local) but also those of others (global) as inputs. We validate our framework, which we refer to as Neural Experience Replay Sampler (NERS)1, on multiple benchmark tasks for both continuous and discrete control tasks and show that it can significantly improve the performance of various off-policy RL methods. Further analysis confirms that the improvements of the sample efficiency indeed are due to sampling diverse and meaningful transitions by NERS that considers both local and global contexts.
[ { "affiliations": [], "name": "REPLAY BUFFERS" }, { "affiliations": [], "name": "Youngmin Oh" }, { "affiliations": [], "name": "Kimin Lee" }, { "affiliations": [], "name": "Jinwoo Shin" }, { "affiliations": [], "name": "Eunho Yang" }, { "affiliations": [], "name": "Sung Ju Hwang" } ]
[ { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Marc Brittain", "Josh Bertram", "Xuxi Yang", "Peng Wei" ], "title": "Prioritized sequence experience replay", "venue": "arXiv preprint arXiv:1905.12726,", "year": 2019 }, { "authors": [ "William Fedus", "Prajit Ramachandran", "Rishabh Agarwal", "Yoshua Bengio", "Hugo Larochelle", "Mark Rowland", "Will Dabney" ], "title": "Revisiting fundamentals of experience replay", "venue": "arXiv preprint arXiv:2007.06700,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Yuenan Hou", "Lifeng Liu", "Qing Wei", "Xudong Xu", "Chunlin Chen" ], "title": "A novel ddpg method with prioritized experience replay", "venue": "In SMC,", "year": 2017 }, { "authors": [ "David Isele", "Akansel Cosgun" ], "title": "Selective experience replay for lifelong learning", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Su Young Lee", "Choi Sungik", "Sae-Young Chung" ], "title": "Sample-efficient deep reinforcement learning via episodic backward update", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Guido Novati", "Petros Koumoutsakos" ], "title": "Remember and forget for experience replay", "venue": "In SMC,", "year": 2019 }, { "authors": [ "Yangchen Pan", "Hengshuai Yao", "Amir-Massoud Farahmand", "Martha White" ], "title": "Hill climbing on value estimates for search-control in dyna", "venue": "In Proceedings of the 
28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yangchen Pan", "Jincheng Mei", "Amir-massoud Farahmand" ], "title": "Frequency-based search-control in dyna", "venue": "arXiv preprint arXiv:2002.05822,", "year": 2020 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Richard S Sutton" ], "title": "Integrated modeling and control based on reinforcement learning and dynamic programming", "venue": "In Advances in neural information processing systems,", "year": 1991 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In IROS,", "year": 2012 }, { "authors": [ "Hado P van Hasselt", "Matteo Hessel", "John Aslanides" ], "title": "When to use parametric models in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Che Wang", "Keith Ross" ], "title": "Boosting soft actor-critic: Emphasizing recent experience without forgetting the past", "venue": "arXiv preprint arXiv:1906.04009,", "year": 2019 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "arXiv preprint arXiv:1511.06581,", "year": 2015 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Daochen Zha", "Kwei-Herng Lai", "Kaixiong Zhou", "Xia Hu" ], "title": "Experience replay optimization", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "Shangtong Zhang", "Richard S. Sutton" ], "title": "A deeper look at experience replay", "venue": "In ICMR,", "year": 2015 }, { "authors": [ "Fujimoto" ], "title": "2018) and Soft actor critic (SAC) Haarnoja et al. (2018a;b) in openAI", "venue": "DDPG", "year": 2018 }, { "authors": [ "van Hasselt" ], "title": "After flattening and reducing the output of the CNN-layers by FC-layers", "venue": null, "year": 2019 }, { "authors": [ "in van Hasselt" ], "title": "2019) although there is room for better performance if more learning", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Experience replay (Mnih et al., 2015), which is a memory that stores the past experiences to reuse them, has become a popular mechanism for reinforcement learning (RL), since it stabilizes training and improves the sample efficiency. The success of various off-policy RL algorithms largely attributes to the use of experience replay (Fujimoto et al., 2018; Haarnoja et al., 2018a;b; Lillicrap et al., 2016; Mnih et al., 2015). However, most off-policy RL algorithms usually adopt a unique random sampling (Fujimoto et al., 2018; Haarnoja et al., 2018a; Mnih et al., 2015), which treats all past experiences equally, so it is questionable whether this simple strategy would always sample the most effective experiences for the agents to learn.\nSeveral sampling policies have been proposed to address this issue. One of the popular directions is to develop rule-based methods, which prioritize the experiences with pre-defined metrics (Isele & Cosgun, 2018; Jaderberg et al., 2016; Novati & Koumoutsakos, 2019; Schaul et al., 2016). Notably, since TD-error based sampling has improved the performance of various off-policy RL algorithms (Hessel et al., 2018; Schaul et al., 2016) by prioritizing more meaningful samples, i.e., high TD-error, it is one of the most frequently used rule-based methods. Here, TD-error measures how unexpected the returns are from the current value estimates (Schaul et al., 2016).\nHowever, such rule-based sampling strategies can lead to sampling highly biased experiences. For instance, Figure 1 shows randomly selected 10 transitions among 64 transitions sampled using certain\n1Code is available at https://github.com/youngmin0oh/NERS\nmetrics/rules under a policy-based learning, soft actor critic (SAC) (Haarnoja et al., 2018a), on Pendulum-v0 after 30,000 timesteps, which goal is to balance the pendulum to make it stay in the upright position. We observe that sampling by the TD-error alone mostly selects initial transitions (see Figure 1(a)), where the rods are in the downward position, since it is difficult to estimate Q-value on them. Conversely, the sampled transitions by Q-value describe rods in the upright position (see Figure 1(b)), which will provide high returns to agents. Both can largely contribute to the update of the actor and critic since the advantage term and mean-square of TD-errors are large. Yet, due to the bias, the agent trained in such a manner will mostly learn what to do in a specific state, but will not learn about others that should be experienced for proper learning of the agent. Therefore, such biased (and redundant) transitions may not lead to increased sample efficiency, even though each sampled transition may be individually meaningful.\nOn the other hand, focusing only on the diversity of samples also has an issue. For instance, sampling uniformly at random is able to select out diverse transitions including intermediate states such as those in the red boxes of Figure 1(c), where the rods are in the horizontal positions which are necessary for training the agents as they provide the trajectory between the two types of states. However, the transitions are occasionally irrelevant for training both the policy and the Q networks. Indeed, states in the red boxes of Figure 1(c) possess both lowQ-values and TD-errors. Their low TD-errors suggest that they are not meaningful for the update of Q networks. 
Similarly, low Q-values cannot be used to train the policy what good actions are.\nMotivated by the aforementioned observations, we aim to develop a method to sample both diverse and meaningful transitions. To cache both of them, it is crucial to measure the relative importance among sampled transitions since the diversity should be considered in them, not all in the buffer. To this end, we propose a novel neural sampling policy, which we refer to Neural Experience Replay Sampler (NERS). Our method learns to measure the relative importance among sampled transitions by extracting local and global contexts from each of them and all sampled ones, respectively. In particular, NERS is designed to take a set of each experience’s features as input and compute its outputs in an equivariant manner with respect to the permutation of the set. Here, we consider various features of transition such as TD-error, Q-value and the raw transition, e.g., expecting to sample intermediate transitions as those in blue boxes of Figure 1(c)) efficiently.\nTo verify the effectiveness of NERS, we validate the experience replay with various off-policy RL algorithms such as soft actor-critic (SAC) (Haarnoja et al., 2018a) and twin delayed deep deterministic (TD3) (Fujimoto et al., 2018) for continuous control tasks (Brockman et al., 2016; Todorov et al., 2012), and Rainbow (Hessel et al., 2018) for discontinuous control tasks (Bellemare et al., 2013). Our experimental results show that NERS consistently (and often significantly for complex tasks having high-dimensional state and action spaces) outperforms both the existing the rule-based (Schaul et al., 2016) and learning-based (Zha et al., 2019) sampling methods for experience replay.\nIn summary, our contribution is threefold:\n• To the best of our knowledge, we first investigate the relative importance of sampled transitions for the efficient design of experience replays.\n• For the purpose, we design a novel permutation-equivariant neural sampling architecture that utilizes contexts from the individual (local) and the collective (global) transitions with various features to sample not only meaningful but also diverse experiences.\n• We validate the effectiveness of our neural experience replay on diverse continuous and discrete control tasks with various off-policy RL algorithms, on which it consistently outperforms both existing rule-based and learning-based sampling methods." }, { "heading": "2 NEURAL EXPERIENCE REPLAY SAMPLER", "text": "We consider a standard reinforcement learning (RL) framework, where an agent interacts with an environment over discrete timesteps. Formally, at each timestep t, the agent receives a state st from the environment and selects an action at based on its policy π. Then, the environment returns a reward rt, and the agent transitions to the next state st+1. The goal of the agent is to learn the policy π that maximizes the return Rt = ∑∞ k=0 γ\nkrt+k, which is the discounted cumulative reward from the timestep t with a discount factor γ ∈ [0, 1), at each state st. Throughout this section, we focus on off-policy actor-critic RL algorithms with a buffer B, which consist of the policy πψ(a|s) (i.e., actor) and Q-function Qθ(s, a) (i.e., critic) with parameters ψ and θ, respectively." }, { "heading": "2.1 OVERVIEW OF NERS", "text": "We propose a novel neural sampling policy f with parameter φ, called Neural Experience Replay Sampler (NERS). 
It is trained for learning to select important transitions from the experience replay buffer for maximizing the actual cumulative rewards. Specifically, at each timestep, NERS receives a set of off-policy transitions’ features, which are proportionally sampled in the buffer B based on priorities evaluated in previous timesteps. Then it outputs a set of new scores from the set, in order for the priorities to be updated. Further, both the sampled transitions and scores are used to optimize the off-policy policy πψ(a|s) and action-value function Qθ(s, a). Note that the output of NERS should be equivariant of the permutation of the set, so we design its neural architecture to satisfy the property. Next, we define the reward rre as the actual performance gain, which is defined as the difference of the expectation of the sum of rewards between the current and previous evaluation policies, respectively. Figure 2 shows an overview of the proposed framework, which learns to sample from the experience replay. In the following section, we describe our method of learning the sampling policy for experience replay and the proposed network architecture in detail." }, { "heading": "2.2 DETAILED COMPONENTS OF NERS", "text": "Input observations. Throughout this paper, we denote the set {1, · · · , n} by [n] for positive integer n. Without loss of generality, suppose that the replay buffer B stores the following information as its i-th transition Bi = ( sκ(i), aκ(i), rκ(i), sκ(i)+1 ) where κ (i) is a function from the index of B to a\ncorresponding timestep. We use a set of priorities PB = { σ1, · · · , σ|B| } that is updated whenever sampling transitions for training the actor and critic. One can sample an index set I in [|B|] with the probability pi of i-th transition as follows:\npi = σαi∑\nk∈[|B|] σ α k\n, (1)\nAlgorithm 1 Training NERS: batch size m and sample size n Initialize NERS parameters φ, a replay buffer B ← ∅, priority set PB ← ∅, and index set I ← ∅ for each timestep t do\nChoose at from the actor and collect a sample (st, at, rt, st+1) from the environment Update replay buffer B ← B ∪ {(st, at, rt, st+1)} and priority set PB ← PB ∪ {1.0} for each gradient step do\nSample an index I by the given set PB and Eq. (1) with |I| = m Calculate a score set {σk}k∈I and weights {wi}i∈I by Eq. (4) and Eq. (5), respectively Train the actor and critic using batch {Bi}i∈I ⊂ B and corresponding weights {wi}i∈I Collect I ← I ⋃ I and update PB (I) by the score set {σk}k∈I\nend for for the end of an episode do\nChoose a subset Itrain from I uniformly at random such that |Itrain| = n Calculate rre as in Eq. (6) Update sampling policy φ using the gradient (7) with respect to Itrain Empty I, i.e., I ← ∅\nend for end for\nwith a hyper-parameter α > 0. Then, we define the following sequence of features for {Bi}i∈I : D (B, I) = { sκ(i), aκ(i), rκ(i), sκ(i)+1, κ(i), δκ(i), rκ(i) + γmax a Qθ̂ ( sκ(i) + a )} i∈I , (2)\nwhere γ is a discount factor, θ̂ is the target network parameter, and δκ(i) is the TD-error defined as follows:\nδκ(i) = rκ(i) + γmax a\nQθ̂ ( sκ(i)+1, a ) −Qθ ( sκ(i), aκ(i) ) .\nThe TD-error indicates how ‘surprising’ or ‘unexpected’ the transition is (Schaul et al., 2016). Note that the input D (B, I) contains various features including both exact values (i.e., states, actions, rewards, next states, and timesteps) and predicted values in the long-term perspective (i.e., TD-errors and Q-values). We abbreviate the notation D (B, I) = D (I) for simplicity. 
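To make the priority-proportional sampling of Eq. (1) and the feature construction of Eq. (2) concrete, the following is a minimal NumPy sketch. It assumes a discrete-action Q-network that returns a vector of action values, and the buffer layout, helper names, and the default value of alpha are illustrative choices for this sketch rather than the released implementation.

```python
import numpy as np

def sample_indices(priorities, batch_size, alpha=0.6, rng=np.random):
    # Eq. (1): p_i = sigma_i^alpha / sum_k sigma_k^alpha
    scaled = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = scaled / scaled.sum()
    # Sample buffer indices proportionally to the priorities (with replacement, for simplicity).
    idx = rng.choice(len(priorities), size=batch_size, p=probs)
    return idx, probs[idx]   # the returned probabilities can feed the weights of Eq. (5)

def build_features(buffer, indices, q_net, q_target, gamma=0.99):
    # Eq. (2): each sampled transition is summarized by its raw content
    # (s, a, r, s'), its timestep, its TD-error, and its target value.
    feats = []
    for i in indices:
        s, a, r, s_next, t = buffer[i]
        target = r + gamma * q_target(s_next).max()   # r + gamma * max_a Q_target(s', a)
        td_error = target - q_net(s)[a]               # delta for this transition
        feats.append((s, a, r, s_next, t, td_error, target))
    return feats
```

In the full method the priorities are produced by the score network described below, and a sum-tree is used so that drawing a batch scales logarithmically rather than linearly in the buffer size.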
Utilizing various information is crucial in selecting diverse and important transitions (see Section 3).\nArchitecture and action spaces. Now we explain the neural network structure of NERS f . Basically, f takes D (I) as an input and generate their scores, where these values are used to sample transitions proportionally. Specifically, f consists of fl, fg, and fs called learnable local, global and score networks with output dimensions dl, dg, and 1. The local network is used to capture attributes in each transition by fl (D (I)) = { fl,1 (D (I)) , · · · fl,|I| (D (I)) } ∈ R|I|×dl , where fl,k (D (I)) ∈ Rdl (k ∈ [|I|]). The global network is used to aggregate collective information of transitions by taking fg avg (D (I)) = ∑ fg(D(I)) |I| ∈ R\n1×dg , where fg (D (I)) ∈ R|I|×dg . Then by concatenating them, one can make an input for the score network fs as follows:\nDcat(I) := { fl,1 (D (I))⊕ fgavg (D (I)) , · · · , fl,|I| (D (I))⊕ fgavg (D (I)) } ∈ R|I|×(dl+dg),\n(3) where ⊕ denotes concatenation. Finally, the score network generates a score set:\nfs (D cat(I)) = {σi}i∈I ∈ R |I|. (4)\nOne can easily observe that fs is permutation-equivariant with respect to input D (I). The set {σi}i∈I is used to update the priorities set P for transitions corresponding to I by Eq. (1) and to compute importance-sampling weights for updating the critic, compensating the bias of probabilities (Schaul et al., 2016)):\nwi =\n( 1\n|B|p(i)\n)β , (5)\nwhere β > 0 is a hyper-parameter. Then the agent and critic receive training batch D (I) and corresponding weights {wi}i∈I for training, i.e., the learning rate for training sample Bi is set to be proportional to wi. Due to this structure satisfying the permutation-equivariant property, one\ncan evaluate the relative importance of each transition by observing not only itself but also other transitions.\nReward function and optimizing sampling policy. We update NERS at each evaluation step. To optimize our sampling policy, we define the replay reward rre of the current evaluation as follows: for policies π and π′ used in the current and previous evaluations as in (Zha et al., 2019),\nrre := Eπ ∑ t∈{timesteps in an episode} rt − Eπ′ ∑ t∈{timesteps in an episode} rt . (6) The replay reward is interpreted as measuring how much actions of the sampling policy help the learning of the agent for each episode. Notice that rre only observes the difference of the mean of cumulative rewards between the current and previous evaluation policies since NERS needs to choose transitions without knowing which samples will be added and how well agents will be trained in the future. To maximize the sample efficiency for learning the agent’s policy, we propose to train the sampling policy to selects past transitions in order to maximize rre. To train NERS, one can choose Itrain that is a subset of a index set I for totally sampled transitions in the current episode. Then we use the following formula by REINFORCE (Williams, 1992):\n∇φEItrain [rre] = EItrain\n[ rre\n∑ i∈Itrain ∇φ log pi (D (Itrain))\n] , (7)\nwhere pi is defined in Eq. (1). The detailed description is provided in Algorithm 1.\nWhile ERO Zha et al. (2019) uses a similar replay-reward (Eq. 6), there are a number of fundamental differences between it and our method. First of all, ERO does not consider the relative importance between the transitions as NERS does, but rather learns an individual sampling rate for each transition. 
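The local, global, and score networks of Eq. (3) and Eq. (4) can be sketched as follows. This is a PyTorch-style toy version: the layer sizes, the ReLU/Softplus choices, and the exact pooling are assumptions made for the example, not the released architecture.

```python
import torch
import torch.nn as nn

class NERSScorer(nn.Module):
    """Toy f = (f_l, f_g, f_s): per-transition local features, mean-pooled
    global context, and one positive score per sampled transition."""
    def __init__(self, feat_dim, d_local=64, d_global=64):
        super().__init__()
        self.f_local = nn.Sequential(nn.Linear(feat_dim, d_local), nn.ReLU())
        self.f_global = nn.Sequential(nn.Linear(feat_dim, d_global), nn.ReLU())
        self.f_score = nn.Sequential(
            nn.Linear(d_local + d_global, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus())          # keep priorities positive (a design assumption)

    def forward(self, D):                             # D: (n, feat_dim) batch of sampled transitions
        local = self.f_local(D)                       # (n, d_local) per-transition context
        glob = self.f_global(D).mean(dim=0)           # (d_global,) shared global context
        glob = glob.expand(D.shape[0], -1)            # broadcast to every transition
        cat = torch.cat([local, glob], dim=1)         # Eq. (3): local concatenated with global
        return self.f_score(cat).squeeze(-1)          # Eq. (4): permutation-equivariant scores
```

In practice the returned scores both refresh the priorities of the sampled transitions and, through Eq. (1), supply the log-probabilities that enter the REINFORCE update of Eq. (7); it is this batch-level scoring that ERO lacks.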
Moreover, they consider only three types of features, namely TD-error, reward, and the timestep, while NERS considers a larger set of features by considering more informative features that are not used by ERO, such as raw features, Q-values, and actions. However, the most important difference between the two is that ERO performs two-stage sampling, where they first sample with the individually learned Bernoulli sampling probability for each transition, and further perform random sampling from the subset of sampled transitions. However, with such a strategy, the first-stage sampling is highly inefficient even with moderate size experience replays, since it should compute the sampling rate for each individual instance. Accordingly, its time complexity of the first-stage sampling depends finally on the capacity of the buffer B, i.e., O (|B|). On the contrary, NERS uses a sum-tree structure as in (Schaul et al., 2016) to sample transitions with priorities, so that its time complexity for sampling depends highly on O (log |B|). Secondly, since the number of experiences selected from the first stage sampling is large, it may have little or no effect, making it to behave similarly to random sampling. Moreover, ERO updates its network with the replay reward and experiences that are not sampled from two-stage samplings but sampled by the uniform sampling at random (see Algorithm 2 in Zha et al. (2019)). In other words, samples that are never selected affect the training of ERO, while NERS updates its network solely based on the transitions that are actually selected by itself." }, { "heading": "3 EXPERIMENTS", "text": "In this section, we conduct experiments to answer the following questions:\n• Can the proposed sampling method improve the performances of various off-policy RL algorithms for both continuous and discrete control tasks?\n• Is it really effective to sample diverse and meaningful samples by considering the relative importance with various contexts?" }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "Environments. In this section, we measure the performances of off-policy RL algorithms optimized with various sampling methods on the following standard continuous control environments\nwith simulated robots (e.g., Ant-v3, Walker2D-v3, and Hopper-v3) from the MuJoCo physics engine (Todorov et al., 2012) and classical and Box2D continuous control tasks (i.e., Pendulum∗2, LunarLanderContinuous-v2, and BipedalWalker-v3) from OpenAI Gym (Brockman et al., 2016). We also consider a subset of the Atari games (Bellemare et al., 2013) to validate the effect of our experience sampler on the discrete control tasks (see Table 2). The detailed description for environments is explained in supplementary material.\nOff-policy RL algorithms. We apply our sampling policy to state-of-the-art off-policy RL algorithms, such as Twin delayed deep deterministic (TD3) (Fujimoto et al., 2018), and soft actor-critic (SAC) (Haarnoja et al., 2018a), for continuous control tasks. For discrete control tasks, instead of the canonical Rainbow (Hessel et al., 2018), we use a data-efficient variant of it as introduced in (van Hasselt et al., 2019). Notice that Rainbow already adopts PER. To compare sampling methods, we replaced it by NERS, RANDOM, and ERO in Rainbow, respectively. Due to space limitation, we provide more experimental details in the supplementary material.\nBaselines. We compare our neural experience replay sampler (NERS) with the following baselines:\n• RANDOM: Sampling transitions uniformly at random. 
• PER (Prioritized Experience Replay): Rule-based sampling of the transitions with high\ntemporal difference errors (TD-errors) (Schaul et al., 2016) • ERO (Experience Replay Optimization): Learning-based sampling method (Zha et al.,\n2019), which computes the sampling score for each transition independently, using TD-error, timestep, and reward as features." }, { "heading": "3.2 COMPARATIVE EVALUATION", "text": "Figure 3 shows learning curves of each off-policy RL algorithm during training on classical and Box2D continuous control tasks, respectively. Furthermore, Table 1 and Table 2 show the mean of cumulative rewards on MuJoCo and Atari environments after 500,000 and 100,000 training steps,\n2Pendulum∗: We slightly modify the original Pendulum that openAI Gym supports to distinguishing performances of sampling methods more clearly by making rewards sparser. Its detailed description is given in the supplementary material.\nrespectively, over five runs with random seeds, respectively.3 We observe that NERS consistently outperforms baseline sampling methods in all tested cases. In particular, It significantly improves the performance of all off-policy RL algorithms on various tasks, which come with high-dimensional state and action spaces. These results imply that sampling good off-policy data is crucial in improving the performance of off-policy RL algorithms. Furthermore, they demonstrate the effectiveness of our method for both continuous and discrete control tasks, as it obtains significant performance gains on both types of tasks. On the other hand, we observe that PER, which is the rule-based sampling method, often shows worse performance than uniform random sampling (i.e., RANDOM) on these continuous control tasks, similarly as observed in (Zha et al., 2019). We suspect that this is because PER is more appropriate for Q-learning based algorithms than for policy-based learning, since TD-errors are used to update the Q network. Moreover, even though ERO is a learning-based sampling method, its performance and sampling behavior is close to that of RANDOM, due to two reasons. First, it considers the importance of each transition individually by assuming the Bernoulli distribution, which may result in sampling of redundant transitions. Second, ERO performs two-stage sampling, where the transitions are first sampled due to their individual importance, and then further randomly sampled to construct a batch. However, since too many transitions are sampled in the first stage, the second-stage random sampling is similar to random sampling from the entire experience replay.\n3Learning curves for each environment are provided in the supplementary material." }, { "heading": "3.3 ANALYSIS OF OUR FRAMEWORK", "text": "In this subsection, we first show that each component of NERS is crucial to improve sample efficiency (Figure 4). Next, we show that NERS really samples not only diverse but also meaningful transitions to update both actor and critic (Figure 5).\nContribution by each component. We analyze NERS to better understand the effect of each component. Figure 4(a) validates the contributions of our suggested techniques, where one can observe that the performance of NERS is significantly improved when using the full set of features. This implies that essential transitions for training can be sampled only by considering various aspects of the past experiences. 
Using only few features such as reward, TD-error, and timestep does not result in sampling transitions that yield high expected returns in the future. Figure 4(a) also shows the effect\nof the relative importance by comparing NERS with and without considering the global context; we found that the sample efficiency is significantly improved due to consideration of the relative importance among sampled transitions, via learning the global context. Furthermore, although we have considered standard environments where evaluations are free, if there exists an environment where the total number of evaluations is restricted, it may be hard to calculate the replay reward in Eq.(6) since cumulative rewards at each evaluation should be computed. Due to this reason, we consider a variance of NERS (NERS*) which computes the difference of cumulative rewards in not evaluations but training episodes. Figure 4(b) and Figure 4(c) show the performance of NERS* compared to NERS and other sampling methods under BipedalWalker-v3 and LunearLanderContinuous-v2, respectively. These figures show that the performance between the two types of replay rewards is not significantly different.\nAnalysis on statistics of sampled transitions. We now check if NERS samples both meaningful and diverse transitions by examining how its sampling behavior changes during the training. To this end, we plot the TD-errors and Q-values for the sampled transitions during training on BipedalWalker-v3, Ant-v3, and Walker2D-v3 under SAC in Figure 5. We can observe that NERS learns to focus on sampling transitions with high TD-errors in the early training steps, while it samples transitions with both high TD-errors and Q-values (diverse) at later training iterations. In the early training steps, the critic network for value estimation may not be well trained, rendering the excessive learning of the agent to be harmful, and thus it is reasonable that NERS selects transitions with high TD-errors to focus on updating critic networks (Figure 5(d-f)), while it focuses both on transitions with both high Q-values and TD-errors since both the critic and the actor will be reliable in the later stage (Figure 5(a-c)). Such an adaptive sampling strategy is a unique trait of NERS that contributes to its success, while other sampling methods, such as PER and ERO, cannot do so. Table 3 denotes the statistical values for sampled transitions’ TD-errors and Q-values on Pendulum-v3 under SAC at 10,000 steps (with initially 1,000 random actions). It is observable that NERS has higher standard deviation of Q-values and TD-errors than RANDOM and ERO. Although PER has the highest standard deviation of TD-errors than other sampling methods, it has the lowest standard deviation of Q-values instead. Figure 5 and Table 3 show that NERS learns to sample diverse, which means the NERS’s ability to sample transitions with different criteria, and meaningful experiences for agents." }, { "heading": "4 RELATED WORK", "text": "Off-policy algorithms. One of the well-known off-policy algorithms is deep Q-network (DQN) learning with a replay buffer (Mnih et al., 2015). There are various variants of the DQN learning, e.g., (Hasselt, 2010; Wang et al., 2015; Hessel et al., 2018). Especially, Rainbow (Hessel et al., 2018), which is one of the state-of-the-art Q-learning algorithms, was proposed by combining various techniques to extend the original DQN learning. Moreover, DQN was combined with a policy-based learning, so that various actor-critic algorithms have appeared. 
For instance, an actor-critic algorithm, which is called deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016), specialized for continuous control tasks was proposed by a combination of DPG (Silver et al., 2014) and deep Q-learning (Mnih et al., 2015). Since DDPG is easy to brittle for hyper-parameters setting, various algorithms have been proposed to overcome this issue. For instance, to reduce the the overestimation of the Q-value in DDPG, twin delayed DDPG (TD3) was proposed (Fujimoto et al., 2018), which extended DDPG by applying double Q-networks, target policy smoothing, and different frequencies\nto update a policy and Q-networks, respectively. Moreover, another actor-critic algorithm called soft actor-critic (SAC) (Haarnoja et al., 2018a;b) was developed by adding the entropy measure of an agent policy to the reward in the actor-critic algorithm to encourage the exploration of the agent.\nSampling method. Due to the ease of applying random sampling, it has been used to various off-policy algorithms until now. However, it is known that it cannot guarantee optimal results, so that a prioritized experience replay (PER) (Schaul et al., 2016) that samples transitions proportionally to the TD-error in DQN learning was proposed. As a result, it showed performance improvements in Atari environments. Applying PER is also easily applicable to various policy-based algorithms, so it is one of the most frequently used rule-based sampling methods (Hessel et al., 2018; Hou et al., 2017; Schaul et al., 2016; Wang & Ross, 2019). Furthermore, since it is reported that the newest experiences are significant for efficient Q-learning (Zhang & Sutton, 2015), PER generally imposes the maximum priority on recent transitions to sample them frequently. Based on PER, imposing weights for recent transitions was also suggested (Brittain et al., 2019) to increase priorities for them. Instead of TD-error, a different metric can be also used to PER, e.g., the expected return (Isele & Cosgun, 2018; Jaderberg et al., 2016). Meanwhile, different approaches from PER have been proposed. For instance, to update the policy in a trust region, computing the importance weight of each transitions was proposed (Novati & Koumoutsakos, 2019), so far-policy experiences were ignored when computing the gradient. Another example is backward updating of transitions from a whole episode (Lee et al., 2019) for deep Q-learning. Although the rule-based methods have shown their effectiveness on some tasks, they sometimes derive sub-optimal results on other tasks. To overcome this issue, a neural network for replay buffer sampling was adopted (Zha et al., 2019) and it showed the validness of their method on some continuous control tasks in the DDPG algorithm. However, its effectiveness is arguable in other tasks and algorithms (see Section 3), as it only considers transitions independently and regard few features as timesteps, rewards, and TD-errors (unlike ours). Recently, Fedus et al. (2020) showed that increasing replay capacity and downweighting the oldest transition in the buffer generally improves the performance of Q-learning agents on Atari tasks. How to sample prior experiences is also a crucial issue to model-based RL algorithms, e.g., Dyna Sutton (1991) which is a classical architecture. There are variants of Dyna that study strategies for search-control, to selects which states to simulate. For instance, inspired by the fact that a high-frequency space requires many samples to learn, Dyna-Value Pan et al. 
(2019) and Dyna-Frequency Pan et al. (2020) select states with high-frequency hill climbing on value function, and gradient and hessian norm of it, respectively for generating more samples by the models. In other words, how to prioritize transitions when sampling is nontrivial, and learning the optimal sampling strategy is critical for the sample-efficiency of the target off-policy algorithm." }, { "heading": "5 CONCLUSION", "text": "We proposed NERS, a neural policy network that learns how to select transitions in the replay buffer to maximize the return of the agent. It predicts the importance of each transition in relation to others in the memory, while utilizing local and global contexts from various features in the sampled transitions as inputs. We experimentally validate NERS on benchmark tasks for continuous and discrete control with various off-policy RL methods, whose results show that it significantly improves the performance of existing off-policy algorithms, with significant gains over prior rule-based and learning-based sampling policies. We further show through ablation studies that this success is indeed due to modeling relative importance with consideration of local and global contexts." }, { "heading": "A ENVIRONMENT DESCRIPTION", "text": "A.1 MUJOCO ENVIRONMENTS\nMulti-Joint Dynamics with Contact (MuJoCo) Todorov et al. (2012) is a physics engine for robot simulations supported by openAI gym4. MuJoCo environments provide a robot with multiple joints and reinforcement learning (RL) agents should control the joints (action) to achieve a given goal. The observation of each environment basically includes information about the angular velocity and position for those joints. In this paper, we consider the following environments belonging to MuJoCo.\nHopper(-v3) is a environment to control a one-legged robot. The robot receives a high return if it hops forward as soon as possible without failure.\nWalker2d(-v3) is an environment to make a two-dimensional bipedal legs to walk. Learning to quick walking without failure ensures a high return.\nAnt(-v3) is an environment to control a creature robot with four legs used to move. RL agents should to learn how to use four legs for moving forward quickly to get a high return.\nA.2 OTHER CONTINUOUS CONTROL ENVIRONMENTS\nAlthough MuJoCo environments are popular to evaluate RL algorithms, openAI gym also supports additional continuous control environments which belong to classic or Box2D simulators. We conduct experiments on the following environments among them.\nPendulum∗ is an environment which objective is to balance a pendulum in the upright position to get a high return. Each observation represents the angle and angular velocity. An action is a joint effort which range is [−2, 2]. Pendulum∗ is slightly modified from the original (Pendulum-v0) which openAI supports. The only difference from the original is that agents receive a reward 1.0 only if the rod is in sufficiently upright position (between the angle in [−π/3, π/3], where the zero angle means that the rod is in completely upright position) at least more than 20 steps.\nLunarLander(Continuous-v2) is an environment to control a lander. The objective of the lander is landing to a pad, which is located at coordinates (0, 0), with safety and coming to rest as soon as possible. There is a penalty if the lander crashes or goes out of the screen. 
An action is about parameters to control engines of the lander.\n4https://gym.openai.com/\nBipedalWalker(-v3) is an environment to control a robot. The objective is to make the robot move forward far from the initial state as far as possible. An observation is information about hull angle speed, angular velocity, vertical speed, horizontal speed, and so on. An action consists of torque or velocity control for two hips and two knees.\nTable A.1 describes the observation and action spaces and the maximum steps for each episode (horizon) in MuJoCo and other continuous control environments. Here, R and [−1, 1] denote sets of real numbers and those between 0 and 1, respectively.\nA.3 DISCRETE CONTROL ENVIRONMENT\nTo evaluate sampling methods under Rainbow Hessel et al. (2018), we consider the following Atari environments. RL agents should learn their policy by observing the RGB screen to acheive high scores for each game.\nAlien(NoFrameskip-v4) is a game where player should destroy all alien eggs in the RGB screen with escaping three aliens. The player has a weapon which paralyzes aliens.\nAmidar(NoFrameskip-v4) is a game which format is similar to MsPacman. RL agents control a monkey in a fixed rectilinear lattice to eat pellets as much as possible while avoiding chasing masters. The monkey loses one life if it contacts with monsters. The agents can go to the next stage by visiting a certain location in the screen.\nAssault(NoFrameskip-v4) is a game where RL agents control a spaceship. The spaceship is able to move on the bottom of the screen and shoot motherships which deploy smaller ships to attack the agents. The objective is to eliminate the enemies.\nAsterix(NoFrameskip-v4) is a game to control a tornado. The objective of RL agents is to eat hamburgers in the screen with avoiding dynamites.\n(a) Alien (b) Amidar (c) Assault (d) Asterix\nBattleZone(NoFrameskip-v4) is a tank combat game. This game provides a first-person perspective view. RL agents control a tank to destroy other tanks. The agent should avoid other tanks or missile attacks. It is also possible to hide from various obstacles and avoid enemy attacks.\nBoxing(NoFrameskip-v4) is a game about the sport of boxing. There are two boxers with a topdown view and RL agents should control one of them. They get one point if their punches hit from a long distance and two points if their punches hit from a close range. A match is finished after two minues or 100 punches hitted to the opponent.\nChopperCommand(NoFrameskip-v4) is a game to control a helicopter in a desert. The helicopter should destroy all enemy aircrafts and helicopters while protecting a convoy of trucks.\n(e) BattleZone (f) Boxing (g) ChopperCommand\nFreeway(NoFrameskip-v4) is a game where RL agents control chickens to run across a ten-lane highway with traffic. They are only allowed to move up or down. The objective is to get across as possible as they can until two minutes.\nFrostbite(NoFrameskip-v4) is a game to control a man who should collect ice blocks to make his igloo. The bottom two thirds of the screen consists of four rows of horizontal ice blocks. He can move from the current row to another and obtain an ice block by jumping. RL agents are required to collect 15 ice blocks while avoiding some opponents, e.g., crabs and birds.\nKungFuMaster(NoFrameskip-v4) is a game to control a fighter to save his girl friend. 
He can use two types of attacks (punch and kick) and move/crunch/jump actions.\nMsPacman(NoFrameskip-v4) is a game where RL agents control a pacman in given mazes for eatting pellets as much as possible while avoiding chasing masters. The pacman loses one life if it contacts with monsters.\n(h) Freeway (i) Frostbite (j) KungFuMaster (k) MsPacman\nPong(NoFrameskip-v4) is a game about table tennis. RL agents control an in-game paddle to hit a ball back and forth. The objective is to gain 11 points before the opponent. The agents earn each point when the opponent fails to return the ball.\nPrivateEye(NoFrameskip-v4) is a game mixing action, adventure, and memorizationm which control a private eye. To solve five cases, the private eye should find and return items to suitable places.\nQbert(NoFrameskip-v4) is a game where RL agents control a character under a pyramid made of 28 cubes. The character should change the color of all cubes while avoiding obstacles and enemies.\nRoadRunner(NoFrameskip-v4) is a game to control a roadrunner (chaparral bird). The roadrunner runs to the left on the road. RL agents should pick up bird seeds while avoiding a chasing coyote and obstacles such as cars.\nSeaquest(NoFrameskip-v4) is a game to control a submarine to rescue divers. It can also attack enemies by missiles.\n(l) Pong (m) PrivateEye (n) Qbert (o) RoadRunner (p) Seaquest" }, { "heading": "B TRAINING DETAILS", "text": "5β increases to 1.0 by the rule β = 0.4η + 1.0(1− η), where η = the current step/the maximum steps.\nTable B provides hyper-parameters which we used. We basically adopt parameters for Twin delayed DDPG (TD3) Fujimoto et al. (2018) and Soft actor critic (SAC) Haarnoja et al. (2018a;b) in openAI baselines 6. Furthermore, we adopt parameters Rainbow as in van Hasselt et al. (2019) to make data efficient Rainbow for Atari environments. In the case of continuous control environments, we train five instances of TD3 and SAC, where they perform one evaluation rollout per the maximum steps. In the case of discrete control environments, we trained five instances of Rainbow, where they perform 10 evaluation rollouts per 1000 steps. During evaluations, we collect cumulative rewards to compute the replay reward rre.\nWe follow the hyper-parameters in van Hasselt et al. (2019) for prioritized experience replay (PER). We also use the hyper-parameters for experience replay optimization (ERO) used in Zha et al. (2019). Since NERS is interpreted as an extension of PER, it basically shares hyper-parameters in PER, e.g., α and β. NERS uses various features, e.g., TD-errors and Q-values, but the newest samples have unknown Q-values and TD-errors before sampling them to update agents policy. Accordingly, we normalize Q-values and TD-errors by taking the hyperbolic tangent function and set 1.0 for the newest samples’ TD-errors and Q-values. Furthermore, notice that NERS uses both current and next states in a transition as features, so that we adopt CNN-layers in NERS for Atari environments as in van Hasselt et al. (2019). After flattening and reducing the output of the CNN-layers by FC-layers (256-64-32) , we make a vector by concatenating the reduced output with the other features. Then the vector is input of both local and global networks fl and fg . In the case of ERO, it does not use states as features, so that CNN-layers are unneccesary.\nOur objective is not to achieve maximal performance but compare sampling methods. 
Accordingly, to evaluate sampling methods on Atari environments, we conduct experiments until 100,000 steps as in van Hasselt et al. (2019) although there is room for better performance if more learning. In the case of continuous control environments, we conduct experiments until 500,000 steps." }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "Figure C.1 shows additional continuous control environments: Ant, Walker2d, and Hopper under TD3 and SAC, respectively. All tasks possess have high-dimensional observation and action spaces (see Table A.1). One can show that NERS outperforms other sampling methods at most cases. Moreover, one can observe that RANDOM and ERO have almost similar performance and PER could not show\n6https://github.com/openai/baselines\nbetter performance to policy-based RL algorithms compared to other sampling methods. Detailed learning curves of Rainbow for each environment are observable in Figure C.2.\nWe believe that in spite of the effectiveness of PER under Rainbow, the poor performance of PER under policy-based RL algorithms results from that it is specialized to update Q-newtorks, so that the actor networks cannot be efficiently trained.\nOne can observe that there are high variances in some environments. Indeed, it is known that learning more about environments in Figure C.1 and Figure C.2 improves performance of algorithms. However, our focus is not to obtain the high performance but to compare the speed of learning according to the sampling methods under the same off-policy algorithms, so we will not spend more timesteps." } ]
2021
null
SP:ca6ab92369346b3d457f575fc652333255f2dfec
[ "The paper considers the problem of slow sampling in autoregressive generative models. Sampling in such models is sequential, so its computational cost scales with the data dimensionality. Existing work speeds up autoregressive sampling by caching activations or distilling into normalizing flows with fast sampling. Authors of this work instead propose a method that returns (approximate) samples given an arbitrary computational budget, a behaviour referred to as *anytime sampling*. The proposed model is based on VQ-VAE by van den Oord et al. (2017), where an autoregressive model is fit to a latent space of a trained discrete autoencoder, rather than to raw pixels. Authors adapt the *nested dropout* idea by Rippel et al. (2014) to encourage the discrete autoencoder to order latent dimensions by their \"importance\" for reconstruction. Experiments demonstrate that the ordered latent space allows to stop the autoregressive sampling process at an arbitrary latent dimension and still obtain \"complete\" samples. The quality of samples increases as more latent dimensions are sampled, which allows to trade sample quality for reduced computational cost." ]
Autoregressive models are widely used for tasks such as image and audio generation. The sampling process of these models, however, does not allow interruptions and cannot adapt to real-time computational resources. This challenge impedes the deployment of powerful autoregressive models, which involve a slow sampling process that is sequential in nature and typically scales linearly with respect to the data dimension. To address this difficulty, we propose a new family of autoregressive models that enables anytime sampling. Inspired by Principal Component Analysis, we learn a structured representation space where dimensions are ordered based on their importance with respect to reconstruction. Using an autoregressive model in this latent space, we trade off sample quality for computational efficiency by truncating the generation process before decoding into the original data space. Experimentally, we demonstrate in several image and audio generation tasks that sample quality degrades gracefully as we reduce the computational budget for sampling. The approach suffers almost no loss in sample quality (measured by FID) using only 60% to 80% of all latent dimensions for image data. Code is available at https://github.com/Newbeeer/Anytime-Auto-Regressive-Model.
[ { "affiliations": [], "name": "ORDERED AUTOENCODING" }, { "affiliations": [], "name": "Yilun Xu" }, { "affiliations": [], "name": "Yang Song" }, { "affiliations": [], "name": "Linyuan Gong" } ]
[ { "authors": [ "X. Bao", "J. Lucas", "S. Sachdeva", "R.B. Grosse" ], "title": "Regularized linear autoencoders recover the principal components, eventually", "venue": "ArXiv, abs/2007.06731,", "year": 2020 }, { "authors": [ "Y. Bengio", "N. Léonard", "A. Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "C. Donahue", "J.J. McAuley", "M. Puckette" ], "title": "Adversarial audio synthesis", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "M. Germain", "K. Gregor", "I. Murray", "H. Larochelle" ], "title": "Made: Masked autoencoder for distribution estimation", "venue": "In ICML,", "year": 2015 }, { "authors": [ "P. Ghosh", "M.S.M. Sajjadi", "A. Vergari", "M.J. Black", "B. Schölkopf" ], "title": "From variational to deterministic autoencoders", "venue": null, "year": 1903 }, { "authors": [ "P. Guo", "X. Ni", "X. Chen", "X. Ji" ], "title": "Fast PixelCNN: Based on network acceleration cache and partial generation", "venue": "International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS),", "year": 2017 }, { "authors": [ "M. Heusel", "H. Ramsauer", "T. Unterthiner", "B. Nessler", "S. Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "J. Ho", "A. Jain", "P. Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "ArXiv, abs/2006.11239,", "year": 2020 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "N. Kalchbrenner", "L. Espeholt", "K. Simonyan", "Oord", "A. v. d", "A. Graves", "K. Kavukcuoglu" ], "title": "Neural machine translation in linear time", "venue": "arXiv preprint arXiv:1610.10099,", "year": 2016 }, { "authors": [ "N. Kalchbrenner", "A. van den Oord", "K. Simonyan", "I. Danihelka", "O. Vinyals", "A. Graves", "K. Kavukcuoglu" ], "title": "Video pixel networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "D. Kunin", "J.M. Bloom", "A. Goeva", "C. Seed" ], "title": "Loss landscapes of regularized linear autoencoders", "venue": "ArXiv, abs/1901.08168,", "year": 2019 }, { "authors": [ "Z. Liu", "P. Luo", "X. Wang", "X. Tang" ], "title": "Deep learning face attributes in the wild", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "J. Menick", "N. Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "arXiv preprint arXiv:1812.01608,", "year": 2018 }, { "authors": [ "Oord", "A. v. d", "S. Dieleman", "H. Zen", "K. Simonyan", "O. Vinyals", "A. Graves", "N. Kalchbrenner", "A. Senior", "K. Kavukcuoglu" ], "title": "WaveNet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Oord", "A. v. d", "N. Kalchbrenner", "K. 
Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "A. Radford", "J. Wu", "R. Child", "D. Luan", "D. Amodei", "I. Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "P. Ramachandran", "T.L. Paine", "P. Khorrami", "M. Babaeizadeh", "S. Chang", "Y. Zhang", "M.A. Hasegawa-Johnson", "R.H. Campbell", "T.S. Huang" ], "title": "Fast generation for convolutional autoregressive models", "venue": null, "year": 2017 }, { "authors": [ "A. Razavi", "A. van den Oord", "O. Vinyals" ], "title": "Generating diverse high-fidelity images with VQ-VAE-2", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "O. Rippel", "M.A. Gelbart", "R.P. Adams" ], "title": "Learning ordered representations with nested dropout", "venue": "In ICML,", "year": 2014 }, { "authors": [ "T. Salimans", "A. Karpathy", "X. Chen", "D.P. Kingma" ], "title": "PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "M. Scholz", "R. Vigário" ], "title": "Nonlinear PCA: a new hierarchical approach", "venue": "In ESANN,", "year": 2002 }, { "authors": [ "Y. Song", "S. Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "ArXiv, abs/1907.05600,", "year": 2019 }, { "authors": [ "C. Szegedy", "V. Vanhoucke", "S. Ioffe", "J. Shlens", "Z. Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "A. van den Oord", "N. Kalchbrenner", "L. Espeholt", "K. Kavukcuoglu", "O. Vinyals", "A. Graves" ], "title": "Conditional image generation with PixelCNN decoders", "venue": null, "year": 2016 }, { "authors": [ "A. van den Oord", "O. Vinyals", "K. Kavukcuoglu" ], "title": "Neural discrete representation learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "A. van den Oord", "Y. Li", "I. Babuschkin", "K. Simonyan", "O. Vinyals", "K. Kavukcuoglu", "G. van den Driessche", "E. Lockhart", "L.C. Cobo", "F. Stimberg", "N. Casagrande", "D. Grewe", "S. Noury", "S. Dieleman", "E. Elsen", "N. Kalchbrenner", "H. Zen", "A. Graves", "H. King", "T. Walters", "D. Belov", "D. Hassabis" ], "title": "Parallel WaveNet: Fast high-fidelity speech", "venue": "synthesis. ArXiv,", "year": 2018 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "Ł. Kaiser", "I. Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "C. Veaux", "J. Yamagishi", "K. Macdonald" ], "title": "Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning", "venue": null, "year": 2017 }, { "authors": [ "Donahue" ], "title": "The decoders for dataset above are the counterpart of the corresponding encoders. For VCTK dataset, we use a encoder that has 5 convolutional layers with a filter size of 25 and stride of 4. The activation functions are chosen to be LeakyRelu-0.2. We adopt a decoder architecture which has 4 convolutional and upsampling layers. The architecture is the same with the generator architecture", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Autoregressive models are a prominent approach to data generation, and have been widely used to produce high quality samples of images (Oord et al., 2016b; Salimans et al., 2017; Menick & Kalchbrenner, 2018), audio (Oord et al., 2016a), video (Kalchbrenner et al., 2017) and text (Kalchbrenner et al., 2016; Radford et al., 2019). These models represent a joint distribution as a product of (simpler) conditionals, and sampling requires iterating over all these conditional distributions in a certain order. Due to the sequential nature of this process, the computational cost will grow at least linearly with respect to the number of conditional distributions, which is typically equal to the data dimension. As a result, the sampling process of autoregressive models can be slow and does not allow interruptions.\nAlthough caching techniques have been developed to speed up generation (Ramachandran et al., 2017; Guo et al., 2017), the high cost of sampling limits their applicability in many scenarios. For example, when running on multiple devices with different computational resources, we may wish to trade off sample quality for faster generation based on the computing power available on each device. Currently, a separate model must be trained for each device (i.e., computational budget) in order to\ntrade off sample quality for faster generation, and there is no way to control this trade-off on the fly to accommodate instantaneous resource availability at time-of-deployment.\nTo address this difficulty, we consider the novel task of adaptive autoregressive generation under computational constraints. We seek to build a single model that can automatically trade-off sample quality versus computational cost via anytime sampling, i.e., where the sampling process may be interrupted anytime (e.g., because of exhausted computational budget) to yield a complete sample whose sample quality decays with the earliness of termination.\nIn particular, we take advantage of a generalization of Principal Components Analysis (PCA) proposed by Rippel et al. (2014), which learns an ordered representations induced by a structured application of dropout to the representations learned by an autoencoder. Such a representation encodes raw data into a latent space where dimensions are sorted based on their importance for reconstruction. Autoregressive modeling is then applied in the ordered representation space instead. This approach enables a natural trade-off between quality and computation by truncating the length of the representations: When running on devices with high computational capacity, we can afford to generate the full representation and decode it to obtain a high quality sample; when on a tighter computational budget, we can generate only the first few dimensions of the representation and decode it to a sample whose quality degrades smoothly with truncation. Because decoding is usually fast and the main computation bottleneck lies on the autoregressive part, the run-time grows proportionally relative to the number of sampled latent dimensions.\nThrough experiments, we show that our autoregressive models are capable of trading off sample quality and inference speed. When training autoregressive models on the latent space given by our encoder, we witness little degradation of image sample quality using only around 60% to 80% of all latent codes, as measured by Fréchet Inception Distance (Heusel et al., 2017) on CIFAR-10 and CelebA. 
Compared to standard autoregressive models, our approach allows the sample quality to degrade gracefully as we reduce the computational budget for sampling. We also observe that on the VCTK audio dataset (Veaux et al., 2017), our autoregressive model is able to generate the low frequency features first, then gradually refine the waveforms with higher frequency components as we increase the number of sampled latent dimensions." }, { "heading": "2 BACKGROUND", "text": "Autoregressive Models Autoregressive models define a probability distribution over data points x ∈ RD by factorizing the joint probability distribution as a product of univariate conditional distributions with the chain rule. Using pθ to denote the distribution of the model, we have:\npθ(x) = D∏ i=1 pθ(xi | x1, · · · , xi−1) (1)\nThe model is trained by maximizing the likelihood:\nL = Epd(x)[log pθ(x)], (2)\nwhere pd(x) represents the data distribution.\nDifferent autoregressive models adopt different orderings of input dimensions and parameterize the conditional probability pθ(xi | x1, · · · , xi−1), i = 1, · · · , D in different ways. Most architectures over images order the variables x1, · · · , xD of image x in raster scan order (i.e., left-toright then top-to-bottom). Popular autoregressive architectures include MADE (Germain et al., 2015), PixelCNN (Oord et al., 2016b; van den Oord et al., 2016; Salimans et al., 2017) and Transformer (Vaswani et al., 2017), where they respectively use masked linear layers, convolutional layers and self-attention blocks to ensure that the output corresponding to pθ(xi | x1, · · · , xi−1) is oblivious of xi, xi+1, · · · , xD.\nCost of Sampling During training, we can evaluate autoregressive models efficiently because x1, · · · , xD are provided by data and all conditionals p(xi | x1, · · · , xi−1) can be computed in parallel. In contrast, sampling from autoregressive models is an inherently sequential process and cannot be easily accelerated by parallel computing: we first need to sample x1, after which we sample x2 from pθ(x2 | x1) and so on—the i-th variable xi can only be obtained after we have already computed x1, · · · , xi−1. Thus, the run-time of autoregressive generation grows at least linearly with respect to the length of a sample. In practice, the sample length D can be more than hundreds of\nthousands for real-world image and audio data. This poses a major challenge to fast autoregressive generation on a small computing budget." }, { "heading": "3 ANYTIME SAMPLING WITH ORDERED AUTOENCODERS", "text": "Our goal is to circumvent the non-interruption and linear time complexity of autoregressive models by pushing the task of autoregressive modeling from the original data space (e.g., pixel space) into an ordered representation space. In doing so, we develop a new class of autoregressive models where premature truncation of the autoregressive sampling process leads to the generation of a lower quality sample instead of an incomplete sample. In this section, we shall first describe the learning of the ordered representation space via the use of an ordered autoencoder. We then describe how to achieve anytime sampling with ordered autoencoders." }, { "heading": "3.1 ORDERED AUTOENCODERS", "text": "Consider an autoencoder that encodes an input x ∈ RD to a code z ∈ RK . Let z = eθ(x) : RD → RK be the encoder parameterized by θ and x′ = dφ(z) : RK → RD be the decoder parameterized by φ. 
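Before specializing to the ordered case, the linear-time cost discussed under "Cost of Sampling" can be made explicit with a short sketch of the sampling loop. The causal `model` interface and the start token are assumptions for illustration; the point is only that step i must wait for steps 1 through i - 1.

```python
import torch

@torch.no_grad()
def autoregressive_sample(model, length, start_token=0):
    # `model` maps an integer prefix of shape (1, t) to per-position logits of shape (1, t, C).
    seq = torch.full((1, 1), start_token, dtype=torch.long)
    for _ in range(length):                            # D sequential steps, not parallelizable
        logits = model(seq)                            # causal model: last position predicts x_i
        probs = torch.softmax(logits[:, -1], dim=-1)   # p(x_i | x_1, ..., x_{i-1})
        nxt = torch.multinomial(probs, num_samples=1)
        seq = torch.cat([seq, nxt], dim=1)             # x_i is appended before x_{i+1} can be drawn
    return seq[:, 1:]                                  # drop the start token
```

The ordered autoencoder defined in this section lets the same loop run over a much shorter latent sequence and be stopped early.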
We define eθ(·)≤i : x ∈ RD 7→ (z1, z2, · · · , zi, 0, · · · , 0)T ∈ RK , which truncates the representation to the first i dimensions of the encoding z = eθ(x), masking out the remainder of the dimensions with a zero value. We define the ordered autoencoder objective as\n1\nN N∑ i=1 1 K K∑ j=1 ‖xi − dφ(eθ(xi)≤j)‖22 . (3)\nWe note that Eq. (3) is equivalent to Rippel et al. (2014)’s nested dropout formulation using a uniform sampling of possible truncations. Moreover, when the encoder/decoder pair is constrained to be a pair of orthogonal matrices up to a transpose, then the optimal solution in Eq. (3) recovers PCA." }, { "heading": "3.1.1 THEORETICAL ANALYSIS", "text": "Rippel et al. (2014)’s analysis of the ordered autoencoder is limited to linear/sigmoid encoder and a linear decoder. In this section, we extend the analysis to general autoencoder architectures by employing an information-theoretic framework to analyze the importance of the i-th latent code to reconstruction for ordered autoencoders. We first reframe our problem from a probabilistic perspective. In lieu of using deterministic autoencoders, we assume that both the encoder and decoder are stochastic functions. In particular, we let qeθ (z | x) be a probability distribution over z ∈ RK conditioned on input x, and similarly let pdφ(x | z) be the stochastic counterpart to dφ(z). We then use qeθ (z | x)≤i to denote the distribution of (z1, z2, · · · , zi, 0, · · · , 0)T ∈ RK , where z ∼ qeθ (z | x), and let pdφ(x | z)≤i represent the distribution of pdφ(x | (z1, z2, · · · , zi, 0, · · · , 0)T ∈ RK). We can modify Eq. (3) to have the following form:\nEx∼pd(x),i∼U{1,K}Ez∼qeθ (z|x)≤i [− log pdφ(x|z)≤i], (4)\nwhere U{1,K} denotes a uniform distribution over {1, 2, · · · ,K}, and pd(x) represents the data distribution. We can choose both the encoder and decoder to be fully factorized Gaussian distributions with a fixed variance σ2, then Eq. (13) can be simplified to\nEpd(x) [ 1\nK K∑ i=1 Ez∼N (eθ(x)≤i;σ2) [ 1 2σ2 ‖x− dφ(z)≤i‖22 ]] .\nThe stochastic encoder and decoder in this case will become deterministic when σ → 0, and the above equation will yield the same encoder/decoder pair as Eq. (3) when σ → 0 and N →∞. The optimal encoders and decoders that minimize Eq. (13) satisfy the following property. Theorem 1. Let x denote the input random variable. Assuming both the encoder and decoder are optimal in terms of minimizing Eq. (13), and ∀i ∈ 3, · · · ,K, zi−1 ⊥ zi | x, z≤i−2, we have\n∀i ∈ {3, · · · ,K} : I(zi;x|z≤i−1) ≤ I(zi−1;x|z≤i−2),\nwhere z≤i denotes (z1, z2, · · · , zi).\nWe defer the proof to Appendix A.1. The assumption zi−1 ⊥ zi | x, z≤i−2 holds whenever the encoder qθ(z | x) is a factorized distribution, which is a common choice in variational autoencoders (Kingma & Welling, 2013), and we use I(a;b | c) to denote the mutual information between random variables a and b conditioned on c. Intuitively, the above theorem states that for optimal encoders and decoders that minimize Eq. (13), one can extract less additional information about the raw input as the code gets longer. Therefore, there exists a natural ordering among different dimensions of the code based on the additional information they can provide for reconstructing the inputs." }, { "heading": "3.2 ANYTIME SAMPLING", "text": "Once we have learned an ordered autoencoder, we then train an autoregressive model on the full length codes in the ordered representation space, also referred to as ex-post density estimation (Ghosh et al., 2020). 
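To make the order-inducing objective of Eq. (3) concrete before moving on, the following is a minimal PyTorch-style sketch of one training step, using a single uniformly sampled truncation index as a Monte Carlo estimate of the average over truncation lengths. The encoder and decoder modules, the latent shape (batch, K), and the function name are illustrative assumptions rather than the exact architecture used in the experiments.

```python
import torch

def ordered_ae_step(encoder, decoder, x, K):
    """One stochastic training step of the ordered autoencoder objective (Eq. 3).

    Rather than summing the reconstruction loss over all K truncation lengths,
    a single index j ~ U{1, ..., K} is sampled, giving an unbiased Monte Carlo
    estimate of the inner average.
    """
    z = encoder(x)                             # assumed shape: (batch, K)
    j = torch.randint(1, K + 1, (1,)).item()   # uniform truncation length
    mask = torch.zeros_like(z)
    mask[:, :j] = 1.0                          # keep dimensions 1..j, zero out the rest
    x_hat = decoder(z * mask)                  # decode the truncated, zero-padded code
    return ((x - x_hat) ** 2).flatten(1).sum(dim=1).mean()
```

Averaged over training iterations this estimator recovers Eq. (3), while sampling one truncation length per batch keeps the cost at a single forward and backward pass.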
For each input xi in a dataset x1, · · · ,xN , we feed it to the encoder to get zi = eθ(xi). The resulting codes z1, z2, · · · , zN are used as training data. After training both the ordered autoencoder and autoregressive model, we can perform anytime sampling on a large spectrum of computing budgets. Suppose for example we can afford to generate T code dimensions from the autoregressive model, denoted as z≤T ∈ RT . We can simply zero-pad it to get (z1, z2, · · · , zT , 0, · · · , 0)T ∈ RK and decode it to get a complete sample. Unlike the autoregressive part, the decoder has access to all dimensions of the latent code at the same time and can decode in parallel. The framework is shown in Fig. 1(b). For the implementation, we use the ordered VQ-VAE (Section 4) as the ordered autoencoder and the Transformer (Vaswani et al., 2017) as the autoregressive model. On modern GPUs, the code length has minimal effect on the run-time of decoding, as long as the decoder is not itself autoregressive (see empirical verifications in Section 5.2.2)." }, { "heading": "4 ORDERED VQ-VAE", "text": "In this section, we apply the ordered autoencoder framework to the vector quantized variational autoencoder (VQ-VAE) and its extension (van den Oord et al., 2017; Razavi et al., 2019). Since these models are quantized autoencoders paired with a latent autoregressive model, they admit a natural extension to ordered VQ-VAEs (OVQ-VAEs) under our framework—a new family of VQVAE models capable of anytime sampling. Below, we begin by describing the VQ-VAE, and then highlight two key design choices (ordered discrete codes and channel-wise quantization) critical for OVQ-VAEs. We show that, with small changes of the original VQ-VAE, these two choices can be applied straightforwardly." }, { "heading": "4.1 VQ-VAE", "text": "To construct a VQ-VAE with code length of K discrete latent variables , the encoder must first map the raw input x to a continuous representation ze = eθ(x) ∈ RK×D, before feeding it to a vector-valued quantization function q : RK×D → {1, 2, · · · , C}K defined as\nq(ze)j = arg min i∈{1,··· ,C}\n∥∥ei − zej∥∥2 ,\nwhere q(ze)j ∈ {1, 2, · · · , C} denotes the j-th component of the vector-valued function q(ze), zej ∈ RD denotes the j-th row of ze, and ei denotes the i-th row of the embedding matrix E ∈ EC×D. Next, we view q(ze) as a sequence of indices and use them to look up embedding vectors from the codebook E. This yields a latent representation zd ∈ RK×D, given by zdj = eq(ze)j , where zdj ∈ RD denotes the j-th row of zd. Finally, we can decode zd to obtain the reconstruction dφ(zd). This procedure can be viewed as a regular autoencoder with a non-differentiable nonlinear function that maps each latent vector zej to 1-of-K embedding vectors ei.\nDuring training, we use the straight-through gradient estimator (Bengio et al., 2013) to propagate gradients through the quantization function, i.e., gradients are directly copied from the decoder input zd to the encoder output ze. The loss function for training on a single data point x is given by∥∥dφ(zd)− x∥∥22 + ∥∥sg[eθ(x)]− zd∥∥2F + β ∥∥eθ(x)− sg[zd]∥∥2F , (5) where sg stands for the stop_gradient operator, which is defined as identity function at forward computation and has zero partial derivatives at backward propagation. β is a hyper-parameter ranging from 0.1 to 2.0. The first term of Eq. (5) is the standard reconstruction loss, the second term is for embedding learning while the third term is for training stability (van den Oord et al., 2017). 
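As a concrete illustration of the quantization step and the training loss in Eq. (5), the sketch below implements nearest-neighbour vector quantization with the straight-through estimator. The tensor shapes, the codebook layout, and the default β = 0.25 are illustrative choices (the text above only constrains β to lie between 0.1 and 2.0); this is a sketch, not the exact implementation.

```python
import torch

def vq_vae_loss(encoder, decoder, codebook, x, beta=0.25):
    """VQ-VAE loss of Eq. (5) with a straight-through gradient estimator.

    codebook: (C, D) matrix of embedding vectors e_1, ..., e_C.
    encoder(x) is assumed to return z_e of shape (batch, K, D).
    """
    z_e = encoder(x)                                               # (batch, K, D)
    dists = ((z_e.unsqueeze(2) - codebook[None, None]) ** 2).sum(-1)  # (batch, K, C)
    indices = dists.argmin(dim=-1)                                 # q(z_e), shape (batch, K)
    z_d = codebook[indices]                                        # embedded codes (batch, K, D)
    z_st = z_e + (z_d - z_e).detach()                              # copy gradients z_d -> z_e
    x_hat = decoder(z_st)
    recon = ((x_hat - x) ** 2).flatten(1).sum(1).mean()            # reconstruction term
    embed = ((z_e.detach() - z_d) ** 2).flatten(1).sum(1).mean()   # codebook term (sg on encoder)
    commit = ((z_e - z_d.detach()) ** 2).flatten(1).sum(1).mean()  # commitment term (sg on codes)
    return recon + embed + beta * commit, indices
```

The three returned loss terms correspond one-to-one to the three terms of Eq. (5), and the detach calls play the role of the stop_gradient operator.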
Samples from a VQ-VAE can be produced by first training an autoregressive model on its latent space, followed by decoding samples from the autoregressive model into the raw data space." }, { "heading": "4.2 ORDERED DISCRETE CODES", "text": "Since the VQ-VAE outputs a sequence of discrete latent codes q(ze), we wish to impose an ordering that prioritizes the code dimensions based on importance to reconstruction. In analogy to Eq. (3), we can modify the reconstruction error term in Eq. (5) to learn ordered latent representations. The modified loss function is an order-inducing objective given by\n1\nK K∑ i=1 [ ∥∥dφ(zd≤i)− x∥∥22 + ∥∥sg[eθ(x)≤i]− zd≤i∥∥2F + β ∥∥eθ(x)≤i − sg[zd≤i]∥∥2F ], (6) where eθ(x)≤i and zd≤i denote the results of keeping the top i rows of eθ(x) and z\nd and then masking out the remainder rows with zero vectors. We uniformly sample the masking index i ∼ U{1,K} to approximate the average in Eq. (6) when K is large. An alternative sampling distribution is the geometric distribution (Rippel et al., 2014). In Appendix C.3, we show that OVQ-VAE is sensitive to the choice of the parameter in geometric distribution. Because of the difficulty, Rippel et al. (2014) applies additional tricks. This results in the learning of ordered discrete latent variables, which can then be paired with a latent autoregressive model for anytime sampling." }, { "heading": "4.3 CHANNEL-WISE QUANTIZATION", "text": "In Section 4.1, we assume the encoder output to be a K ×D matrix (i.e., ze ∈ RK×D). In practice, the output can have various sizes depending on the encoder network, and we need to reshape it to a two-dimensional matrix. For example, when encoding images, the encoder is typically a 2D convolutional neural network (CNN) whose output is a 3D latent feature map of size L×H ×W . Here L, H , and W stand for the channel, height, and width of the feature maps. We discuss below how the reshaping procedure can significantly impact the performance of anytime sampling and propose a reshaping procedure that facilitates high-performance anytime sampling.\nConsider convolutional encoders on image data, where the output feature map has a size ofL×H×W .\nThe most common way of reshaping this 3D feature map, as in van den Oord et al. (2017), is to let H ×W be the code length, and let the number of channels L be the size of embedding vectors, i.e., K = H ×W and D = L. We call this pattern spatial-wise quantization, as each spatial location in the feature map corresponds to one code dimension and will be quantized separately. Since the code dimensions\ncorrespond to spatial locations of the feature map, they encode local features due to a limited receptive field. This is detrimental to anytime sampling, because early dimensions cannot capture the global information needed for reconstructing the entire image. We demonstrate this in Fig. 2, which shows that OVQ-VAE with spatial-wise quantization is only able to reconstruct the top rows of an image with 1/4 of the code length.\nTo address this issue, we propose channel-wise quantization, where each channel of the feature map is viewed as one code dimension and quantized separately (see Fig. 1(a) for visual comparison of spatial-wise and channel-wise quantization). Specifically, the code length is L (i.e., K = L), and the size of the embedding vectors is H ×W (i.e., D = H ×W ). In this case, one code dimension includes all spatial locations in the feature map and can capture global information better. As shown in the right panel of Fig. 
2, channel-wise quantization clearly outperforms spatial-wise quantization for anytime sampling. We use channel-wise quantization in all subsequent experiments. Note that in practice we can easily apply the channel-wise quantization on VQ-VAE by changing the code length from H ×W to L, as shown in Fig. 1(a)." }, { "heading": "5 EXPERIMENTS", "text": "In our experiments, we focus on anytime sampling for autoregressive models trained on the latent space of OVQ-VAEs, as shown in Fig. 1(b). We first verify that our learning objectives in Eq. (3), Eq. (6) are effective at inducing ordered representations. Next, we demonstrate that our OVQ-VAE models achieve comparable sample quality to regular VQ-VAEs on several image and audio datasets, while additionally allowing a graceful trade-off between sample quality and computation time via anytime sampling. Due to limits on space, we defer the results of audio generation to Appendix B and provide additional experimental details, results and code links in Appendix E and G." }, { "heading": "5.1 ORDERED VERSUS UNORDERED CODES", "text": "Our proposed ordered autoencoder framework learns an ordered encoding that is in contrast to\nthe encoding learned by a standard autoencoder (which we shall refer to as unordered). In addition to the theoretical analysis in Section 3.1.1, we provide further empirical analysis to characterize the difference between ordered and unordered codes. In particular, we compare the importance of the i-th code—as measured by the reduction in reconstruction error ∆(i)—for PCA, standard (unordered) VQ-VAE, and ordered VQ-VAE. For VQ-VAE and ordered VQ-VAE, we define the reduction in reconstruction error ∆(i) for a data point x as\n∆x(i) ,\n∥∥dφ(zd≤i−1)− x∥∥2F − ∥∥dφ(zd≤i)− x∥∥2F (7)\nAveraging ∆x(i) over the entire dataset thus yields ∆(i). Similarly we define ∆(i) as the reduction on reconstruction error of the entire dataset, when adding the i-th principal component for PCA.\nFig. 3(a) shows the ∆(i)’s of the three models on the CIFAR-10. Since PCA and ordered VQ-VAE both learn an ordered encoding, their ∆(i)’s decay gradually as i increases. In contrast, the standard VQ-VAE with an unordered encoding exhibits a highly irregular ∆(i), indicating no meaningful ordering of the dimensions.\nFig. 3(b) further shows how the reconstruction error decreases as a function of the truncated code length for the three models. Although unordered VQ-VAE and ordered VQ-VAE achieve similar reconstruction errors for sufficiently large code lengths, it is evident that an ordered encoding achieves significantly better reconstructions when the code length is aggressively truncated. When sufficiently truncated, we observe even PCA outperforms unordered VQ-VAE despite the latter being a more expressive model. In contrast, ordered VQ-VAE achieves superior reconstructions compared to PCA and unordered VQ-VAE across all truncation lengths.\nWe repeat the experiment on standard VAE to disentangle the specific implementations in VQ-VAE. We observe that the order-inducing objective has consistent results on standard VAE (Appendix C.1)." }, { "heading": "5.2 IMAGE GENERATION", "text": "We test the performance of anytime sampling using OVQ-VAEs (Anytime + ordered) on several image datasets. We compare our approach to two baselines. One is the original VQ-VAE model proposed by van den Oord et al. (2017) without anytime sampling. 
The other is using anytime sampling with unordered VQ-VAEs (Anytime + unordered), where the models have the same architectures as ours but are trained by minimizing Eq. (5). We empirically verify that 1) we are able to generate high quality image samples; 2) image quality degrades gracefully as we reduce the sampled code length for anytime sampling; and 3) anytime sampling improves the inference speed compared to naïve sampling of original VQ-VAEs.\nWe evaluate the model performance on the MNIST, CIFAR-10 (Krizhevsky, 2009) and CelebA (Liu et al., 2014) datasets. For CelebA, the images are resized to 64× 64. All pixel values are scaled to the range [0, 1]. We borrow the model architectures and optimizers from van den Oord et al. (2017). The full code length and the codebook size are 16 and 126 for MNIST, 70 and 1000 for CIFAR-10, and 100 and 500 for CelebA respectively. We train a Transformer (Vaswani et al., 2017) on our VQ-VAEs, as opposed to the PixelCNN model used in van den Oord et al. (2017). PixelCNNs use standard 2D convolutional layers to capture a bounded receptive field and model the conditional dependence. Transformers apply attention mechanism and feed forward network to model the conditional dependence of 1D sequence. Transformers are arguably more suitable for channel-wise quantization, since there are no 2D spatial relations among different code dimensions that can be leveraged by convolutional models (such as PixelCNNs). Our experiments on different autoregressive models in Appendix C.2 further support the arguments." }, { "heading": "5.2.1 IMAGE QUALITY", "text": "In Fig. 4(a), we report FID (Heusel et al., 2017) scores (lower is better) on CIFAR-10 and CelebA when performing anytime sampling for ordered versus unordered VQ-VAE. FID (Fréchet Inception Distance) score is the Fréchet distance between two multivariate Gaussians, whose means and covariances are estimated from the 2048-dimensional activations of the Inception-v3 (Szegedy et al., 2016) network for real and generated samples respectively. As a reference, we also report the FID scores when using the original VQ-VAE model (with residual blocks and spatial-wise quantization) sampled at the full code length (van den Oord et al., 2017). Our main finding is that OVQ-VAE achieves a better FID score than unordered VQ-VAE at all fractional code lengths (ranging from 20% to 100%); in other words, OVQ-VAE achieves strictly superior anytime sampling performance compared to unordered VQ-VAE on both CIFAR-10 and CelebA. On CIFAR-10 dataset, a better FID score is achieved by OVQ-VAE even when sampling full codes. In Appendix C.4, we show that the regularization effect of the ordered codes causes this phenomenon.\nIn Fig. 5 (more in Appendix G.1), we visualize the sample quality degradation as a function of fractional code length when sampling from the OVQ-VAE. We observe a consistent increase in sample quality as we increased the fractional code length. In particular, we observe the model to initially generate a global structure of an image and then gradually fill in local details. We further show in Appendix F that, samples sharing the highest priority latent code have similar global structure.\nAlthough our method was inspired by PCA, we encountered limited success when training an autoregressive model on the PCA-represented data. Please refer to Appendix D for more details." }, { "heading": "5.2.2 INFERENCE SPEED", "text": "We compare the inference speed of our approach vs. 
the original VQ-VAE model by the wall-clock time needed for sampling. We also include the decoding time in our approach. We respectively measure the time of generating 50000 and 100000 images on CIFAR-10 and CelebA datasets, with a batch size of 100. All samples are produced on a single NVIDIA TITAN Xp GPU.\nFig. 4(b) shows that the time needed for anytime sampling increases almost linearly with respect to the sampled code length. This supports our argument in Section 3.2 that the decoding time is negligible compared to the autoregressive component. Indeed, the decoder took around 24 seconds to generate all samples for CelebA, whereas the sampling time of the autoregressive model was around 610 seconds—over an order of magnitude larger. Moreover, since we can achieve roughly the highest sample quality with only 60% of the full code length on CelebA, anytime sampling can save around 40% run-time compared to naïve sampling without hurting sample quality.\nIn addition, our method is faster than the original VQ-VAE even when sampling the full code length, without compromising sample quality (cf ., Section 5.2.1). This is because the Transformer model we used is sufficiently shallower than the Gated PixelCNN (van den Oord et al., 2016) model in the original VQ-VAE paper. Compared to PixelCNN++ (Salimans et al., 2017), an autoregressive model on the raw pixel space, the sampling speed of our method can be an order of magnitude faster since our autoregressive models are trained on the latent space with much lower dimensionality." }, { "heading": "6 RELATED WORK", "text": "Prior work has tackled the issue of slow autoregressive generation by improving implementations of the generation algorithm. For example, the sampling speed of convolutional autoregressive models can be improved substantially by caching hidden state computation (Ramachandran et al., 2017). While such approaches provide substantial speedups in generation time, they are still at best linear in the dimension of the sample space. van den Oord et al. (2018) improves the inference speed by allowing parallel computing. Compared to our approach, they do not have the test-time adaptivity to computational constraints. In contrast, we design methods that allow trade-offs between generation speed and sample quality on-the-fly based on computational constraints. For example, running apps can accommodate to the real-time computational resources without model re-training. In addition, they can be combined together with our method without sacrificing sample quality. Specifically, Ramachandran et al. (2017) leverages caches to speed up autoregressive sampling, which can be directly applied to our autoregressive model on ordered codes without affecting sample quality. van den Oord et al. (2018) proposes probability density distillation to distill autoregressive models\ninto fast implicit generators. We can apply the same technique on our latent autoregressive model to allow a similar speedup.\nIn order to enable anytime sampling, our method requires learning an ordered latent representation of data by training ordered autoencoders. Rippel et al. (2014) proposes a generalization of Principal Components Analysis to learn an ordered representation. Instead of the uniform distribution over discrete codes in our method, they adopted a geometric distribution over continuous codes during the training of the ordered autoencoders. 
Because of the difference they require additional tricks such as unit sweeping and adaptive regularization coefficients to stabilize the training, while our method is more stable and scalable. In addition, they only focus on fast retrieval and image compression. By contrast, we further extend our approach to autoencoders with discrete latent codes (e.g., VQ-VAEs) and explore their applications in anytime sampling for autoregressive models. Another work related to our approach is hierarchical nonlinear PCA (Scholz & Vigário, 2002). We generalize their approach to latent spaces of arbitrary dimensionality, and leverage Monte Carlo estimations to improve the efficiency when learning very high dimensional latent representations. The denoising generative models proposed by Song & Ermon (2019); Ho et al. (2020) progressively denoise images into better quality, instead of modeling images from coarse to fine like our methods. This means that interrupting the sampling procedure of diffusion models at an early time might lead to very noisy samples, but in our case it will lead to images with corrector coarse structures and no noise which is arguably more desirable.\nA line of works draw connections between ordered latent codes and the linear autoencoders. Kunin et al. (2019) proves that the principal directions can be deduced from the critical points of L2 regularized linear autoencoders, and Bao et al. (2020) further shows that linear autoencoders can directly learn the ordered, axis-aligned principal components with non-uniform L2 regularization." }, { "heading": "7 CONCLUSION", "text": "Sampling from autoregressive models is an expensive sequential process that can be intractable when on a tight computing budget. To address this difficulty, we consider the novel task of adaptive autoregressive sampling that can naturally trade-off computation with sample quality. Inspired by PCA, we adopt ordered autoencoders, whose latent codes are prioritized based on their importance to reconstruction. We show that it is possible to do anytime sampling for autoregressive models trained on these ordered latent codes—we may stop the sequential sampling process at any step and still obtain a complete sample of reasonable quality by decoding the partial codes.\nWith both theoretical arguments and empirical evidence, we show that ordered autoencoders can induce a valid ordering that facilitates anytime sampling. Experimentally, we test our approach on several image and audio datasets by pairing an ordered VQ-VAE (a powerful autoencoder architecture) and a Transformer (an expressive autoregressive model) on the latent space. We demonstrate that our samples suffer almost no loss of quality (as measured by FID scores) for images when using only 60% to 80% of all code dimensions, and the sample quality degrades gracefully as we gradually reduce the code length." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We are grateful to Shengjia Zhao, Tommi Jaakkola and anonymous reviewers in ICLR for helpful discussion. We would like to thank Tianying Wen for the lovely figure. YX was supported by the HDTV Grand Alliance Fellowship. SE was supported by NSF (#1651565, #1522054, #1733686), ONR (N00014- 19-1-2145), AFOSR (FA9550-19-1-0024), Amazon AWS, and FLI. YS was partially supported by the Apple PhD Fellowship in AI/ML. This work was done in part while AG was a Research Fellow at the Simons Institute for the Theory of Computing." }, { "heading": "A PROOFS", "text": "A.1 PROOF FOR THEOREM 1\nProof. 
For simplicity we denote the distribution of the stochastic part (first i dimensions) of qeθ (z|x)≤i as qeθ (z≤i|x), and similarly we denote pdφ(x|z)≤i as pdφ(x|z≤i). We first reformulate the objective\nL = −Ex∼pd(x),i∼U{1,K}Ez∼qeθ (z|x)≤i [log pdφ(x|z)≤i]\n= −1 K K∑ i=1 Ex∼pd(x)Ez∼qeθ (z|x)≤i [log pdφ(x|z)≤i]\n= −1 K K∑ i=1 ∫ pd(x)qeθ (z≤i|x) log pdφ(x|z≤i)dxdz≤i\n≥ −1 K K∑ i=1 ∫ pd(x)qeθ (z≤i|x) log qeθ (x|z≤i)dxdz≤i (8)\nThe inequality (8) holds because KL-divergences are non-negative. Under the assumption of an optimal decoder φ, we can achieve the equality in (8), in which case the objective equals\nL = −1 K K∑ i=1 ∫ qeθ (z≤i,x) log qeθ (x, z≤i) qeθ (z≤i) dxdz≤i\nWe can define the following modified objective by adding the data entropy term, since it is a constant independent of θ.\nL = −1 K K∑ i=1 ∫ qeθ (z≤i,x) log qeθ (x, z≤i) qeθ (z≤i)pd(x) dxdz≤i\n= −1 K K∑ i=1 I(x; z≤i) (9)\nIf there exists an integer i ∈ {2, · · · ,K} such that I(zi;x|z≤i−1) > I(zi−1;x|z≤i−2), we can exchange the position of zi and zi−1 to increase the value of objective (9). By the chain rule of mutual information we have:\nI(x; zi, zi−1, z≤i−2) = I(zi−1, z≤i−2;x) + I(zi;x|z≤i−1) = I(zi, z≤i−2;x) + I(zi−1;x|z≤i−2, zi).\nWhen I(zi;x|z≤i−1) > I(zi−1;x|z≤i−2), we can show that I(x; zi, z≤i−2) > I(x; zi−1, z≤i−2):\nI(x; zi, z≤i−2)− I(x; zi−1, z≤i−2) = I(zi;x|z≤i−1)− I(zi−1;x|z≤i−2, zi) > I(zi−1;x|z≤i−2)− I(zi−1;x|z≤i−2, zi) = I(x, zi; zi−1|z≤i−2)− I(x; zi−1|z≤i−2, zi) (10) = I(zi; zi−1|z≤i−2) ≥ 0, (11)\nwhere Eq. (10) holds by the chain rule of mutual information: I(zi−1;x|z≤i−2) = I(x, zi; zi−1|z≤i−2) − I(zi; zi−1|x, z≤i−2), and I(zi; zi−1|x, z≤i−2) = 0 by the conditional independence assumption zi ⊥ zi−1|x, z≤i−2. Hence if we exchange the position of zi and zi−1 of the latent vector z, we can show that the new objective value after the position switch is strictly smaller\nthan the original one:\n−1 K ( i−2∑ k=1 I(x; z≤k) + I(x; z≤i−2, zi) + K∑ k=i I(x; z≤i) )\n< −1 K ( i−2∑ k=1 I(x; z≤k) + I(x; z≤i−2, zi−1) + K∑ k=i I(x; z≤i) ) (12)\n= −1 K K∑ k=1 I(x; z≤k).\nInequality (12) holds by I(x; zi, z≤i−2) > I(x; zi−1, z≤i−2). From above we can conclude that if the encoder is optimal, then the following inequalities must hold:\n∀i ∈ {2, · · · ,K} : I(zi;x|z≤i−1) ≤ I(zi−1;x|z≤i−2).\nOtherwise we can exchange the dimensions to make the objective Eq. (9) smaller, which contradicts the optimally of encoder eθ." }, { "heading": "B AUDIO GENERATION", "text": "Our method can also be applied to audio data. We evaluate anytime autoregressive models on the VCTK dataset (Veaux et al., 2017), which consists of speech recordings from 109 different speakers. The original VQ-VAE uses an autoregressive decoder, which may cost more time than the latent autoregressive model and thus cannot be accelerated by anytime sampling. Instead, we adopt a non-autoregressive decoder inspired by the generator of WaveGan (Donahue et al., 2018). Same as images, we train a Transformer model on the latent codes.\nWe compare waveforms sampled from ordered vs. unordered VQ-VAEs in Fig. 6, and provide links to audio samples in Appendix G.2. By inspecting the waveforms and audio samples, we observe that the generated waveform captures the correct global structure using as few as 6.25% of the full code length, and gradually refines itself as more code dimensions are sampled. In contrast, audio samples from the unordered VQ-VAE contain considerably more noise when using truncated codes." 
}, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "C.1 ORDERED VERSUS UNORDERED CODES ON VAE\nTo better understand the effect of order-inducing objective, we disentangle the order-inducing objective Eq. (6) with the specific implementation on VQ-VAE, such as stop_gradient operator and quantization. We adopt a similar order-inducing objective on standard VAE:\nEx∼pd(x)\n[ 1\nK K∑ i=1 [Ez∼qeθ (z|x)≤i [− log pdφ(x|z)≤i] + KL(qeθ (z|x)≤i||p(z)≤i)]\n] , (13)\nwhere qeθ (z | x)≤i denotes the distribution of (z1, z2, · · · , zi, 0, · · · , 0)T ∈ RK , z ∼ qeθ (z | x), and pdφ(x | z)≤i represents the distribution of pdφ(x | (z1, z2, · · · , zi, 0, · · · , 0)T ∈ RK). We set the prior p(z)≤i as the i dimensional unit normal distribution.\nWe repeat the experiments on CIFAR-10 and observe similar experimental results between VQ-VAE and standard VAE. Fig. 7(a) shows that the standard VAE has irregular ∆(i), while the ordered VAE has much better ordering on latent codes. Fig. 7(b) further shows that the ordered VAE has better reconstructions under different truncated code lengths. The experimental results are consistent with Fig. 3(a) and Fig. 3(b)..\nC.2 ABLATION STUDY ON AUTOREGRESSIVE MODEL\nWe study the performance of different autoregressive model on ordered codes. We compare Transformer (Vaswani et al., 2017), PixelCNN (Oord et al., 2016b) and LSTM (Hochreiter & Schmidhuber, 1997). PixelCNN adopts standard 2D convolutional layers to model the conditional dependence. In order to enlarge the receptive field, PixelCNN stacks many convolutional layers. Transformer applies attention mechanism and feed forward network to model the conditional dependence of 1D sequences. LSTM uses four different types of gates to improve the long-term modeling power of the recurrent models on 1D sequences.\nTransformer and LSTM can be naturally applied to the 1D channel-wise quantized codes. Since PixelCNN operates on 2D data, we reshape the 1D codes into 2D tensors. More specifically, for 70-dimensional 1D codes on CIFAR-10, we firstly pad the codes into 81 dimensions then reshape it into 9× 9 tensors. Fig. 8(a) shows the FID scores of different autoregressive model on CIFAR-10 dataset. The results show that PixelCNN has inferior performance in all cases except when used with 0.2 fractions of full code length. This is because PixelCNN works well only when the input has strong local spatial correlations, but there is no spatial correlation for channel-wise quantized codes. In contrast, autoregressive models tailored for 1D sequences work better on channel-wise dequantized codes, as they have uniformly better FID scores when using 0.4/0.6/0.8/1.0 fractions of full code length.\nC.3 ABLATION STUDY ON SAMPLING DISTRIBUTION\nWe study the effect of different sampling distributions in order-inducing objective Eq. (6). We compare the adopted uniform distribution with the geometric distribution used in Rippel et al. (2014) on CIFAR-10, as shown in Fig. 8(b). We normalize the geometric distribution on finite indices with length K, i.e. Pr(i = k) = (1−p)\nk−1p 1−(1−p)K , i ∈ {1, 2, . . . ,K}, and denote the normalized geometric\ndistribution as Geo(p). Note that when p→ 0, Geo(p) recovers the uniform distribution. Fig. 8(c) shows that OVQ-VAEs trained with all the different distributions can trade off the quality and computational budget. We find that OVQ-VAE is sensitive to the parameter of the geometric distribution. The performance of OVQ-VAE with Geo(0.03) is marginally worse than the uniform distribution. 
But when changing the distribution to Geo(0.1), the FID scores become much worse with large code length (0.6/0.8/1.0 fractions of the full code length). Since for i ∼ Geo(p), Pr(i ≥ t) = (1−p)t−1(1− (1−p)K−t+1)/(1− (1−p)K) ≤ (1−p)t−1, which indicates that the geometric distribution allocates exponentially smaller probability to code with higher index.\nC.4 ON THE REGULARIZATION EFFECT OF ORDERED CODES\nImposing an order on latent codes improves the inductive bias for the autoregressive model to learn the codes. When using full codes on CIFAR-10 dataset, even though the OVQ-VAE has higher training error than unordered VQ-VAE, a better FID score is achieved by the ordered model. This validates the intuition that it is easier to model an image from coarse to fine. These results are corroborated by lower FID scores for the anytime model with full length codes under different number of training samples. As shown in Fig. 9(a), the ordered model has an increasingly larger FID improvement over the unordered model when the dataset becomes increasingly smaller. These results indicate that training on ordered codes has a regularization effect. We hypothesize that ordered codes capture the inductive bias of coarse-to-fine image modeling better.\nC.5 COMPARISON TO TALORED VQ-VAE\nWe compare the anytime sampling to the unordered VQ-VAE with tailored latent space (Tailored VQ-VAE) on CIFAR-10. The Tailored VQ-VAE has a pre-specified latent size, using the same computational budget as truncated codes of anytime sampling. For a fair comparison, we experiment with transformers on the latent space of Tailored VQ-VAE. Fig. 9(b) shows that anytime sampling always has better FID scores than Tailored VQ-VAE, except when the code is very short. We hypothesize that the learning signals from training on the full length codes with OVQ-VAE improves the quality when codes are shorter, thus demonstrating a FID improvement over the Tailored VQ-VAE. Moreover, OVQ-VAE has the additional benefit of allowing anytime sampling when the computational budget is not known in advance." }, { "heading": "D TRAIN AUTOREGRESSIVE MODEL ON PCA REPRESENTATION", "text": "An alternative way to induce order on the latent space is by projecting data onto the PCA representation. However, we encounter limited success in training the autoregressive model on the top of PCA representation.\nWhen training an autoregressive model on PCA-represented data, we observe inferior log-likelihoods. We first prepare the data by uniformly dequantizing the discrete pixel values to continuous ones. Then we project these continuous data into the PCA space by using the orthogonal projection matrix composed of singular vectors of the data covariance matrix. Note that this projection preserves the volume of the original pixel space since the projection matrix’s determinant is 1, so log-likelihoods of a model trained on the raw continuous data space and the PCA projected space are comparable. We report the bits/dim (lower is better), which is computed by dividing the negative log-likelihood (log base 2) by the dimension of data. We train transformer models on the projected space and the raw data space. Surprisingly, on MNIST, the transformer model obtains 1.22 bits/dim on the projected space versus 0.80 bits/dim on the raw data, along with inferior sample quality. We hypothesize two reasons. First, models have been tuned with respect to inputs that are unlike the PCA representation, but rather on inputs such as raw pixel data. 
Second, PCA does not capture multiple data modalities well, unlike OVQ-VAE. Moreover, autoregressive models typically do not perform well on continuous data. In contrast, our Transformer model operates on discrete latent codes of the VQ-VAE." }, { "heading": "E EXTRA IMPLEMENTATION DETAILS", "text": "Our code is released via the anonymous link https://anonymous.4open.science/r/3946e9c8-8f98-4836abc1-0f711244476d/ and included in the supplementary material as well. Below we introduce more details on network architectures and the training processes.\nE.1 NETWORKS\nFor MNIST dataset, the encoder has 3 convolutional layers with filter size (4,4,3) and stride (2,2,1) respectively. For CelebA and CIFAR-10 datasets, the encoder has 4 convolutional layers with filter size (4,4,4,3) and stride (2,2,2,1) respectively. The decoders for dataset above are the counterpart of the corresponding encoders. For VCTK dataset, we use a encoder that has 5 convolutional layers with a filter size of 25 and stride of 4. The activation functions are chosen to be LeakyRelu-0.2. We adopt a decoder architecture which has 4 convolutional and upsampling layers. The architecture is the same with the generator architecture in Donahue et al. (2018), except for the number of layers.\nFor all the datasets, we use a 6-layer Transformer decoder with an embedding size of 512, latent size of 2048, and dropout rate of 0.1. We use 8 heads in multi-head self-attention layers.\nE.2 IMAGE GENERATION\nThe FID scores are computed using the official code from TTUR (Heusel et al., 2017)1 authors. We compute FID scores on CIFAR-10 and CelebA based on a total of 50000 samples and 100000 samples respectively.\nWe pre-train the VQ-VAE models with full code lengths for 200 epochs. Then we train the VQ-VAE models with the new objective Eq. (6) for 200 more epochs. We use the Adam optimizer with learning rate 1.0× 10−3 for training. We train the autoregressive model for 50 epochs on both MNIST and CIFAR-10, and 100 epochs on CelebA. We use the Adam optimizer with a learning rate of 2.0×10−3 for the Transformer decoder. We select the checkpoint with the smallest validation loss on every epoch. The batch size is fixed to be 128 during all training processes.\nE.3 AUDIO GENERATION\nWe randomly subsample all the data points in VCTK dataset to make all audios have the same length (15360). The VQ-VAE models are pre-trained with full code length for 20 epochs, and then fine-tuned with our objective Eq. (6) for 20 more epochs. We use the Adam optimizer with learning rate 2.0× 10−4 for training the VQ-VAE model. We train the Transformer for 50 epochs on VCTK, use the Adam optimizer with a learning rate of 2.0×10−3. We select the checkpoint with the smallest validation loss on every epoch. The batch size is fixed to be 8 for the VQ-VAE model and 128 for the Transformer during training." }, { "heading": "F SAMPLES ON THE SAME PRIORITY CODE", "text": "As further illustration of the ordered encoding, we show in Fig. 10 the result of full code length sampling when the first (highest priority) discrete latent code is fixed. The fixing of the first latent code causes anytime sampling to produce a wide variety of samples that share high-level global similarities." }, { "heading": "G EXTRA SAMPLES", "text": "G.1 IMAGE SAMPLES\nWe show extended samples from ordered VQ-VAEs in Fig. 11, Fig. 12 and Fig. 13. For comparison, we also provide samples from unordered VQ-VAEs in Fig. 14, Fig. 15 and Fig. 
16.\nG.2 AUDIO SAMPLES\nWe include the audio samples that are sampled from our anytime sampler in the supplementary material. The audio / audio_baseline directory contains 90 samples from ordered / unordered VQ-VAEs respectively. The fractions of full code length (0.0625, 0.25 and 1.0) used for generation are included in the names of .wav files.\n1https://github.com/bioinf-jku/TTUR" } ]
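Putting the pieces above together, anytime sampling (Section 3.2 and Fig. 1(b)) reduces to sampling only the first T of the K ordered code dimensions with the latent autoregressive model, zero-padding the remaining embedding rows, and decoding once in parallel. The sketch below assumes a hypothetical transformer.sample_next interface that returns the next code index given the prefix of indices sampled so far; it is an illustration under these assumptions, not the released implementation.

```python
import torch

@torch.no_grad()
def anytime_sample(transformer, codebook, decoder, T, K, batch=1):
    """Sample the first T of K ordered code dimensions, then decode in one pass."""
    indices = torch.zeros(batch, 0, dtype=torch.long)
    for t in range(T):                              # sequential part: cost grows with T
        nxt = transformer.sample_next(indices)      # assumed to return (batch,) next indices
        indices = torch.cat([indices, nxt.unsqueeze(1)], dim=1)
    z = codebook[indices]                           # (batch, T, D) embedded codes
    pad = torch.zeros(batch, K - T, codebook.size(1))   # zero vectors for unsampled rows
    return decoder(torch.cat([z, pad], dim=1))      # (batch, K, D) -> samples
```

Since the decoder is not autoregressive, its cost does not depend on T, which is consistent with the near-linear scaling of wall-clock sampling time reported in Section 5.2.2.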
2021
null
SP:a4cda983cb5a670c3ad7054b9cd7797107af64b1
[ "This paper presents a one-class classification method using a fully convolutional model and directly using the output map as an explanation map. The method is dubbed FCDD for fully convolutional data descriptor. FCDD uses a hypersphere classifier combined with a pseudo-Huber loss. FCDD is trained using outliers exposure (OE) from a different but related dataset. The empirical study consists of 3 parts:" ]
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly explanations during training and using even a few of these (∼ 5) improves performance significantly. Finally, using FCDD’s explanations, we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks.1
[ { "affiliations": [], "name": "Philipp Liznerski" }, { "affiliations": [], "name": "Lukas Ruff" }, { "affiliations": [], "name": "Robert A. Vandermeulen" }, { "affiliations": [], "name": "Billy Joe Franks" }, { "affiliations": [], "name": "Marius Kloft" }, { "affiliations": [], "name": "Klaus-Robert Müller" } ]
[ { "authors": [ "C.J. Anders", "P. Pasliev", "A.-K. Dombrowski", "K.-R. Müller", "P. Kessel" ], "title": "Fairwashing explanations with off-manifold detergent", "venue": "In ICML,", "year": 2020 }, { "authors": [ "V. Barnett", "T. Lewis" ], "title": "Outliers in Statistical Data", "venue": "Wiley, 3rd edition,", "year": 1994 }, { "authors": [ "L. Bergman", "Y. Hoshen" ], "title": "Classification-based anomaly detection for general data", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "P. Bergmann", "M. Fauser", "D. Sattlegger", "C. Steger" ], "title": "MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "F. Berkenkamp", "M. Turchetta", "A. Schoellig", "A. Krause" ], "title": "Safe model-based reinforcement learning with stability guarantees", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "L. Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of COMPSTAT’2010,", "year": 2010 }, { "authors": [ "R. Chalapathy", "A.K. Menon", "S. Chawla" ], "title": "Anomaly detection using one-class neural networks", "venue": "arXiv preprint arXiv:1802.06360,", "year": 2018 }, { "authors": [ "V. Chandola", "A. Banerjee", "V. Kumar" ], "title": "Anomaly detection: A survey", "venue": "ACM Computing Surveys,", "year": 2009 }, { "authors": [ "G. Cohen", "S. Afshar", "J. Tapson", "A. Van Schaik" ], "title": "EMNIST: Extending MNIST to handwritten letters", "venue": "In IJCNN,", "year": 2017 }, { "authors": [ "D. Dehaene", "O. Frigo", "S. Combrexelle", "P. Eline" ], "title": "Iterative energy-based projection on a normal data manifold for anomaly localization", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "M. Everingham", "L. Van Gool", "C.K. Williams", "J. Winn", "A. Zisserman" ], "title": "The pascal visual object classes (VOC) challenge", "venue": "International Journal of Computer Vision,", "year": 2010 }, { "authors": [ "I. Golan", "R. El-Yaniv" ], "title": "Deep anomaly detection using geometric transformations", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "S. Goyal", "A. Raghunathan", "M. Jain", "H.V. Simhadri", "P. Jain" ], "title": "DROCC: Deep robust one-class classification", "venue": "In ICML,", "year": 2020 }, { "authors": [ "A. Gupta", "J. Johnson", "L. Fei-Fei", "S. Savarese", "A. Alahi" ], "title": "Social GAN: Socially acceptable trajectories with generative adversarial networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "S. Hawkins", "H. He", "G. Williams", "R. Baxter" ], "title": "Outlier Detection Using Replicator Neural Networks", "venue": "In DaWaK,", "year": 2002 }, { "authors": [ "D. Hendrycks", "M. Mazeika", "T.G. Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "D. Hendrycks", "M. Mazeika", "S. Kadavath", "D. Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "P.J. Huber" ], "title": "Robust estimation of a location parameter", "venue": "The Annals of Mathematical Statistics,", "year": 1964 }, { "authors": [ "M.H. 
Jarrahi" ], "title": "Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making", "venue": "Business Horizons,", "year": 2018 }, { "authors": [ "G. Katz", "C. Barrett", "D.L. Dill", "K. Julian", "M.J. Kochenderfer" ], "title": "Reluplex: An efficient SMT solver for verifying deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "J. Kauffmann", "K.-R. Müller", "G. Montavon" ], "title": "Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "S. Lapuschkin", "A. Binder", "G. Montavon", "K.-R. Muller", "W. Samek" ], "title": "Analyzing classifiers: Fisher vectors and deep neural networks", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "S. Lapuschkin", "S. Wäldchen", "A. Binder", "G. Montavon", "W. Samek", "K.-R. Müller" ], "title": "Unmasking clever hans predictors and assessing what machines really learn", "venue": "Nature Communications,", "year": 2019 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Z. Li", "N. Li", "K. Jiang", "Z. Ma", "X. Wei", "X. Hong", "Y. Gong" ], "title": "Superpixel masking and inpainting for selfsupervised anomaly detection", "venue": "In BMVC,", "year": 2020 }, { "authors": [ "W. Liu", "R. Li", "M. Zheng", "S. Karanam", "Z. Wu", "B. Bhanu", "R.J. Radke", "O. Camps" ], "title": "Towards visually explaining variational autoencoders", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "J. Long", "E. Shelhamer", "T. Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "W. Luo", "Y. Li", "R. Urtasun", "R. Zemel" ], "title": "Understanding the effective receptive field in deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "G. Montavon", "S. Lapuschkin", "A. Binder", "W. Samek", "K.-R. Müller" ], "title": "Explaining nonlinear classification decisions with deep Taylor decomposition", "venue": "Pattern Recognition,", "year": 2017 }, { "authors": [ "G. Montavon", "W. Samek", "K.-R. Müller" ], "title": "Methods for interpreting and understanding deep neural networks", "venue": "Digital Signal Processing,", "year": 2018 }, { "authors": [ "M.M. Moya", "M.W. Koch", "L.D. Hostetler" ], "title": "One-class classifier networks for target recognition applications", "venue": "In World Congress on Neural Networks,", "year": 1993 }, { "authors": [ "P. Napoletano", "F. Piccoli", "R. Schettini" ], "title": "Anomaly detection in nanofibrous materials by CNN-based selfsimilarity", "venue": null, "year": 2018 }, { "authors": [ "H. Noh", "S. Hong", "B. Han" ], "title": "Learning deconvolution network for semantic segmentation", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "G. Quellec", "M. Lamard", "M. Cozic", "G. Coatrieux", "G. Cazuguel" ], "title": "Multiple-instance learning for anomaly detection in digital mammography", "venue": "IEEE Transactions on Medical Imaging,", "year": 2016 }, { "authors": [ "M.T. 
Ribeiro", "S. Singh", "C. Guestrin" ], "title": "Why should i trust you?” Explaining the predictions of any classifier", "venue": "In KDD,", "year": 2016 }, { "authors": [ "M.T. Ribeiro", "S. Singh", "C. Guestrin" ], "title": "Anchors: High-precision model-agnostic explanations", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "L. Ruff", "R.A. Vandermeulen", "N. Görnitz", "L. Deecke", "S.A. Siddiqui", "A. Binder", "E. Müller", "M. Kloft" ], "title": "Deep one-class classification", "venue": "In ICML,", "year": 2018 }, { "authors": [ "L. Ruff", "Y. Zemlyanskiy", "R. Vandermeulen", "T. Schnake", "M. Kloft" ], "title": "Self-attentive, multi-context one-class classification for unsupervised anomaly detection on text", "venue": "In ACL,", "year": 2019 }, { "authors": [ "L. Ruff", "R.A. Vandermeulen", "B.J. Franks", "K.-R. Müller", "M. Kloft" ], "title": "Rethinking assumptions in deep anomaly detection", "venue": "arXiv preprint arXiv:2006.00339,", "year": 2020 }, { "authors": [ "L. Ruff", "R.A. Vandermeulen", "N. Görnitz", "A. Binder", "E. Müller", "K.-R. Müller", "M. Kloft" ], "title": "Deep semi-supervised anomaly detection", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "L. Ruff", "J.R. Kauffmann", "R.A. Vandermeulen", "G. Montavon", "W. Samek", "M. Kloft", "T.G. Dietterich", "K.-R. Müller" ], "title": "A unifying review of deep and shallow anomaly detection", "venue": "Proceedings of the IEEE,", "year": 2021 }, { "authors": [ "M. Sabokrou", "M. Fayyaz", "M. Fathy", "Z. Moayed", "R. Klette" ], "title": "Deep-anomaly: Fully convolutional neural network for fast anomaly detection in crowded scenes", "venue": "Computer Vision and Image Understanding,", "year": 2018 }, { "authors": [ "M. Sakurada", "T. Yairi" ], "title": "Anomaly detection using autoencoders with nonlinear dimensionality reduction", "venue": "In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis,", "year": 2014 }, { "authors": [ "W. Samek", "G. Montavon", "S. Lapuschkin", "C.J. Anders", "K.-R. Müller" ], "title": "Toward interpretable machine learning: Transparent deep neural networks and beyond", "venue": "arXiv preprint arXiv:2003.07631,", "year": 2020 }, { "authors": [ "T. Schlegl", "P. Seeböck", "S.M. Waldstein", "U. Schmidt-Erfurth", "G. Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In International conference on information processing in medical imaging,", "year": 2017 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "K. Simonyan", "A. Vedaldi", "A. Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "K.A. Spackman" ], "title": "Signal detection theory: Valuable tools for evaluating inductive learning", "venue": "In Proceedings of the Sixth International Workshop on Machine Learning,", "year": 1989 }, { "authors": [ "M. Sundararajan", "A. Taly", "Q. Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "I. Sutskever", "J. Martens", "G. Dahl", "G. Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In ICML,", "year": 2013 }, { "authors": [ "D.M.J. 
Tax" ], "title": "One-class classification", "venue": "PhD thesis, Delft University of Technology,", "year": 2001 }, { "authors": [ "D.M.J. Tax", "R.P.W. Duin" ], "title": "Support Vector Data Description", "venue": "Machine Learning,", "year": 2004 }, { "authors": [ "A. Torralba", "R. Fergus", "W.T. Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1958 }, { "authors": [ "S. Venkataramanan", "K.-C. Peng", "R.V. Singh", "A. Mahalanobis" ], "title": "Attention guided anomaly detection and localization in images", "venue": null, "year": 1911 }, { "authors": [ "H. Xiao", "K. Rasul", "R. Vollgraf" ], "title": "Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Y. Zhao", "B. Deng", "C. Shen", "Y. Liu", "H. Lu", "X.-S. Hua" ], "title": "Spatio-temporal autoencoder for video anomaly detection", "venue": "In Proceedings of the 25th ACM International Conference on Multimedia,", "year": 2017 }, { "authors": [ "C. Zhou", "R.C. Paffenroth" ], "title": "Anomaly detection with robust deep autoencoders", "venue": "In KDD,", "year": 2017 }, { "authors": [ "K. Zhou", "Y. Xiao", "J. Yang", "J. Cheng", "W. Liu", "W. Luo", "Z. Gu", "J. Liu", "S. Gao" ], "title": "Encoding structure-texture relation with p-net for anomaly detection in retinal images", "venue": "In ECCV,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Anomaly detection (AD) is the task of identifying anomalies in a corpus of data (Edgeworth, 1887; Barnett and Lewis, 1994; Chandola et al., 2009; Ruff et al., 2021). Powerful new anomaly detectors based on deep learning have made AD more effective and scalable to large, complex datasets such as high-resolution images (Ruff et al., 2018; Bergmann et al., 2019). While there exists much recent work on deep AD, there is limited work on making such techniques explainable. Explanations are needed in industrial applications to meet safety and security requirements (Berkenkamp et al., 2017; Katz et al., 2017; Samek et al., 2020), avoid unfair social biases (Gupta et al., 2018), and support human experts in decision making (Jarrahi, 2018; Montavon et al., 2018; Samek et al., 2020). One typically makes anomaly detection explainable by annotating pixels with an anomaly score and, in some applications, such as finding tumors in cancer detection (Quellec et al., 2016), these annotations are the primary goal of the detector.\nOne approach to deep AD, known as Deep Support Vector Data Description (DSVDD) (Ruff et al., 2018), is based on finding a neural network that transforms data such that nominal data is concentrated to a predetermined center and anomalous data lies elsewhere. In this paper we present Fully Convolutional Data Description (FCDD), a modification of DSVDD so that the transformed samples are themselves an image corresponding to a downsampled anomaly heatmap. The pixels in this heatmap that are far from the center correspond to anomalous regions in the input image. FCDD does this by only using convolutional and pooling layers, thereby limiting the receptive field of each output pixel. Our method is based on the one-class classification paradigm (Moya et al., 1993; Tax, 2001; Tax and Duin, 2004; Ruff et al., 2018), which is able to naturally incorporate known anomalies Ruff et al. (2021), but is also effective when simply using synthetic anomalies. ∗equal contribution 1Our code is available at: https://github.com/liznerski/fcdd\nWe show that FCDD’s anomaly detection performance is close to the state of the art on the standard AD benchmarks with CIFAR-10 and ImageNet while providing transparent explanations. On MVTecAD, an AD dataset containing ground-truth anomaly maps, we demonstrate the accuracy of FCDD’s explanations (see Figure 1), where FCDD sets a new state of the art. In further experiments we find that deep one-class classification models (e.g. DSVDD) are prone to the “Clever Hans” effect (Lapuschkin et al., 2019) where a detector fixates on spurious features such as image watermarks. In general, we find that the generated anomaly heatmaps are less noisy and provide more structure than the baselines, including gradient-based methods (Simonyan et al., 2013; Sundararajan et al., 2017) and autoencoders (Sakurada and Yairi, 2014; Bergmann et al., 2019)." }, { "heading": "2 RELATED WORK", "text": "Here we outline related works on deep AD focusing on explanation approaches. Classically deep AD used autoencoders (Hawkins et al., 2002; Sakurada and Yairi, 2014; Zhou and Paffenroth, 2017; Zhao et al., 2017). Trained on a nominal dataset autoencoders are assumed to reconstruct anomalous samples poorly. Thus, the reconstruction error can be used as an anomaly score and the pixel-wise difference as an explanation (Bergmann et al., 2019), thereby naturally providing an anomaly heatmap. 
Recent works have incorporated attention into reconstruction models that can be used as explanations (Venkataramanan et al., 2019; Liu et al., 2020). In the domain of videos, Sabokrou et al. (2018) used a pre-trained fully convolutional architecture in combination with a sparse autoencoder to extract 2D features and provide bounding boxes for anomaly localization. One drawback of reconstruction methods is that they offer no natural way to incorporate known anomalies during training.\nMore recently, one-class classification methods for deep AD have been proposed. These methods attempt to separate nominal samples from anomalies in an unsupervised manner by concentrating nominal data in feature space while mapping anomalies to distant locations (Ruff et al., 2018; Chalapathy et al., 2018; Goyal et al., 2020). In the domain of NLP, DSVDD has been successfully applied to text, which yields a form of interpretation using attention mechanisms (Ruff et al., 2019). For images, Kauffmann et al. (2020) have used a deep Taylor decomposition (Montavon et al., 2017) to derive relevance scores.\nSome of the best performing deep AD methods are based on self-supervision. These methods transform nominal samples, train a network to predict which transformation was used on the input, and\nprovide an anomaly score via the confidence of the prediction (Golan and El-Yaniv, 2018; Hendrycks et al., 2019b). Hendrycks et al. (2019a) have extended this to incorporate known anomalies as well. No explanation approaches have been considered for these methods so far.\nFinally, there exists a great variety of explanation methods in general, for example model-agnostic methods (e.g. LIME (Ribeiro et al., 2016)) or gradient-based techniques (Simonyan et al., 2013; Sundararajan et al., 2017). Relating to our work, we note that fully convolutional architectures have been used for supervised segmentation tasks where target segmentation maps are required during training (Long et al., 2015; Noh et al., 2015)." }, { "heading": "3 EXPLAINING DEEP ONE-CLASS CLASSIFICATION", "text": "We review one-class classification and fully convolutional architectures before presenting our method.\nDeep One-Class Classification Deep one-class classification (Ruff et al., 2018; 2020b) performs anomaly detection by learning a neural network to map nominal samples near a center c in output space, causing anomalies to be mapped away. For our method we use a Hypersphere Classifier (HSC) (Ruff et al., 2020a), a recently proposed modification of Deep SAD (Ruff et al., 2020b), a semi-supervised version of DSVDD (Ruff et al., 2018). Let X1, . . . , Xn denote a collection of samples and y1, . . . , yn be labels where yi = 1 denotes an anomaly and yi = 0 denotes a nominal sample. Then the HSC objective is\nmin W,c\n1\nn n∑ i=1 (1− yi)h(φ(Xi;W)− c)− yi log (1− exp (−h(φ(Xi;W)− c))) , (1)\nwhere c ∈ Rd is the center, and φ : Rc×h×w → Rd a neural network with weightsW . Here h is the pseudo-Huber loss (Huber et al., 1964), h(a) = √ ‖a‖22 + 1 − 1, which is a robust loss that interpolates from quadratic to linear penalization. The HSC loss encourages φ to map nominal samples near c and anomalous samples away from the center c. In our implementation, the center c corresponds to the bias term in the last layer of our networks, i.e. 
is included in the network φ, which is why we omit c in the FCDD objective below.\nFully Convolutional Architecture Our method uses a fully convolutional network (FCN) (Long et al., 2015; Noh et al., 2015) that maps an image to a matrix of features, i.e. φ : Rc×h×w → R1×u×v by using alternating convolutional and pooling layers only, and does not contain any fully connected layers. In this context, pooling can be seen as a special kind of convolution with fixed parameters.\nA core property of a convolutional layer is that each pixel of its output only depends on a small region of its input, known as the output pixel’s receptive field. Since the output of a convolution is produced by moving a filter over the input image, each output pixel has the same relative position as its associated receptive field in the input. For instance, the lower-left corner of the output representation has a corresponding receptive field in the lower-left corner of the input image, etc. (see Figure 2 left side). The outcome of several stacked convolutions also has receptive fields of limited size and consistent relative position, though their size grows with the amount of layers. Because of this an FCN preserves spatial information.\nFully Convolutional Data Description Here we introduce our novel explainable AD method Fully Convolutional Data Description (FCDD). By taking advantage of FCNs along with the HSC above, we propose a deep one-class method where the output features preserve spatial information and also serve as a downsampled anomaly heatmap. For situations where one would like to have a full-resolution heatmap, we include a methodology for upsampling the low-resolution heatmap based on properties of receptive fields.\nFCDD is trained using samples that are labeled as nominal or anomalous. As before, let X1, . . . , Xn denote a collection of samples with labels y1, . . . , yn where yi = 1 denotes an anomaly and yi = 0 denotes a nominal sample. Anomalous samples can simply be a collection of random images which are not from the nominal collection, e.g. one of the many large collections of images which are freely available like 80 Million Tiny Images (Torralba et al., 2008) or ImageNet (Deng et al., 2009). The use of such an auxiliary corpus has been recommended in recent works on deep AD, where it is termed Outlier Exposure (OE) (Hendrycks et al., 2019a;b). When one has access to “true” examples of the anomalous dataset, i.e. something that is likely to be representative of what will be seen at test time, we find that even using a few examples as the corpus of labeled anomalies performs exceptionally well. Furthermore, in the absence of any sort of known anomalies, one can generate synthetic anomalies, which we find is also very effective.\nWith an FCN φ : Rc×h×w → Ru×v the FCDD objective utilizes a pseudo-Huber loss on the FCN output matrix A(X) = (√ φ(X;W)2 + 1− 1 ) , where all operations are applied element-wise. The\nFCDD objective is then defined as (cf., (1)):\nmin W\n1\nn n∑ i=1 (1− yi) 1 u · v ‖A(Xi)‖1 − yi log ( 1− exp ( − 1 u · v ‖A(Xi)‖1 )) . (2)\nHere ‖A(X)‖1 is the sum of all entries in A(X), which are all positive. FCDD is the utilization of an FCN in conjunction with the novel adaptation of the HSC loss we propose in (2). The objective maximizes ‖A(X)‖1 for anomalies and minimizes it for nominal samples, thus we use ‖A(X)‖1 as the anomaly score. Entries of A(X) that contribute to ‖A(X)‖1 correspond to regions of the input image that add to the anomaly score. 
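As a concrete reference, the objective in (2) can be written as a loss over a mini-batch. The following is a minimal PyTorch-style sketch, not the authors' implementation; `phi_out` stands for the raw FCN output φ(X;W), and the label convention follows the text (y = 1 anomalous, y = 0 nominal).

```python
import torch

def fcdd_loss(phi_out, y, eps=1e-9):
    """Sketch of the FCDD objective of Eq. (2)."""
    # phi_out: (B, 1, u, v) raw FCN outputs; y: (B,) float labels, 1 = anomalous, 0 = nominal
    A = torch.sqrt(phi_out ** 2 + 1.0) - 1.0                # element-wise pseudo-Huber -> A(X)
    m = A.flatten(start_dim=1).mean(dim=1)                  # (1 / (u*v)) * ||A(X)||_1 per sample
    loss_nominal = (1.0 - y) * m                            # pull nominal scores towards zero
    loss_anomalous = -y * torch.log(-torch.expm1(-m) + eps) # -log(1 - exp(-m)) for anomalies
    return (loss_nominal + loss_anomalous).mean()

def fcdd_anomaly_score(phi_out):
    A = torch.sqrt(phi_out ** 2 + 1.0) - 1.0
    return A.flatten(start_dim=1).sum(dim=1)                # ||A(X)||_1 is used as the anomaly score
```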
The shape of these regions depends on the receptive field of the FCN. We include a sensitivity analysis on the size of the receptive field in Appendix A, where we find that performance is not strongly affected by the receptive field size. Note that A(X) has spatial dimensions u × v and is smaller than the original image dimensions h × w. One could use A(X) directly as a low-resolution heatmap of the image, however it is often desirable to have full-resolution heatmaps. Because we generally lack ground-truth anomaly maps in an AD setting during training, it is not possible to train an FCN in a supervised way to upsample the low-resolution heatmap A(X) (e.g. as in (Noh et al., 2015)). For this reason we introduce an upsampling scheme based on the properties of receptive fields.\nAlgorithm 1 Receptive Field Upsampling Input: A ∈ Ru×v (low-res anomaly heatmap) Output: A′ ∈ Rh×w (full-res anomaly heatmap) Define: [G2(µ, σ)]x,y , 12πσ2 exp ( − (x−µ1) 2+(y−µ2)2 2σ2\n) A′ ← 0 for all output pixels a in A do\nf ← receptive field of a c← center of field f A′ ← A′ + a ·G2(c, σ)\nend for return A′ Heatmap Upsampling Since we generally do not have access to ground-truth pixel annotations in anomaly detection during training, we cannot learn how to upsample using a deconvolutional type of structure. We derive a principled way to upsample our lower resolution anomaly heatmap instead. For every output pixel in A(X) there is a unique input pixel which lies at the center of its receptive field. It has been observed before that the effect of the receptive field for an output pixel decays in a Gaussian manner as one moves away from the center of the receptive field (Luo et al., 2016). We use this fact to upsample A(X) by using a strided transposed convolution with a fixed Gaussian kernel (see Figure 2 right side). We describe this operation and procedure in Algorithm 1 which simply corresponds to a strided transposed convolution. The kernel size is set to the receptive field range of FCDD and the stride to the cumulative stride of FCDD. The variance of the distribution can be picked empirically (see Appendix B for details). Figure 3 shows a complete overview of our FCDD method and the process of generating full-resolution anomaly heatmaps." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we experimentally evaluate the performance of FCDD both quantitatively and qualitatively. For a quantitative evaluation, we use the Area Under the ROC Curve (AUC) (Spackman, 1989) which is the commonly used measure in AD. For a qualitative evaluation, we compare the heatmaps produced by FCDD to existing deep AD explanation methods. As baselines, we consider gradient-based methods (Simonyan et al., 2013) applied to hypersphere classifier (HSC) models (Ruff et al., 2020a) with unrestricted network architectures (i.e. networks that also have fully connected layers) and autoencoders (Bergmann et al., 2019) where we directly use the pixel-wise reconstruction error as an explanation heatmap. We slightly blur the heatmaps of the baselines with the same Gaussian kernel we use for FCDD, which we found results in less noisy, more interpretable heatmaps. We include heatmaps without blurring in Appendix G. We adjust the contrast of the heatmaps per method to highlight interesting features; see Appendix C for details. For our experiments we don’t consider model-agnostic explanations, such as LIME (Ribeiro et al., 2016) or anchors (Ribeiro et al., 2018), because they are not tailored to the AD task and performed poorly." 
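Before turning to the individual benchmarks, here is a minimal sketch of the receptive-field upsampling of Algorithm 1 used to produce the full-resolution heatmaps evaluated below. As noted above, it amounts to a strided transposed convolution with a fixed Gaussian kernel; the exact kernel size, stride, and any final cropping depend on the particular network and are assumptions in this sketch.

```python
import math
import torch
import torch.nn.functional as F

def receptive_field_upsample(A, kernel_size, stride, sigma):
    """Algorithm 1: upsample a low-res heatmap via a fixed Gaussian transposed convolution."""
    # A: (B, 1, u, v) low-resolution heatmap; kernel_size ~ receptive field range of the FCN,
    # stride ~ its cumulative stride, sigma chosen empirically (see Appendix B).
    ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2.0
    G = torch.exp(-(ax.view(-1, 1) ** 2 + ax.view(1, -1) ** 2) / (2.0 * sigma ** 2))
    G = (G / (2.0 * math.pi * sigma ** 2)).view(1, 1, kernel_size, kernel_size)
    A_full = F.conv_transpose2d(A, G, stride=stride)  # each output pixel spreads a Gaussian blob
    return A_full                                     # may still need cropping/padding to h x w
```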
}, { "heading": "4.1 STANDARD ANOMALY DETECTION BENCHMARKS", "text": "We first evaluate FCDD on the Fashion-MNIST, CIFAR-10, and ImageNet datasets. The common AD benchmark is to utilize these classification datasets in a one-vs-rest setup where the “one” class is used as the nominal class and the rest of the classes are used as anomalies at test time. For training, we only use nominal samples as well as random samples from some auxiliary Outlier Exposure (OE) (Hendrycks et al., 2019a) dataset, which is separate from the ground-truth anomaly classes following Hendrycks et al. (2019a;b). We report the mean AUC over all classes for each dataset.\nFashion-MNIST We consider each of the ten Fashion-MNIST (Xiao et al., 2017) classes in a one-vs-rest setup. We train Fashion-MNIST using EMNIST (Cohen et al., 2017) or grayscaled CIFAR-100 (Krizhevsky et al., 2009) as OE. We found that the latter slightly outperforms the former (∼3 AUC percent points). On Fashion-MNIST, we use a network that consists of three convolutional layers with batch normalization, separated by two downsampling pooling layers.\nCIFAR-10 We consider each of the ten CIFAR-10 (Krizhevsky et al., 2009) classes in a one-vs-rest setup. As OE we use CIFAR-100, which does not share any classes with CIFAR-10. We use a model similar to LeNet-5 (LeCun et al., 1998), but decrease the kernel size to three, add batch normalization, and replace the fully connected layers and last max-pool layer with two further convolutions.\nImageNet We consider 30 classes from ImageNet1k (Deng et al., 2009) for the one-vs-rest setup following Hendrycks et al. (2019a). For OE we use ImageNet22k with ImageNet1k classes removed (Hendrycks et al., 2019a). We use an adaptation of VGG11 (Simonyan and Zisserman, 2015) with batch normalization, suitable for inputs resized to 224×224 (see Appendix D for model details).\nState-of-the-art Methods We report results from state-of-the-art deep anomaly detection methods. Methods that do not incorporate known anomalies are the autoencoder (AE), DSVDD (Ruff et al., 2018), Geometric Transformation based AD (GEO) (Golan and El-Yaniv, 2018), and a variant of GEO by Hendrycks et al. (2019b) (GEO+). Methods that use OE are a Focal loss classifier (Hendrycks et al., 2019b), also GEO+, Deep SAD (Ruff et al., 2020b), and HSC (Ruff et al., 2020a).\nQuantitative Results The mean AUC detection performance on the three AD benchmarks are reported in Table 1. We can see that FCDD, despite using a restricted FCN architecture to improve explainability, achieves a performance that is close to state-of-the-art methods and outperforms autoencoders, which yield a detection performance close to random on more complex datasets. We provide detailed results for all individual classes in Appendix F.\nQualitative Results Figures 4 and 5 show the heatmaps for Fashion-MNIST and ImageNet respectively. For a Fashion-MNIST model trained on the nominal class “trousers,” the heatmaps show that FCDD correctly highlights horizontal elements as being anomalous, which makes sense since trousers are vertically aligned. For an ImageNet model trained on the nominal class “acorns,” we observe that colors seem to be fairly relevant features with green and brown areas tending to be seen as more nominal, and other colors being deemed anomalous, for example the red barn or the white snow. 
Nonetheless, the method also seems capable of using more semantic features, for example it recognizes the green caterpillar as being anomalous and it distinguishes the acorn to be nominal despite being against a red background.\nFigure 6 shows heatmaps for CIFAR-10 models with varying amount of OE, all trained on the nominal class “airplane.” We can see that, as the number of OE samples increases, FCDD tends to concentrate\nthe explanations more on the primary object in the image, i.e. the bird, ship, and truck. We provide further heatmaps for additional classes from all datasets in Appendix G.\nBaseline Explanations We found the gradient-based heatmaps to mostly produce centered blobs which lack spatial context (see Figure 6) and thus are not useful for explaining. The AE heatmaps, being directly tied to the reconstruction error anomaly score, look reasonable. We again note, however, that it is not straightforward how to include auxiliary OE samples or labeled anomalies into an AE approach, which leaves them with a poorer detection performance (see Table 1). Overall we find that the proposed FCDD anomaly heatmaps yield a good and consistent visual interpretation." }, { "heading": "4.2 EXPLAINING DEFECTS IN MANUFACTURING", "text": "Here we compare the performance of FCDD on the MVTec-AD dataset of defects in manufacturing (Bergmann et al., 2019). This datasets offers annotated ground-truth anomaly segmentation maps for testing, thus allowing a quantitative evaluation of model explanations. MVTec-AD contains 15 object classes of high-resolution RGB images with up to 1024×1024 pixels, where anomalous test samples are further categorized in up to 8 defect types, depending on the class. We follow Bergmann et al. (2019) and compute an AUC from the heatmap pixel scores, using the given (binary) anomaly segmentation maps as ground-truth pixel labels. We then report the mean over all samples of this “explanation” AUC for a quantitative evaluation. For FCDD, we use a network that is based on a VGG11 network pre-trained on ImageNet, where we freeze the first ten layers, followed by additional fully convolutional layers that we train.\nFigure 7: Confetti noise.\nSynthetic Anomalies OE with a natural image dataset like ImageNet is not informative for MVTec-AD since anomalies here are subtle defects of the nominal class, rather than being out of class (see Figure 1). For this reason, we generate synthetic anomalies using a sort of “confetti noise,” a simple noise model that inserts colored blobs into images and reflects the local nature of anomalies. See Figure 7 for an example.\nSemi-Supervised FCDD A major advantage of FCDD in comparison to reconstruction-based methods is that it can be readily used in a semi-supervised AD setting (Ruff et al., 2020b). To see the effect of having even only a few labeled anomalies and their corresponding ground-truth anomaly maps available for training, we pick for each MVTec-AD class just one true anomalous sample per defect type at random and add it to the training set. This results in only 3–8 anomalous training samples. To also take advantage of the ground-truth heatmaps, we train a model on a pixel level. Let X1, . . . , Xn again denote a batch of inputs with corresponding ground-truth heatmaps Y1, . . . , Yn, each having m = h · w number of pixels. Let A(X) also again denote the corresponding output anomaly heatmap of X . 
Then, we can formulate a pixel-wise objective by the following:\nmin W\n1\nn n∑ i=1 1 m m∑ j=1 (1− (Yi)j)A′(Xi)j − log 1− exp − 1 m m∑ j=1 (Yi)jA ′(Xi)j . (3) Results Figure 1 in the introduction shows heatmaps of FCDD trained on MVTec-AD. The results of the quantitative explanation are shown in Table 2. We can see that FCDD outperforms its competitors in the unsupervised setting and sets a new state of the art of 0.92 pixel-wise mean AUC. In the semi-supervised setting —using only one anomalous sample with corresponding anomaly map per defect class— the explanation performance improves further to 0.96 pixel-wise mean AUC. FCDD also has the most consistent performance across classes." }, { "heading": "4.3 THE CLEVER HANS EFFECT", "text": "Lapuschkin et al. (2016; 2019) revealed that roughly one fifth of all horse images in PASCAL VOC (Everingham et al., 2010) contain a watermark in the lower left corner. They showed that a classifier recognizes this as the relevant class pattern and fails if the watermark is removed. They call this the\n“Clever Hans” effect in memory of the horse Hans, who could correctly answer math problems by reading its master2. We adapt this experiment to one-class classification by swapping our standard setup and train FCDD so that the “horse” class is anomalous and use ImageNet as nominal samples. We choose this setup so that one would expect FCDD to highlight horses in its heatmaps and so that any other highlighting makes FCDD reveal a Clever Hans effect.\nFigure 8 (b) shows that a one-class model is indeed also vulnerable to learning a characterization based on spurious features: the watermarks in the lower left corner which have high scores whereas other regions have low scores. We also observe that the model yields high scores for bars, grids, and fences in Figure 8 (a). This is due to many images in the dataset containing horses jumping over bars or being in fenced areas. In both cases, the horse features themselves do not attain the highest scores because the model has no way of knowing that the spurious features, while providing good discriminative power at training time, would not be desirable upon deployment/test time. In contrast to traditional black-box models, however, transparent detectors like FCDD enable a practitioner to recognize and remedy (e.g. by cleaning or extending the training data) such behavior or other undesirable phenomena (e.g. to avoid unfair social bias)." }, { "heading": "5 CONCLUSION", "text": "In conclusion we find that FCDD, in comparison to previous methods, performs well and is adaptable to both semantic detection tasks (Section 4.1) and more subtle defect detection tasks (Section 4.2). Finally, directly tying an explanation to the anomaly score should make FCDD less vulnerable to attacks (Anders et al., 2020) in contrast to a posteriori explanation methods. We leave an analysis of this phenomenon for future work." }, { "heading": "ACKNOWLEDGEMENTS", "text": "MK, PL, and BJF acknowledge support by the German Research Foundation (DFG) award KL 2698/2- 1 and by the German Federal Ministry of Science and Education (BMBF) awards 01IS18051A, 031B0770E, and 01MK20014U. LR acknowledges support by the German Federal Ministry of Education and Research (BMBF) in the project ALICE III (01IS18049B). RV acknowledges support by the Berlin Institute for the Foundations of Learning and Data (BIFOLD) sponsored by the German Federal Ministry of Education and Research (BMBF). 
KRM was supported in part by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea Government (No. 2017-0-00451 and 2019-0-00079) and was partly supported by the German Federal Ministry of Education and Research (BMBF) for the Berlin Center for Machine Learning (01IS18037A-I) and under the Grants 01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18025A, and 031L0207A-D; the German Research Foundation (DFG) under Grant Math+, EXC 2046/1, Project ID 390685689. Finally, we thank all reviewers for their constructive feedback, which helped to improve this work.\n2https://en.wikipedia.org/wiki/Clever_Hans" }, { "heading": "A RECEPTIVE FIELD SENSITIVITY ANALYSIS", "text": "The receptive field has an impact on both detection performance and explanation quality. Here we provide some heatmaps and AUC scores for networks with different receptive field sizes. We observe that the detection performance is only minimally affected, but larger receptive fields cause the explanation heatmap to become less concentrated and more “blobby.” For MVTec-AD we see that this can also negatively affect pixel-wise AUC scores, see Table 4.\nCIFAR-10 For CIFAR-10 we create eight different network architectures to study the impact of the receptive field size. Each architecture has four convolutional layers and two max-pool layers. To change the receptive field we vary the kernel size of the first convolutional layer between 3 and 17. When this kernel size is 3 then the receptive field contains approximately one quarter of the image; for a kernel size of 17 the receptive field is the entire image. Table 3 shows the detection performance of the networks. Figure 9 contains example heatmaps.\nTable 3: Mean AUC (over all classes and 5 seeds per class) for CIFAR-10 and neural networks with varying receptive field size.\nReceptive field size 18 20 22 24 26 28 30 32\nAUC 0.9328 0.9349 0.9344 0.9320 0.9303 0.9283 0.9257 0.9235\nFigure 9: Anomaly heatmaps for three anomalous test samples on CIFAR-10 models trained on nominal class “airplane.” We grow the receptive field size from 18 (left) to 32 (right).\nMVTec-AD We create six different network architectures for MVTec-AD. They have six convolutional layers and three max-pool layers. We vary the kernel size for all of the convolutional layers between 3 and 13, which corresponds to a receptive field containing 1/16 of the image to the full image respectively. Table 4 shows the explanation performance of the networks in terms of pixel-wise mean AUC. Figure 10 contains some example heatmaps. We observe that a smaller receptive field yields better explanation performance.\nB IMPACT OF THE GAUSSIAN VARIANCE\nUsing the proposed heatmap upsampling in Section 3 FCDD provides full-resolution anomaly heatmaps. However, this upsampling involves the choice of σ for the Gaussian kernel. In this section, we demonstrate the effect of this hyperparameter on the explanation performance of FCDD on MVTec-AD. Table 5 shows the pixel-wise mean AUC, Figure 11 corresponding heatmaps." }, { "heading": "C ANOMALY HEATMAP VISUALIZATION", "text": "For anomaly heatmap visualization, the FCDD anomaly scores A′(X) need to be rescaled to values in [0, 1]. Instead of applying standard min-max scaling that would divide all heatmap entries by maxA′(X), we use anomaly score quantiles to adjust the contrast in the heatmaps. For a collection of inputs X = {X1, . . . , Xn} with corresponding full-resolution anomaly heatmaps\nA = {A′(X1), . . . 
, A′(Xn)}, the normalized heatmap I(X) for some A′(X) is computed as\nI(X)j = min\n{ A′(X)j −min(A)\nqη({A′ −min(A) | A′ ∈ A}) , 1\n} ,\nwhere j denotes the j-th pixel and qη the η-th percentile over all pixels and examples in A. The subtraction and min operation are applied on a pixel level, i.e. the minimum is extracted over all pixels and all samples of A and subtraction is then applied elementwise. Using the η-th percentile might leave some of the values above 1, which is why we finally clamp the pixels at 1.\nThe specific choice of η and set of samples X differs per figure. We select them to highlight different properties of the heatmaps. In general, the lower η the more red (anomalous) regions we have in the heatmaps because more values are left above one (before clamping to 1) and vice versa. The choice of X ranges from just one sample X , such that A′(X) is normalized only w.r.t. to its own scores (highlighting the most anomalous regions within the image), to the complete dataset (highlighting which regions look anomalous compared to the whole dataset). For the latter visualization we rebalance the dataset so that X contains an equal amount of nominal and anomalous images to maintain consistent scaling. The choice of η and X is consistent per figure. In the following we list the choices made for the individual figures.\nMVTec-AD Figures 1, 10, and 11 use η = 0.97 and set X to X for each heatmap I(X) to show relative anomalies. So each image is normalized with respect to itself only.\nFashion-MNIST Figure 4 uses η = 0.85 and sets X to the complete balanced test set.\nCIFAR-10 Figures 6 and 9 use η = 0.85 and set X to X for each heatmap I(X) to show relative anomalies. So each image is normalized with respect to itself only.\nImageNet Figure 5 uses η = 0.97 and sets X to the complete balanced test set.\nPascal VOC Figure 8 uses η = 0.99 and sets X to the complete balanced test set.\nHeatmap Upsampling For the Gaussian kernel heatmap upsampling described in Algorithm 1, we set σ to 1.2 for CIFAR-10 and Fashion-MNIST, to 8 for ImageNet and Pascal VOC, and to 12 for MVTec-AD." 
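A minimal sketch of the quantile-based contrast normalisation I(X) defined at the start of this appendix is given below. The reference set A may be a single heatmap (per-image normalisation) or a balanced set of heatmaps, as described above for the individual figures; the tensor shapes are assumptions.

```python
import torch

def normalize_heatmaps(A_ref, A, eta=0.97):
    """Appendix C normalisation: shift by the global minimum of the reference set,
    divide by the eta-th percentile of the shifted reference scores, and clamp at 1."""
    # A_ref: (N, H, W) reference heatmaps (the set A); A: (M, H, W) heatmaps to visualise
    a_min = A_ref.min()
    q = torch.quantile((A_ref - a_min).flatten(), eta)  # percentile over all pixels and samples
    return torch.clamp((A - a_min) / q, max=1.0)
```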
}, { "heading": "D DETAILS ON THE NETWORK ARCHITECTURES", "text": "Here we provide the complete FCDD network architectures we used on the different datasets.\nFashion-MNIST\n---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 128, 28, 28] 3,328\nBatchNorm2d-2 [-1, 128, 28, 28] 256 LeakyReLU-3 [-1, 128, 28, 28] 0 MaxPool2d-4 [-1, 128, 14, 14] 0\nConv2d-5 [-1, 128, 14, 14] 409,728 MaxPool2d-6 [-1, 128, 7, 7] 0\nConv2d-7 [-1, 1, 7, 7] 129 ================================================================ Total params: 413,441 Trainable params: 413,441 Non-trainable params: 0 Receptive field (pixels): 16 x 16 ----------------------------------------------------------------\nCIFAR-10\n---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 128, 32, 32] 3,584\nBatchNorm2d-2 [-1, 128, 32, 32] 256 LeakyReLU-3 [-1, 128, 32, 32] 0 MaxPool2d-4 [-1, 128, 16, 16] 0 Conv2d-5 [-1, 256, 16, 16] 295,168 BatchNorm2d-6 [-1, 256, 16, 16] 512\nLeakyReLU-7 [-1, 256, 16, 16] 0 Conv2d-8 [-1, 256, 16, 16] 590,080 BatchNorm2d-9 [-1, 256, 16, 16] 512 LeakyReLU-10 [-1, 256, 16, 16] 0 MaxPool2d-11 [-1, 256, 8, 8] 0\nConv2d-12 [-1, 128, 8, 8] 295,040 Conv2d-13 [-1, 1, 8, 8] 129\n================================================================ Total params: 1,185,281 Trainable params: 1,185,281 Non-trainable params: 0 Receptive field (pixels): 22 x 22 ----------------------------------------------------------------\nImageNet, MVTec-AD, and Pascal VOC\n---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 224, 224] 1,792\nBatchNorm2d-2 [-1, 64, 224, 224] 128 ReLU-3 [-1, 64, 224, 224] 0\nMaxPool2d-4 [-1, 64, 112, 112] 0 Conv2d-5 [-1, 128, 112, 112] 73,856 BatchNorm2d-6 [-1, 128, 112, 112] 256 ReLU-7 [-1, 128, 112, 112] 0\nMaxPool2d-8 [-1, 128, 56, 56] 0 Conv2d-9 [-1, 256, 56, 56] 295,168 BatchNorm2d-10 [-1, 256, 56, 56] 512 ReLU-11 [-1, 256, 56, 56] 0 Conv2d-12 [-1, 256, 56, 56] 590,080 BatchNorm2d-13 [-1, 256, 56, 56] 512\nReLU-14 [-1, 256, 56, 56] 0 MaxPool2d-15 [-1, 256, 28, 28] 0\nConv2d-16 [-1, 512, 28, 28] 1,180,160 BatchNorm2d-17 [-1, 512, 28, 28] 1,024\nReLU-18 [-1, 512, 28, 28] 0 Conv2d-19 [-1, 512, 28, 28] 2,359,808\nBatchNorm2d-20 [-1, 512, 28, 28] 1,024 ReLU-21 [-1, 512, 28, 28] 0\nConv2d-22 [-1, 1, 28, 28] 513 ================================================================ Total params: 4,504,833 Trainable params: 4,504,833 Non-trainable params: 0 Receptive field (pixels): 62 x 62 ----------------------------------------------------------------" }, { "heading": "E TRAINING AND OPTIMIZATION", "text": "Here we provide the training and optimization details for the individual experiments from Section 4.\nWe apply common pre-processing (e.g. data normalization) and data augmentation steps in our data loading pipeline. To sample auxiliary anomalies in an online manner during training, each nominal sample of a batch has a 50% chance of being replaced by a randomly picked auxiliary anomaly. This leads to balanced training batches for sufficiently large batch sizes. One epoch in our implementation still refers to the original nominal data training set size, so that approximately 50% of the nominal samples have been seen per training epoch. 
Below, we list further details for the specific datasets.\nFashion-MNIST We train for 400 epochs using a batch size of 128 samples. We optimize the network parameters using SGD (Bottou, 2010) with Nesterov momentum (µ = 0.9) (Sutskever et al., 2013), weight decay of 10−6 and an initial learning rate of 0.01, which decreases the previous\nlearning rate per epoch by a factor of 0.98. The pre-processing pipeline is: (1) Random crop to size 28 with beforehand zero-padding of 2 pixels on all sides (2) random horizontal flipping with a chance of 50% (3) data normalization.\nCIFAR-10 We train for 600 epochs using a batch size of 200 samples. We optimize the network using Adam (Kingma and Ba, 2015) (β = (0.9, 0.999)) with weight decay 10−6 and an initial learning rate of 0.001 which is decreased by a factor of 10 at epoch 400 and 500. The pre-processing pipeline is: (1) Random color jitter with all parameters3 set to 0.01 (2) random crop to size 32 with beforehand zero-padding of 4 pixels on all sides (3) random horizontal flipping with a chance of 50% (4) additive Gaussian noise with σ = 0.001 (5) data normalization.\nImageNet We use the same setup as in CIFAR-10, but resize all images to size 256×256 before forwarding them through the pipeline and change the random crop to size 224 with no padding. Test samples are center cropped to a size of 224 before being normalized.\nPascal VOC We use the same setup as in CIFAR-10, but resize all images to size 224×224 before forwarding them through the pipeline and remove the Random Crop step.\nMVTec-AD For MVTec-AD we redefine an epoch to be ten times an iteration of the full dataset because this improves the computational performance of the data pipeline. We train for 200 epochs using SGD with Nesterov momentum (µ = 0.9), weight decay 10−4, and an initial learning rate of 0.001, which decreases per epoch by a factor of 0.985. The pre-processing pipeline is: (1) Resize to 240×240 pixels (2) random crop to size 224 with no padding (3) random color jitter with either all parameters set to 0.04 or 0.0005, randomly chosen (4) 50% chance to apply additive Gaussian noise (5) data normalization." }, { "heading": "F QUANTITATIVE DETECTION RESULTS FOR INDIVIDUAL CLASSES", "text": "Table 6 shows the class-wise results on Fashion-MNIST for AE, Deep Support Vector Data Description (DSVDD) (Ruff et al., 2018; Bergman and Hoshen, 2020) and Geometric Transformation based AD (GEO) (Golan and El-Yaniv, 2018).\nIn Table 7 the class-wise results for CIFAR-10 are reported. Competitors without OE are AE (Ruff et al., 2018), DSVDD (Ruff et al., 2018), GEO (Golan and El-Yaniv, 2018) and an adaptation of GEO (GEO+) (Hendrycks et al., 2019b). Competitors with OE are the focal loss classifier (Hendrycks et al., 2019b), again GEO+ (Hendrycks et al., 2019b), Deep Semi-supervised Anomaly Detection (Deep SAD) (Ruff et al., 2020b;a) and the hypersphere Classifier (Ruff et al., 2020a).\nIn Table 8 the class-wise results for Imagenet are shown, where competitors are the AE, the focal loss classifier (Hendrycks et al., 2019b), Geo+ (Hendrycks et al., 2019b), Deep SAD (Ruff et al., 2020b) and HSC (Ruff et al., 2020a). 
Results from the literature are marked with an asterisk.\n3 https://pytorch.org/docs/1.4.0/torchvision/transforms.html#torchvision.transforms.ColorJitter" }, { "heading": "G FURTHER QUALITATIVE ANOMALY HEATMAP RESULTS", "text": "In this section we report some further anomaly heatmaps, unblurred baseline heatmaps, as well as class-wise heatmaps for all datasets.\nUnblurred Anomaly Heatmap Baselines Here we show unblurred baseline heatmaps for the figures in Section 4.1. Figures 12, 13, and 14 show the unblurred heatmaps for Fashion-MNIST, ImageNet, and CIFAR-10 respectively.\nClass-wise Anomaly Heatmaps Due to space restrictions we have only shown heatmaps for some of the classes in the main paper. Here we also report a collection of heatmaps for all classes.\nWe show heatmaps with adjusted contrast curves by setting X to the balanced set of all samples for all datasets in this section. Further, we set η = 0.85 for Fashion-MNIST and CIFAR-10, η = 0.99 for MVTec-AD, and η = 0.97 for ImageNet. Note that, to keep the heatmaps for different classes comparable, we use a unified normalization for all heatmaps in one figure. However, since for each class a separate anomaly detector is trained, this yields suboptimal visualizations for some of the classes (for example, the “toothbrush” images for MVTec-AD in Figure 18 where the heatmaps just show a huge red blob). Tweaking the normalization for such classes reveals that the heatmaps actually tend to mark the correct anomalous regions, which in the case of “toothbrushes” can be seen in the explanation performance evaluation in Table 2.\nThe rows in all heatmaps show the following: (1) Input samples (2) FCDD heatmaps (3) gradient heatmaps with HSC (4) autoencoder reconstruction heatmaps. Heatmaps for MVTec-AD add a fifth row containing the ground-truth anomaly map.\nHeatmaps for Fashion-MNIST using auxiliary anomalies from CIFAR-100 are in Figure 15, using EMNIST for OE instead are in Figure 16. CIFAR-10 heatmaps are in Figure 17, and heatmaps for all classes of MVTec-AD are in Figure 18. Finally, we present ImageNet heatmaps in Figures 19 and 20." } ]
2021
EXPLAINABLE DEEP ONE-CLASS CLASSIFICATION
SP:1d4d75e1bbb4e58273bc027f004aa986a587a6dd
[ "This paper proposes an approach to training deep latent variable models on data that is missing not at random. To learn the parameters of deep latent variable models, the paper adopts importance-weighted variational inference techniques. Experiments on a variety of datasets show that the proposed approach is effective by explicitly modeling missing not at random data." ]
When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihood-based inference. We present an approach for building and fitting deep latent variable models (DLVMs) in cases where the missing process is dependent on the missing data. Specifically, a deep neural network enables us to flexibly model the conditional distribution of the missingness pattern given the data. This allows for incorporating prior information about the type of missingness (e.g. self-censoring) into the model. Our inference technique, based on importance-weighted variational inference, involves maximising a lower bound of the joint likelihood. Stochastic gradients of the bound are obtained by using the reparameterisation trick both in latent space and data space. We show on various kinds of data sets and missingness patterns that explicitly modelling the missing process can be invaluable.
[ { "affiliations": [], "name": "Niels Bruun Ipsen" }, { "affiliations": [], "name": "Pierre-Alexandre Mattei" }, { "affiliations": [], "name": "Jes Frellsen" } ]
[ { "authors": [ "Alberto Bietti", "Julien Mairal" ], "title": "Invariance and stability of deep convolutional representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Benjamin Bloem-Reddy", "Yee Whye Teh" ], "title": "Probabilistic symmetries and invariant neural networks", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Stef van Buuren", "Karin Groothuis-Oudshoorn" ], "title": "mice: Multivariate imputation by chained equations in R", "venue": "Journal of Statistical Software, pp", "year": 2010 }, { "authors": [ "Taco S. Cohen", "Mario Geiger", "Maurice Weiler" ], "title": "A general theory of equivariant CNNs on homogeneous spaces", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mark Collier", "Alfredo Nazabal", "Chris Williams" ], "title": "VAEs in the presence of missing data", "venue": "In the First ICML Workshop on The Art of Learning with Missing Values Artemiss (ARTEMISS),", "year": 2020 }, { "authors": [ "Arthur P. Dempster", "Nan M. Laird", "Donald B. Rubin" ], "title": "Maximum likelihood from incomplete data via the EM algorithm", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1977 }, { "authors": [ "Justin Domke", "Daniel Sheldon" ], "title": "Importance weighting and varational inference", "venue": "In Advances in Neural Information Processing Signals,", "year": 2018 }, { "authors": [ "Marco Doretti", "Sara Geneletti", "Elena Stanghellini" ], "title": "Missing data: a unified taxonomy guided by conditional independence", "venue": "International Statistical Review,", "year": 2018 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Michael Figurnov", "Shakir Mohamed", "Andriy Mnih" ], "title": "Implicit reparameterization gradients", "venue": "Advances in Neural Information Processing Signals,", "year": 2018 }, { "authors": [ "Andrew Gelman", "John B. Carlin", "Hal S. Stern", "David B. Dunson", "Aki Vehtari", "Donald B. Rubin" ], "title": "Bayesian data analysis", "venue": "Chapman and Hall/CRC,", "year": 2013 }, { "authors": [ "Zoubin Ghahramani", "Michael I Jordan" ], "title": "Supervised learning from incomplete data via an EM approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 1994 }, { "authors": [ "Zoubin Ghahramani", "Michael I. Jordan" ], "title": "Learning from incomplete data", "venue": "Technical Report AIM1509CBCL-108, Massachusetts Institute of Technology,", "year": 1995 }, { "authors": [ "Sahra Ghalebikesabi", "Rob Cornish", "Luke J. Kelly", "Chris Holmes" ], "title": "Deep generative pattern-set mixture models for nonignorable missingness", "venue": "arXiv preprint arXiv:2103.03532,", "year": 2021 }, { "authors": [ "Peter W. Glynn" ], "title": "Importance sampling for Monte Carlo estimation of quantiles", "venue": "Proceedings of the 2nd St. 
Petersburg Workshop on Simulation,", "year": 1996 }, { "authors": [ "Yu Gong", "Hossein Hajimirsadeghi", "Jiawei He", "Megha Nawhal", "Thibaut Durand", "Greg Mori" ], "title": "Variational selective autoencoder", "venue": "In Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference,", "year": 2020 }, { "authors": [ "Xiangnan He", "Tat-Seng Chua" ], "title": "Neural factorization machines for sparse predictive analytics", "venue": "In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "José Miguel Hernández-Lobato", "Neil Houlsby", "Zoubin Ghahramani" ], "title": "Probabilistic matrix factorization with non-random missing data", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Alexander Ilin", "Tapani Raiko" ], "title": "Practical approaches to principal component analysis in the presence of missing values", "venue": "Journal of Machine Learning Research,", "year": 1957 }, { "authors": [ "Niels Bruun Ipsen", "Pierre-Alexandre Mattei", "Jes Frellsen" ], "title": "How to deal with missing data in supervised deep learning", "venue": "In the First ICML Workshop on The Art of Learning with Missing Values Artemiss (ARTEMISS),", "year": 2020 }, { "authors": [ "Oleg Ivanov", "Michael Figurnov", "Dmitry Vetrov" ], "title": "Variational autoencoder with arbitrary conditioning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with Gumbel-softmax", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Yehuda Koren", "Robert Bell", "Chris Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": null, "year": 2009 }, { "authors": [ "Dawen Liang", "Rahul G. Krishnan", "Matthew D. Hoffman", "Tony Jebara" ], "title": "Variational autoencoders for collaborative filtering", "venue": "In Proceedings of the 2018 World Wide Web Conference,", "year": 2018 }, { "authors": [ "David K. Lim", "Naim U. Rashid", "Junier B. Oliva", "Joseph G. Ibrahim" ], "title": "Handling non-ignorably missing features in electronic health records data using importance-weighted autoencoders", "venue": "arXiv preprint arXiv:2101.07357,", "year": 2021 }, { "authors": [ "Roderick J.A. Little", "Donald B. Rubin" ], "title": "Statistical analysis with missing data", "venue": null, "year": 2002 }, { "authors": [ "Chao Ma", "Wenbo Gong", "José Miguel Hernández-Lobato", "Noam Koenigstein", "Sebastian Nowozin", "Cheng Zhang" ], "title": "Partial VAE for hybrid recommender system", "venue": "In NIPS Workshop on Bayesian Deep Learning,", "year": 2018 }, { "authors": [ "Chao Ma", "Sebastian Tschiatschek", "Konstantina Palla", "Jose Miguel Hernandez-Lobato", "Sebastian Nowozin", "Cheng Zhang" ], "title": "EDDI: Efficient dynamic discovery of high-value information with partial VAE", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Wei Ma", "George H. 
Chen" ], "title": "Missing not at random in matrix completion: The effectiveness of estimating missingness probabilities under a low nuclear norm assumption", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Benjamin M. Marlin", "Richard S. Zemel" ], "title": "Collaborative prediction and ranking with non-random missing data", "venue": "In Proceedings of the third ACM conference on Recommender systems,", "year": 2009 }, { "authors": [ "Benjamin M Marlin", "Richard S Zemel", "Sam Roweis", "Malcolm Slaney" ], "title": "Collaborative filtering and the missing at random assumption", "venue": "In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence,", "year": 2007 }, { "authors": [ "Pierre-Alexandre Mattei", "Jes Frellsen" ], "title": "Leveraging the exact likelihood of deep latent variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Pierre-Alexandre Mattei", "Jes Frellsen" ], "title": "Refit your encoder when new data comes by", "venue": "In 3rd NeurIPS workshop on Bayesian Deep Learning,", "year": 2018 }, { "authors": [ "Pierre-Alexandre Mattei", "Jes Frellsen" ], "title": "MIWAE: Deep generative modelling and imputation of incomplete data sets", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Andriy Mnih", "Russ R. Salakhutdinov" ], "title": "Probabilistic matrix factorization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Shakir Mohamed", "Mihaela Rosca", "Michael Figurnov", "Andriy Mnih" ], "title": "Monte carlo gradient estimation in machine learning", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Karthika Mohan", "Judea Pearl" ], "title": "Graphical models for processing missing data", "venue": "Journal of American Statistical Association (in press),", "year": 2021 }, { "authors": [ "Geert Molenberghs", "Caroline Beunckens", "Cristina Sotto", "Michael G. Kenward" ], "title": "Every missingness not at random model has a missingness at random counterpart with equal fit", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2008 }, { "authors": [ "Razieh Nabi", "Rohit Bhattacharya", "Ilya Shpitser" ], "title": "Full law identification in graphical models of missing data: Completeness results", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alfredo Nazabal", "Pablo M. 
Olmos", "Zoubin Ghahramani", "Isabel Valera" ], "title": "Handling incomplete heterogeneous data using VAEs", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Christian Robert" ], "title": "The Bayesian choice: from decision-theoretic foundations to computational implementation", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "Sam T. Roweis" ], "title": "EM algorithms for PCA and SPCA", "venue": "In Advances in neural information processing systems,", "year": 1998 }, { "authors": [ "Donald B. Rubin" ], "title": "Formalizing subjective notions about the effect of nonrespondents in sample surveys", "venue": "Journal of the American Statistical Association,", "year": 1977 }, { "authors": [ "Donald B. Rubin" ], "title": "Multiple imputation after 18+ years", "venue": "Journal of the American statistical Association,", "year": 1996 }, { "authors": [ "Mauricio Sadinle", "Jerome P. Reiter" ], "title": "Sequential identification of nonignorable missing data mechanisms", "venue": "Statistica Sinica,", "year": 2018 }, { "authors": [ "Tobias Schnabel", "Adith Swaminathan", "Ashudeep Singh", "Navin Chandak", "Thorsten Joachims" ], "title": "Recommendations as treatments: Debiasing learning and evaluation", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Shaun Seaman", "John Galati", "Dan Jackson", "John Carlin" ], "title": "What is meant by “missing at random”", "venue": "Statistical Science,", "year": 2013 }, { "authors": [ "Suvash Sedhain", "Aditya Krishna Menon", "Scott Sanner", "Lexing Xie" ], "title": "AutoRec: Autoencoders meet collaborative filtering", "venue": "In Proceedings of the 24th international conference on World Wide Web,", "year": 2015 }, { "authors": [ "Aude Sportisse", "Claire Boyer", "Julie Josse" ], "title": "Imputation and low-rank estimation with missing not at random data", "venue": "Statistics and Computing,", "year": 2020 }, { "authors": [ "Aude Sportisse", "Claire Boyer", "Julie Josse" ], "title": "Estimation and imputation in probabilistic principal component analysis with missing not at random data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Harald Steck" ], "title": "Evaluation of recommendations: rating-prediction and ranking", "venue": "In Proceedings of the 7th ACM conference on Recommender systems,", "year": 2013 }, { "authors": [ "Daniel J. Stekhoven", "Peter Bühlmann" ], "title": "MissForest—non-parametric missing value imputation for mixed-type data", "venue": null, "year": 2012 }, { "authors": [ "Niansheng Tang", "Yuanyuan Ju" ], "title": "Statistical inference for nonignorable missing-data problems: a selective review", "venue": "Statistical Theory and Related Fields,", "year": 2018 }, { "authors": [ "Michael E. Tipping", "Christopher M. 
Bishop" ], "title": "Probabilistic principal component analysis", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 1999 }, { "authors": [ "Xiaojie Wang", "Rui Zhang", "Yu Sun", "Jianzhong Qi" ], "title": "Doubly robust joint learning for recommendation on data missing not at random", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Samuel Wiqvist", "Pierre-Alexandre Mattei", "Umberto Picchini", "Jes Frellsen" ], "title": "Partially exchangeable networks and architectures for learning summary statistics in approximate Bayesian computation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela Van Der Schaar" ], "title": "GAIN: Missing data imputation using generative adversarial nets", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2018 }, { "authors": [ "Ma" ], "title": "2018) with an embedding size of 20 and a code size of 50, along with a linear mapping to a latent space of size 30. In the Gaussian observation model, the decoder is a linear mapping and there is a sigmoid activation of the mean in data space, scaled to match the scale", "venue": null, "year": 2018 }, { "authors": [ "CPT-v: Marlin" ], "title": "show that a multinomial mixture model with a Conditional Probability", "venue": null, "year": 2007 }, { "authors": [ "MF-MNAR: Hernández-Lobato" ], "title": "2014) extended probabilistic matrix factorization to include", "venue": null, "year": 2014 }, { "authors": [ "MF-IPS: Schnabel" ], "title": "2016) applied propensity-based methods from causal inference", "venue": null, "year": 2016 }, { "authors": [ "MF-DR-JL", "NFM-DR-JL: Wang" ], "title": "2019) combines the propensity-scoring approach", "venue": null, "year": 2019 }, { "authors": [ "Schnabel" ], "title": "2016) with an error-imputation approach by Steck (2013) to obtain a doubly ro", "venue": null, "year": 2013 }, { "authors": [ "Schnabel" ], "title": "2016), 5% of the MCAR test-set is used to learn", "venue": null, "year": 2017 }, { "authors": [ "Cohen" ], "title": "Again, such a setting can appear when there is strong geometric structure in the data (e.g. with images or proteins). Invariance or equivariance can be built in the architecture of πφ(x) by leveraging the quite large body of work on invariant/equivariant convolutional neural networks, see e.g. Bietti & Mairal", "venue": "Wiqvist et al", "year": 2019 }, { "authors": [ "D therein" ], "title": "THEORETICAL PROPERTIES OF THE NOT-MIWAE BOUND The properties of the not-MIWAE bound are directly inherited from the ones of the usual IWAE bound. Indeed, as we will see, the not-MIWAE bound is a particular instance of IWAE bound with an extended latent space composed of both the code and the missing values", "venue": "More specifically,", "year": 2020 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nz\nx\ns\nθ\nφ\nγ\nN\n(a)\nPPCA not-MIWAE PPCA\n(b)\nFigure 1: (a) Graphical model of the not-MIWAE. (b) Gaussian data with MNAR values. Dots are fully observed, partially observed data are displayed as black crosses. A contour of the true distribution is shown together with directions found by PPCA and not-MIWAE with a PPCA decoder.\nMissing data often constitute systemic issues in real-world data analysis, and can be an integral part of some fields, e.g. recommender systems. This requires the analyst to take action by either using methods and models that are applicable to incomplete data or by performing imputations of the missing data before applying models requiring complete data. The expected model performance (often measured in terms of imputation error or innocuity of missingness on the inference results) depends on the assumptions made about the missing mechanism and how well those assumptions match the true missing mechanism. In a seminal paper, Rubin (1976) introduced a formal probabilistic framework to assess missing mechanism assumptions and their consequences. The most commonly used assumption, either implicitly or explicitly, is that a part of the data is missing at random\n(MAR). Essentially, the MAR assumption means that the missing pattern does not depend on the missing values. This makes it possible to ignore the missing data mechanism in likelihood-based inference by marginalizing over the missing data. The often implicit assumption made in nonprobabilistic models and ad-hoc methods is that the data are missing completely at random (MCAR). MCAR is a stronger assumption than MAR, and informally it means that both observed and missing data do not depend on the missing pattern. More details on these assumptions can be found in the monograph of Little & Rubin (2002); of particular interest are also the recent revisits of Seaman et al. (2013) and Doretti et al. (2018). In this paper, our goal is to posit statistical models that leverage deep learning in order to break away from these assumptions. Specifically, we propose a general ∗Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark †Université Côte d’Azur, Inria (Maasai team), Laboratoire J.A. Dieudonné, UMR CNRS 7351, France ‡Equal contribution\nrecipe for dealing with cases where there is prior information about the distribution of the missing pattern given the data (e.g. self-censoring).\nThe MAR and MCAR assumptions are violated when the missing data mechanism is dependent on the missing data themselves. This setting is called missing not at random (MNAR). Here the missing mechanism cannot be ignored, doing so will lead to biased parameter estimates. This setting generally requires a joint model for data and missing mechanism.\nDeep latent variable models (DLVMs, Kingma & Welling, 2013; Rezende et al., 2014) have recently been used for inference and imputation in missing data problems (Nazabal et al., 2020; Ma et al., 2018; 2019; Ivanov et al., 2019; Mattei & Frellsen, 2019). This led to impressive empirical results in the MAR and MCAR case, in particular for high-dimensional data." }, { "heading": "1.1 CONTRIBUTIONS", "text": "We introduce the not-missing-at-random importance-weighted autoencoder (not-MIWAE) which allows for the application of DLVMs to missing data problems where the missing mechanism is MNAR. 
This is inspired by the missing data importance-weighted autoencoder (MIWAE, Mattei & Frellsen, 2019), a framework to train DLVMs in MAR scenarios, based itself on the importanceweighted autoencoder (IWAE) of Burda et al. (2016). The general graphical model for the notMIWAE is shown in figure 1a. The first part of the model is simply a latent variable model: there is a stochastic mapping parameterized by θ from a latent variable z ∼ p(z) to the data x ∼ pθ(x|z), and the data may be partially observed. The second part of the model, which we call the missing model, is a stochastic mapping from the data to the missing mask s ∼ pφ(s|x). Explicit specification of the missing model pφ(s|x) makes it possible to address MNAR issues. The model can be trained efficiently by maximising a lower bound of the joint likelihood (of the observed features and missing pattern) obtained via importance weighted variational inference (Burda et al., 2016). A key difference with the MIWAE is that we use the reparameterization trick in the data space, as well as in the code space, in order to get stochastic gradients of the lower bound.\nMissing processes affect data analysis in a wide range of domains and often the MAR assumption does not hold. We apply our method to censoring in datasets from the UCI database, clipping in images and the issue of selection bias in recommender systems." }, { "heading": "2 BACKGROUND", "text": "Assume that the complete data are stored within a data matrix X = (x1, . . . ,xn)ᵀ ∈ Xn that contain n i.i.d. copies of the random variable x ∈ X , where X = X1 × · · · × Xp is a p-dimensional feature space. For simplicity, xij refers to the j’th feature of xi, and xi refers to the i’th sample in the data matrix. Throughout the text, we will make statements about the random variable x, and only consider samples xi when necessary. In a missing data context, each sample can be split into an observed part and a missing part, xi = (xoi ,x m i ). The pattern of missingness is individual to each copy of x and described by a corresponding mask random variable s ∈ {0, 1}p. This leads to a mask matrix S = (s1, . . . , sn)ᵀ ∈ {0, 1}n×p verifying sij = 1 if xij is observed and sij = 0 if xij is missing.\nWe wish to construct a parametric model pθ,φ(x, s) for the joint distribution of a single data point x and its mask s, which can be factored as\npθ,φ(x, s) = pθ(x)pφ(s|x). (1)\nHere pφ(s|x) = pφ(s|xo,xm) is the conditional distribution of the mask, which may depend on both the observed and missing data, through its own parameters φ. The three assumptions from the framework of Little & Rubin (2002) (see also Ghahramani & Jordan, 1995) pertain to the specific form of this conditional distribution:\n• MCAR: pφ(s|x) = pφ(s), • MAR: pφ(s|x) = pφ(s|xo), • MNAR: pφ(s|x) may depend on both xo and xm.\nTo maximize the likelihood of the parameters (θ, φ), based only on observed quantities, the missing data is integrated out from the joint distribution\npθ,φ(x o, s) = ∫ pθ(x o,xm)pφ(s|xo,xm) dxm. (2)\nIn both the MCAR and MAR cases, inference for θ using the full likelihood becomes proportional to pθ,φ(xo, s) ∝ pθ(xo), and the missing mechanism can be ignored while focusing only on pθ(xo). In the MNAR case, the missing mechanism can depend on both observed and missing data, offering no factorization of the likelihood in equation (2). The parameters of the data generating process and the parameters of the missing data mechanism are tied together by the missing data." 
}, { "heading": "2.1 PPCA EXAMPLE", "text": "A linear DLVM with isotropic noise variance can be used to recover a model similar to probabilistic principal component analysis (PPCA, Roweis, 1998; Tipping & Bishop, 1999). In figure 1b, a dataset affected by an MNAR missing process is shown together with two fitted PPCA models, regular PPCA and the not-MIWAE formulated as a PPCA-like model. Data is generated from a multivariate normal distribution and an MNAR missing process is imposed by setting the horizontal coordinate to missing when it is larger than its mean, i.e. it becomes missing because of the value it would have had, had it been observed. Regular PPCA for missing data assumes that the missing mechanism is MAR so that the missing process is ignorable. This introduces a bias, both in the estimated mean and in the estimated principal signal direction of the data. The not-MIWAE PPCA assumes the missing mechanism is MNAR so the data generating process and missing data mechanism are modelled jointly as described in equation (2)." }, { "heading": "2.2 PREVIOUS WORK", "text": "In (Rubin, 1976) the appropriateness of ignoring the missing process when doing likelihood based or Bayesian inference was introduced and formalized. The introduction of the EM algorithm (Dempster et al., 1977) made it feasible to obtain maximum likelihood estimates in many missing data settings, see e.g. Ghahramani & Jordan (1994; 1995); Little & Rubin (2002). Sampling methods such as Markov chain Monte Carlo have made it possible to sample a target posterior in Bayesian models, including the missing data, so that parameter marginal distributions and missing data marginal distributions are available directly (Gelman et al., 2013). This is also the starting point of the multiple imputations framework of Rubin (1977; 1996). Here the samples of the missing data are used to provide several realisations of complete datasets where complete-data methods can be applied to get combined mean and variability estimates.\nThe framework of Little & Rubin (2002) is instructive in how to handle MNAR problems and a recent review of MNAR methods can be found in (Tang & Ju, 2018). Low rank models were used for estimation and imputation in MNAR settings by Sportisse et al. (2020a). Two approaches were taken to fitting models, 1) maximising the joint distribution of data and missing mask using an EM algorithm, and 2) implicitly modelling the joint distribution by concatenating the data matrix and the missing mask and working with this new matrix. This implies a latent representation both giving rise to the data and the mask. An overview of estimation methods for PCA and PPCA with missing data was given by Ilin & Raiko (2010), while PPCA in the presence of an MNAR missing mechanism has been addressed by Sportisse et al. (2020b). There has been some focus on MNAR issues in the form of selection bias within the recommender system community (Marlin et al., 2007; Marlin & Zemel, 2009; Steck, 2013; Hernández-Lobato et al., 2014; Schnabel et al., 2016; Wang et al., 2019) where methods applied range from joint modelling of data and missing model using multinomial mixtures and matrix factorization to debiasing existing methods using propensity based techniques from causality.\nDeep latent variable models are intuitively appealing in a missing context: the generative part of the model can be used to sample the missing part of an observation. This was already utilized by Rezende et al. 
(2014) to do imputation and denoising by sampling from a Markov chain whose stationary distribution is approximately the conditional distribution of the missing data given the observed. This procedure has been enhanced by Mattei & Frellsen (2018a) using Metropolis-withinGibbs. In both cases the experiments were assuming MAR and a fitted model, based on complete data, was already available.\nApproaches to fitting DLVMs in the presence of missing have recently been suggested, such as the HI-VAE by Nazabal et al. (2020) using an extension of the variational autoencoder (VAE) lower bound, the p-VAE by Ma et al. (2018; 2019) using the VAE lower bound and a permutation invariant encoder, the MIWAE by Mattei & Frellsen (2019), extending the IWAE lower bound (Burda et al., 2016), and GAIN (Yoon et al., 2018) using GANs for missing data imputation. All approaches are assuming that the missing process is MAR or MCAR. In (Gong et al., 2020), the data and missing mask are modelled together, as both being generated by a mapping from the same latent space, thereby tying the data model and missing process together. This gives more flexibility in terms of missing process assumptions, akin to the matrix factorization approach by Sportisse et al. (2020a).\nIn concurrent work, Collier et al. (2020) have developed a deep generative model of the observed data conditioned on the mask random variable, and Lim et al. (2021) apply a model similar to the not-MIWAE to electronic health records data. In forthcoming work, Ghalebikesabi et al. (2021) propose a deep generative model for non-ignorable missingness building on ideas from VAEs and pattern-set mixture models." }, { "heading": "3 INFERENCE IN DLVMS AFFECTED BY MNAR", "text": "In an MNAR setting, the parameters for the data generating process and the missing data mechanism need to be optimized jointly using all observed quantities. The relevant quantity to maximize is therefore the log-(joint) likelihood\n`(θ, φ) = n∑ i=1 log pθ,φ(x o i , si), (3)\nwhere we can rewrite the general contribution of data points log pθ,φ(xo, s) as\nlog ∫ pφ(s|xo,xm)pθ(xo|z)pθ(xm|z)p(z) dz dxm, (4)\nusing the assumption that the observation model is fully factorized pθ(x|z) = ∏ j pθ(xj |z), which implies pθ(x|z) = p(xo|z)pθ(xm|z). The integrals over missing and latent variables make direct maximum likelihood intractable. However, the approach of Burda et al. (2016), using an inference network and importance sampling to derive a more tractable lower bound of `(θ, φ), can be used here as well. The key idea is to posit a conditional distribution qγ(z|xo) called the variational distribution that will play the role of a learnable proposal in an importance sampling scheme.\nAs in VAEs (Kingma & Welling, 2013; Rezende et al., 2014) and IWAEs (Burda et al., 2016), the distribution qγ(z|xo) comes from a simple family (e.g. the Gaussian or Student’s t family) and its parameters are given by the output of a neural network (called inference network or encoder) that takes xo as input. The issue is that a neural net cannot readily deal with variable length inputs (which is the case of xo). This was tackled by several works: Nazabal et al. (2020) and Mattei & Frellsen (2019) advocated simply zero-imputing xo to get inputs with constant length, and Ma et al. 
(2018; 2019) used a permutation-invariant network able to deal with inputs with variable length.\nIntroducing the variational distribution, the contribution of a single observation is equal to\nlog pθ,φ(x o, s) = log\n∫ pφ(s|xo,xm)pθ(xo|z)p(z)\nqγ(z|xo) qγ(z|xo)pθ(xm|z) dxm dz (5)\n= logEz∼qγ(z|xo),xm∼pθ(xm|z)\n[ pφ(s|xo,xm)pθ(xo|z)p(z)\nqγ(z|xo)\n] . (6)\nThe main idea of importance weighed variational inference and of the IWAE is to replace the expectation inside the logarithm by a Monte Carlo estimate of it (Burda et al., 2016). This leads to the objective function\nLK(θ, φ, γ) = n∑ i=1 E log 1 K K∑ k=1 wki , (7) where, for all k ≤ K, i ≤ n,\nwki = pφ(si|xoi ,xmki)pθ(xoi |zki)p(zki)\nqγ(zki|xoi ) , (8)\nand (z1i,xm1i), . . . , (zKi,x m Ki) are K i.i.d. samples from qγ(z|xoi )pθ(xm|z), over which the expectation in equation (7) is taken. The unbiasedness of the Monte Carlo estimates ensures (via Jensen’s inequality) that the objective is indeed a lower-bound of the likelihood. Actually, under the moment conditions of (Domke & Sheldon, 2018, Theorem 3), which we detail in Appendix D, it is possible to show that the sequence (LK(θ, φ, γ))K≥1 converges monotonically (Burda et al., 2016, Theorem 1) to the likelihood:\nL1(θ, φ, γ) ≤ . . . ≤ LK(θ, φ, γ) −−−−→ K→∞ `(θ, φ). (9)\nProperties of the not-MIWAE objective The bound LK(θ, φ, γ) has essentially the same properties as the (M)IWAE bounds, see Mattei & Frellsen, 2019, Section 2.4 for more details. The key difference is that we are integrating over both the latent space and part of the data space. This means that, to obtain unbiased estimates of gradients of the bound, we will need to backpropagate through samples from qγ(z|xoi )pθ(xm|z). A simple way to do this is to use the reparameterization trick both for qγ(z|xoi ) and pθ(xm|z). This is the approach that we chose in our experiments. The main limitation is that pθ(x|z) has to belong to a reparameterizable family, like Gaussians or Student’s t distributions (see Figurnov et al., 2018 for a list of available distributions). If the distribution is not readily reparametrisable (e.g. if the data are discrete), several other options are available, see e.g. the review of Mohamed et al. (2020), and, in the discrete case, the continuous relaxations of Jang et al. (2017) and Maddison et al. (2017).\nImputation When the model has been trained, it can be used to impute missing values. If our performance metric is a loss function L(xm, x̂m), optimal imputations x̂m minimise Exm [L(xm, x̂m)|xo, s]. When L is the squared error, the optimal imputation is the conditional mean that can be estimated via self-normalised importance sampling (Mattei & Frellsen, 2019), see appendix B for more details." }, { "heading": "3.1 USING PRIOR INFORMATION VIA THE MISSING DATA MODEL", "text": "The missing data mechanism can both be known/decided upon in advance (so that the full relationship pφ(s|x) is fixed and no parameters need to be learned) or the type of missing mechanism can be known (but the parameters need to be learnt) or it can be unknown both in terms of parameters and model. The more we know about the nature of the missing mechanism, the more information we can put into designing the missing model. This in turn helps inform the data model how its parameters should be modified so as to accommodate the missing model. This is in line with the findings of Molenberghs et al. (2008), who showed that, for MNAR modelling to work, one has to leverage prior knowledge about the missing process. 
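To make the estimator concrete, the bound in equations (7) and (8) can be computed from reparameterised samples of both the code z and the missing part x^m. The PyTorch sketch below is illustrative rather than a reproduction of the exact implementation: the network sizes, the zero-imputed encoder input and the self-masking Bernoulli missing model are assumptions of the sketch. It also includes the self-normalised importance-sampling imputation described above.

import torch
import torch.nn as nn
from torch.distributions import Normal, Bernoulli

class NotMIWAESketch(nn.Module):
    def __init__(self, p, d_latent=8, h=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(p, h), nn.Tanh(), nn.Linear(h, 2 * d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, h), nn.Tanh(), nn.Linear(h, 2 * p))
        self.a = nn.Parameter(torch.zeros(p))  # self-masking slope, one per feature
        self.b = nn.Parameter(torch.zeros(p))  # self-masking offset, one per feature

    def bound(self, x, s, K=20):
        # x: (n, p) data (missing entries may hold any value); s: (n, p) float mask, 1. = observed.
        mu_z, logvar_z = self.enc(x * s).chunk(2, dim=-1)    # zero-imputed encoder input
        q_z = Normal(mu_z, torch.exp(0.5 * logvar_z))
        z = q_z.rsample((K,))                                # (K, n, d), reparameterised
        mu_x, logvar_x = self.dec(z).chunk(2, dim=-1)
        p_x = Normal(mu_x, torch.exp(0.5 * logvar_x))
        x_m = p_x.rsample()                                  # reparameterised draws of x^m
        x_mix = s * x + (1 - s) * x_m                        # observed values plus imputed draws
        log_p_s = Bernoulli(logits=self.a * x_mix + self.b).log_prob(s).sum(-1)
        log_p_xo = (p_x.log_prob(x) * s).sum(-1)             # observed dimensions only
        log_p_z = Normal(0.0, 1.0).log_prob(z).sum(-1)
        log_q_z = q_z.log_prob(z).sum(-1)
        log_w = log_p_s + log_p_xo + log_p_z - log_q_z       # log of the weights in equation (8), shape (K, n)
        bound = (torch.logsumexp(log_w, 0) - torch.log(torch.tensor(float(K)))).mean()
        # Self-normalised importance-sampling imputation of the missing entries.
        w = torch.softmax(log_w, 0).unsqueeze(-1)
        x_imp = s * x + (1 - s) * (w * x_m).sum(0)
        return bound, x_imp

Training then simply maximises the returned bound (equivalently, minimises its negative) with a stochastic optimiser over mini-batches of (x, s).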
A crucial issue is under what model assumptions the full data distribution can be recovered from incomplete sample. Indeed, some general missing models may lead to inconsistent statistical estimation (see e.g. Mohan & Pearl, 2021; Nabi et al., 2020).\nThe missing model is essentially solving a classification problem; based on the observed data and the output from the data model filling in the missing data, it needs to improve its “accuracy” in predicting the mask. A Bernoulli distribution is used for the probability of the mask given both observed and missing data\npφ(s|xo,xm) = pφ(s|x) = Bern(s|πφ(x)) = ∏p j=1 πφ,j(x) sj (1− πφ,j(x))1−sj . (10)\nHere πj is the estimated probability of being observed for that particular observation for feature j. The mapping πφ,j(x) from the data to the probability of being observed for the j’th feature can be as general or specific as needed. A simple example could be that of self-masking or self-censoring, where the probability of the j’th feature being observed is only dependent on the feature value, xj . Here the mapping can be a sigmoid on a linear mapping of the feature value, πφ,j(x) = σ(axj + b). The missing model can also be based on a group theoretic approach, see appendix C." }, { "heading": "4 EXPERIMENTS", "text": "In this section we apply the not-MIWAE to problems with values MNAR: censoring in multivariate datasets, clipping in images and selection bias in recommender systems. Implementation details and a link to source code can be found in appendix A." }, { "heading": "4.1 EVALUATION METRICS", "text": "Model performance can be assessed using different metrics. A first metric would be to look at how well the marginal distribution of the data has been inferred. This can be assessed, if we happen to have a fully observed test-set available. Indeed, we can look at the test log-likelihood of this fully observed test-set as a measure of how close pθ(x) and the true distribution of x are. In the case of a DLVM, performance can be estimated using importance sampling with the variational distribution as proposal (Rezende et al., 2014). Since the encoder is tuned to observations with missing data, it should be retrained (while keeping the decoder fixed) as suggested by Mattei & Frellsen (2018b).\nAnother metric of interest is the imputation error. In experimental settings where the missing mechanism is under our control, we have access to the actual values of the missing data and the imputation error can be found directly as an error measure between these and the reconstructions from the model. In real-world datasets affected by MNAR processes, we cannot use the usual approach of doing a train-test split of the observed data. As the test-set is biased by the same missing mechanism as the training-set it is not representative of the full population. Here we need a MAR data sample to evaluate model performance (Marlin et al., 2007)." }, { "heading": "4.2 SINGLE IMPUTATION IN UCI DATA SETS AFFECTED BY MNAR", "text": "We compare different imputation techniques on datasets from the UCI database (Dua & Graff, 2017), where in an MCAR setting the MIWAE has shown state of the art performance (Mattei & Frellsen, 2019). An MNAR missing process is introduced by self-masking in half of the features: when the feature value is higher than the feature mean it is set to missing. The MIWAE and not-MIWAE, as well as their linear PPCA-like versions, are fitted to the data with missing values. 
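The missing-model variants compared in this experiment (described next) can all be written as small heads that produce the per-feature logits of equation (10). The following PyTorch sketch gives one possible implementation of each; the exact parameterisations, and in particular the softplus constraint used for the known-sign variant, are assumptions of the sketch rather than the implementation behind the reported results.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AgnosticMissingModel(nn.Module):
    # Logits for p_phi(s | x) from a single dense linear layer on the data-model output.
    def __init__(self, p):
        super().__init__()
        self.lin = nn.Linear(p, p)

    def forward(self, x_mix):
        return self.lin(x_mix)

class SelfMasking(nn.Module):
    # Per-feature logistic regression on the (completed) feature value itself, as in equation (10).
    def __init__(self, p):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(p))
        self.b = nn.Parameter(torch.zeros(p))

    def forward(self, x_mix):
        return self.a * x_mix + self.b

class SelfMaskingKnown(SelfMasking):
    # As above, but with the sign of the slope fixed; here negative, matching a process where
    # larger values are more likely to be missing (the softplus re-parameterisation is an
    # assumption of this sketch, chosen as one simple way to enforce the sign constraint).
    def forward(self, x_mix):
        return -F.softplus(self.a) * x_mix + self.b

In each case the returned logits parameterise the Bernoulli mask likelihood, Bernoulli(logits=...).log_prob(s), inside the importance weights.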
For the notMIWAE three different approaches to the missing model are used: 1) agnostic where the data model output is mapped to logits for the missing process via a single dense linear layer, 2) self-masking where logistic regression is used for each feature and 3) self-masking known where the sign of the weights in the logistic regression is known.\nWe compare to the low-rank approximation of the concatenation of data and mask by Sportisse et al. (2020a) that is implicitly modelling the data and mask jointly. Furthermore we compare to mean imputation, missForest (Stekhoven & Bühlmann, 2012) and MICE (Buuren & GroothuisOudshoorn, 2010) using Bayesian Ridge regression. Similar settings are used for the MIWAE and not-MIWAE, see appendix A. Results over 5 runs are seen in table 1. Results for varying missing rates are in appendix E.\nThe low-rank joint model is almost always better than PPCA, missForest, MICE and mean, i.e. all M(C)AR approaches, which can be attributed to the implicit modelling of data and mask together. At the same time the not-MIWAE PPCA is always better than the corresponding low-rank joint model, except for the agnostic missing model on the Yeast dataset. Supplying the missing model with more knowledge of the missing process (that it is self-masking and the direction of the missing mechanism) improves performance. The not-MIWAE performance is also improved with more knowledge in the missing model. The agnostic missing process can give good performance, but is\noften led astray by an incorrectly learned missing model. This speaks to the trade-off between data model flexibility and missing model flexibility. The not-MIWAE PPCA has huge inductive bias in the data model and so we can employ a more flexible missing model and still get good results. For the not-MIWAE having both a flexible data model and a flexible missing model can be detrimental to performance. One way to asses the learnt missing processes is the mask classification accuracy on fully observed data. These are reported in table A1 and show that the accuracy increases as more information is put into the missing model." }, { "heading": "4.3 CLIPPING IN SVHN IMAGES", "text": "We emulate the clipping phenomenon in images on the street view house numbers dataset (SVHN, Netzer et al., 2011). Here we introduce a self-masking missing mechanism that is identical for all pixels. The missing data is Bernoulli sampled with probability\nPr(sij = 1|xij) = 1\n1 + e−logits , logits =W (xij − b), (11)\nwhere W = −50 and b = 0.75. This mimmicks a clipping process where 0.75 is the clipping point (the data is converted to gray scale in the [0, 1] range). For this experiment we use the true missing process as the missing model in the not-MIWAE.\nTable 2 shows model performance in terms of imputation RMSE and test-set log likelihood as estimated with 10k importance samples. The not-MIWAE outperforms the MIWAE both in terms of test-set log likelihood and imputation RMSE. This is further illustrated in the imputations shown in figure 3. Since the MIWAE is only fitting the observed data, the range of pixel values in the imputations is limited compared to the true range. The not-MIWAE is forced to push some of the data-distribution towards higher pixel values, in order to get a higher likelihood in the logistic regression in the missing model. In figures 2a–2c, histograms over the imputation values are shown together with the true pixel values of the missing data. 
Here we see that the not-MIWAE puts a considerable amount of probability mass above the clipping value." }, { "heading": "4.4 SELECTION BIAS IN THE YAHOO! R3 DATASET", "text": "The Yahoo! R3 dataset (webscope.sandbox.yahoo.com) contains ratings on a scale from 1–5 of songs in the database of the Yahoo! LaunchCast internet radio service and was first presented in (Marlin et al., 2007). It consists of two datasets with the same 1,000 songs selected randomly from\nthe LaunchCast database. The first dataset is considered an MNAR training set and contains selfselected ratings from 15,400 users. In the second dataset, considered an MCAR test-set, 5,400 of these users were asked to rate exactly 10 randomly selected songs. This gives a unique opportunity to train a model on a real-world MNAR-affected dataset while being able to get an unbiased estimate of the imputation error, due to the availability of MCAR ratings. The plausibility that the set of selfselected ratings was subject to an MNAR missing process was explored and substantiated by Marlin et al. (2007). The marginal distributions of samples from the self-selected dataset and the randomly selected dataset can be seen in figures 4a and 4b.\nWe train the MIWAE and the not-MIWAE on the MNAR ratings and evaluate the imputation error on the MCAR ratings. Both a gaussian and a categorical observation model is explored. In order to get reparameterized samples in the data space for the categorical observation model, we use the Gumbel-Softmax trick (Jang et al., 2017) with a temperature of 0.5. The missing model is a logistic regression for each item/feature, with a shared weight across features and individual biases. A description of competitors can be found in appendix A.3 and follows the setup in (Wang et al., 2019). The results are grouped in table 3, from top to bottom, according to models not including the missing process (MAR approaches), models using propensity scoring techniques to debias training losses, and finally models learning a data model and a missing model jointly, without the use of propensity estimates.\nThe not-MIWAE shows state of the art performance, also compared to models based on propensity scores. The propensity based techniques need access to a small sample of MCAR data, i.e. a part of the test-set, to estimate the propensities using Naive Bayes, though they can be estimated using logistic regression if covariates are available (Schnabel et al., 2016) or using a nuclear-norm-constrained matrix factorization of the missing mask itself (Ma & Chen, 2019). We stress that the not-MIWAE does not need access to similar unbiased data in order to learn the missing model. However, the missing model in the not-MIWAE can take available information into account, e.g. we could fit a continuous mapping to the propensities and use this as the missing model, if propensities were available. Histograms over imputations for the missing data in the MCAR test-set can be seen for the MIWAE and notMIWAE in figures 4c and 4d. The marginal distribution of the not-MIWAE imputations are seen to match that of the MCAR test-set better than the marginal distribution of the MIWAE imputations." }, { "heading": "5 CONCLUSION", "text": "The proposed not-MIWAE is versatile both in terms of defining missing mechanisms and in terms of application area. There is a trade-off between data model complexity and missing model complexity. 
In a parsimonious data model a very general missing process can be used while in flexible data\nmodel the missing model needs to be more informative. Specifically, any knowledge about the missing process should be incorporated in the missing model to improve model performance. Doing so using recent advances in equivariant/invariant neural networks is an interesting avenue for future research (see appendix C). Recent developments on the subject of recoverability/identifiability of MNAR models (Sadinle & Reiter, 2018; Mohan & Pearl, 2021; Nabi et al., 2020; Sportisse et al., 2020b) could also be leveraged to design provably idenfiable not-MIWAE models.\nSeveral extensions of the graphical models used here could be explored. For example, one could break off the conditional independence assumptions, in particular the one of the mask given the data. This could, for example, be done by using an additional latent variable pointing directly to the mask. Combined with a discriminative classifier, the not-MIWAE model could also be used in supervised learning with input values missing not at random following the techniques by Ipsen et al. (2020)." }, { "heading": "ACKNOWLEDGMENTS", "text": "The Danish Innovation Foundation supported this work through Danish Center for Big Data Analytics driven Innovation (DABAI). JF acknowledge funding from the Independent Research Fund Denmark (grant number 9131-00082B) and the Novo Nordisk Foundation (grant numbers NNF20OC0062606 and NNF20OC0065611)." }, { "heading": "C MISSING MODEL, GROUP THEORETIC APPROACH", "text": "A more complex form of prior information that can be used to choose the form of πφ(x) is grouptheoretic. For example, we may know a priori that pφ(s|x) is invariant to a certain group action g ·x on the data space:\n∀g, pφ(s|x) = pφ(s|g · x). (20)\nThis would for example be the case, if the data sets were made of images whose class is invariant to translations (which is the case of most image data sets, like MNIST or SVHN), and with a missing model only dependent on the class. Similarly, one may know that the missing process is equivariant:\n∀g, pφ(g · s|x) = pφ(s|g−1 · x). (21)\nAgain, such a setting can appear when there is strong geometric structure in the data (e.g. with images or proteins). Invariance or equivariance can be built in the architecture of πφ(x) by leveraging the quite large body of work on invariant/equivariant convolutional neural networks, see e.g. Bietti & Mairal (2017); Cohen et al. (2019); Zaheer et al. (2017); Wiqvist et al. (2019); Bloem-Reddy & Teh (2020), and references therein." }, { "heading": "D THEORETICAL PROPERTIES OF THE NOT-MIWAE BOUND", "text": "The properties of the not-MIWAE bound are directly inherited from the ones of the usual IWAE bound. Indeed, as we will see, the not-MIWAE bound is a particular instance of IWAE bound with an extended latent space composed of both the code and the missing values. More specifically, recall the definition of the not-MIWAE bound\nLK(θ, φ, γ) = n∑ i=1 E log 1 K K∑ k=1 wki , with wki = pθ(xoi |zki)pφ(si|xoi ,xmki)p(zki) qγ(zki|xoi ) . (22)\nEach ith term of the sum can be seen as an IWAE bound with extended latent variable (zki,xmki), whose prior is pθ(xmki|zki)p(zki). The related importance sampling proposal of the ith term is pθ(x m ki|zki)qγ(zki|xoi ), and the observation model is pφ(si|xoi ,xmki)pθ(xoi |zki).\nSince all n terms of the sum are IWAE bounds, Theorem 1 from Burda et al. (2016) directly gives the monotonicity property:\nL1(θ, φ, γ) ≤ . . . ≤ LK(θ, φ, γ). 
(23)\nRegarding convergence of the bound to the true likelihood, we can use Theorem 3 of Domke & Sheldon (2018) for each term of the sum to get the following result. Theorem. Assuming that, for all i ∈ {1, ..., n},\n• there exists αi > 0 such that E [ |w1i − pθ,φ(xoi , si)|2+αi ] <∞,\n• lim supK−→∞ E [ K/(w1i + ...+ wKi) ] <∞,\nthe not-MIWAE bound converges to the true likelihood at rate 1/K:\n`(θ, φ)− LK(θ, φ, γ) ∼ K→∞\n1\nK n∑ i=1 Var[w1i] 2pθ,φ(x o i , si) 2 . (24)\nE VARYING MISSING RATE (UCI)\nThe UCI experiments use a self-masking missing process in half the features: when the feature value is higher than the feature mean it is set to missing. In order to investigate varying missing rates we change the cutoff point from the mean to the mean plus an offset. The offsets used are {0, 0.25, 0.5, 0.75, 1.0}, so that the largest cutoff point will be the mean plus one standard deviation. Increasing the cutoff point further results in mainly imputing outliers. Results for PPCA and notMIWAE PPCA using the agnostic missing model are seen in figure 5 and using the self-masking model with known sign of the weights are seen in figure 6. Figure 7 shows the results for MIWAE and not-MIWAE using self-masking with known sign of the weights." } ]
2021
null
SP:da630280f443afedfacaf7ad1abe20d97ebb60f2
[ "In this work generative models using a GP as prior and a deep network as likelihood (GP-DGMs) are considered. In the VAE formalism for inference, the novelty of this paper is located in the encoder: It is sparse and the posterior can be computed even when part of the observations are missing. Sparsity is obtained using inducing inputs and the missing observations are handled through the use of deep sets, i.e. the observations aren't given as a vector, but as a permutation-invariant set of (index, value) pairs." ]
Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data are Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points, which are essential for the computational efficiency of GPs, nor do they handle missing data – a natural occurrence in many spatio-temporal datasets – in a principled manner. We address these shortcomings with the development of the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations. Leveraging the benefits of amortised variational inference, the SGP-VAE enables inference in multi-output sparse GPs on previously unobserved data with no additional training. The SGP-VAE is evaluated in a variety of experiments where it outperforms alternative approaches including multi-output GPs and structured VAEs.
[]
[ { "authors": [ "Mauricio A Álvarez", "Neil D Lawrence" ], "title": "Computationally efficient convolved multiple output Gaussian processes", "venue": "The Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Gowtham Atluri", "Anuj Karpatne", "Vipin Kumar" ], "title": "Spatio-temporal data mining: A survey of problems and methods", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "Edwin V Bonilla", "Kian M Chai", "Christopher Williams" ], "title": "Multi-task Gaussian process prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Thang D Bui", "Josiah Yan", "Richard E Turner" ], "title": "A unifying framework for Gaussian process pseudo-point approximations using power expectation propagation", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Alex Campbell", "Pietro Liò" ], "title": "tvGP-VAE: Tensor-variate Gaussian process prior variational autoencoder", "venue": "arXiv preprint arXiv:2006.04788,", "year": 2020 }, { "authors": [ "Francesco P Casale", "Adrian Dalca", "Luca Saglietti", "Jennifer Listgarten", "Nicolo Fusi" ], "title": "Gaussian process prior variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Andreas Damianou", "Neil D Lawrence" ], "title": "Deep Gaussian processes", "venue": "In Artificial Intelligence and Statistics,", "year": 2013 }, { "authors": [ "Vincent Fortuin", "Dmitry Baranchuk", "Gunnar Rätsch", "Stephan Mandt" ], "title": "GP-VAE: Deep probabilistic time series imputation", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Pierre Goovaerts" ], "title": "Geostatistics for natural resources evaluation", "venue": "Oxford University Press on Demand,", "year": 1997 }, { "authors": [ "Matthew J Johnson", "David K Duvenaud", "Alex Wiltschko", "Ryan P Adams", "Sandeep R Datta" ], "title": "Composing graphical models with neural networks for structured representations and fast inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Neil D Lawrence" ], "title": "Gaussian process latent variable models for visualisation of high dimensional data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "Neil D Lawrence", "Andrew J Moore" ], "title": "Hierarchical Gaussian process latent variable models", "venue": "In Proceedings of the 24th International Conference on Machine Learning,", "year": 2007 }, { "authors": [ "Wu Lin", "Nicolas Hubacher", "Mohammad Emtiyaz Khan" ], "title": "Variational message passing with structured inference networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chao Ma", "Sebastian Tschiatschek", "Konstantina Palla", "José Miguel Hernández-Lobato", "Sebastian Nowozin", "Cheng Zhang" ], "title": "EDDI: efficient dynamic discovery of high-value information with partial VAE", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", 
"year": 2019 }, { "authors": [ "Alfredo Nazabal", "Pablo M Olmos", "Zoubin Ghahramani", "Isabel Valera" ], "title": "Handling incomplete heterogeneous data using VAEs", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Manfred Opper", "Cédric Archambeau" ], "title": "The variational Gaussian approximation revisited", "venue": "Neural computation,", "year": 2009 }, { "authors": [ "Michael Pearce" ], "title": "The Gaussian process prior VAE for interpretable latent dynamics from pixels", "venue": "In Symposium on Advances in Approximate Bayesian Inference,", "year": 2020 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "PointNet: Deep learning on point sets for 3D classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Siddharth Ramchandran", "Gleb Tikhonov", "Miika Koskinen", "Harri Lähdesmäki" ], "title": "Longitudinal variational autoencoder", "venue": "arXiv preprint arXiv:2006.09763,", "year": 2020 }, { "authors": [ "James Requeima", "William Tebbutt", "Wessel Bruinsma", "Richard E Turner" ], "title": "The Gaussian process autoregressive regression model (GPAR)", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Yee Whye Teh", "Matthias W Seeger", "Michael I Jordan" ], "title": "Semiparametric latent factor models", "venue": "In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics,", "year": 2005 }, { "authors": [ "Michael E Tipping", "Christopher M Bishop" ], "title": "Probabilistic principal component analysis", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 1999 }, { "authors": [ "Michalis Titsias" ], "title": "Variational learning of inducing variables in sparse Gaussian processes", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Rich E Turner", "Maneesh Sahani" ], "title": "Two problems with variational expectation maximisation for time-series models", "venue": "Bayesian Time series models,", "year": 2011 }, { "authors": [ "Ramakrishna Vedantam", "Ian Fischer", "Jonathan Huang", "Kevin Murphy" ], "title": "Generative models of visually grounded imagination", "venue": "arXiv preprint arXiv:1705.10762,", "year": 2017 }, { "authors": [ "Byron M Yu", "John P Cunningham", "Gopal Santhanam", "Stephen I Ryu", "Krishna V Shenoy", "Maneesh Sahani" ], "title": "Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Increasing amounts of large, multi-dimensional datasets that exhibit strong spatio-temporal dependencies are arising from a wealth of domains, including earth, social and environmental sciences (Atluri et al., 2018). For example, consider modelling daily atmospheric measurements taken by weather stations situated across the globe. Such data are (1) large in number; (2) subject to strong spatio-temporal dependencies; (3) multi-dimensional; and (4) non-Gaussian with complex dependencies across outputs. There exist two venerable approaches for handling these characteristics: Gaussian process (GP) regression and deep generative models (DGMs). GPs provide a framework for encoding high-level assumptions about latent processes, such as smoothness or periodicity, making them effective in handling spatio-temporal dependencies. Yet, existing approaches do not support the use of flexible likelihoods necessary for modelling complex multi-dimensional outputs. In contrast, DGMs support the use of flexible likelihoods; however, they do not provide a natural route through which spatio-temporal dependencies can be encoded. The amalgamation of GPs and DGMs, GP-DGMs, use latent functions drawn independently from GPs, which are then passed through a DGM at each input location. GP-DGMs combine the complementary strengths of both approaches, making them naturally suited for modelling spatio-temporal datasets.\nIntrinsic to the application of many spatio-temporal datasets is the notion of tasks. For instance: medicine has individual patients; each trial in a scientific experiment produces an individual dataset; and, in the case of a single large dataset, it is often convenient to split it into separate tasks to improve computational efficiency. GP-DGMs support the presence of multiple tasks in a memory efficient way through the use of amortisation, giving rise to the Gaussian process variational autoencoder (GP-VAE), a model that has recently gained considerable attention from the research community (Pearce, 2020; Fortuin et al., 2020; Casale et al., 2018; Campbell & Liò, 2020; Ramchandran et al., 2020). However, previous work does not support sparse GP approximations based on inducing points, a necessity for modelling even moderately sized datasets. Furthermore, many spatio-temporal datasets contain an abundance of missing data: weather measurements are often absent due to sensor failure, and in medicine only single measurements are taken at any instance. Handling partial observations in a principled manner is essential for modelling spatio-temporal data, but is yet to be considered.\nOur key technical contributions are as follows:\ni) We develop the sparse GP-VAE (SGP-VAE), which uses inference networks to parameterise multi-output sparse GP approximations.\nii) We employ a suite of partial inference networks for handling missing data in the SGP-VAE. iii) We conduct a rigorous evaluation of the SGP-VAE in a variety of experiments, demonstrat-\ning excellent performance relative to existing multi-output GPs and structured VAEs." }, { "heading": "2 A FAMILY OF SPATIO-TEMPORAL VARIATIONAL AUTOENCODERS", "text": "Consider the multi-task regression problem in which we wish to model T datasets D = {D(t)}Tt=1, each of which comprises input/output pairsD(t) = {x(t)n ,y(t)n }Ntn=1, x (t) n ∈ RD and y(t)n ∈ RP . 
Further, let any possible permutation of observed values be potentially missing, such that each observation y(t)n = yon\n(t) ∪ yun(t) contains a set of observed values yon(t) and unobserved values yun(t), with O(t)n denoting the index set of observed values. For each task, we model the distribution of each observation y(t)n , conditioned on a corresponding latent variable f (t)n ∈ RK , as a fully-factorised Gaussian distribution parameterised by passing f (t)n through a decoder deep neural network (DNN) with parameters θ2. The elements of f (t)n correspond to the evaluation of a K-dimensional latent function f (t) = (f\n(t) 1 , f (t) 2 , . . . , f (t) K ) at input x (t) n . That is, f (t)n = f (t)(x (t) n ). Each latent function f (t) is\nmodelled as being drawn from K independent GP priors with hyper-parameters θ1 = {θ1,k}Kk=1, giving rise to the complete probabilistic model:\nf (t) ∼ K∏ k=1 GP ( 0, kθ1,k (x,x ′) )︸ ︷︷ ︸\npθ1 (f (t) k )\ny(t)|f (t) ∼ Nt∏ n=1 N ( µoθ2(f (t) n ), diag σ o θ2 2(f (t)n ) )\n︸ ︷︷ ︸ pθ2 (y o n (t)|f(t),x(t)n ,O(t)n )\n(1)\nwhere µoθ2(f (t) n ) and σ o θ2 2(f (t)n ) are the outputs of the decoder indexed byO (t) n . We shall refer to the set θ = {θ1, θ2} as the model parameters, which are shared across tasks. The probabilistic model in equation 1 explicitly accounts for dependencies between latent variables through the GP prior. The motive of the latent structure is twofold: to discover a simpler representation of each observation, and to capture the dependencies between observations at different input locations." }, { "heading": "2.1 MOTIVATION FOR SPARSE APPROXIMATIONS AND AMORTISED INFERENCE", "text": "The use of amortised inference in DGMs and sparse approximations in GPs enables inference in these respective models to scale to large quantities of data. To ensure the same for the GP-DGM described in equation 1, we require the use of both techniques. In particular, amortised inference is necessary to prevent the number of variational parameters scaling with O (∑T t=1N (t) )\n. Further, the inference network can be used to condition on previously unobserved data without needing to learn new variational parameters. Similarly, sparse approximations are necessary to prevent the computational complexity increasing cubically with the size of each task O (∑T t=1N (t)3 ) .\nUnfortunately, it is far from straightforward to combine sparse approximations and amortised inference in a computationally efficient way. To see this, consider the standard form for the sparse GP approximate posterior, q(f) = pθ1(f\\u|u)q(u) where q(u) = N (u; m, S) with m, S and Z, the inducing point locations, being the variational parameters. q(u) does not decompose into a product over N (t) factors and is therefore not amendable to per-datapoint amortisation. That is, m and S must be optimised as free-form variational parameters. A naı̈ve approach to achieving per-datapoint amortisation is to decompose q(u) into the prior pθ1(u) multiplied by the product of approximate likelihoods, one for each inducing point. Each approximate likelihood is itself equal to the product of per-datapoint approximate likelihoods, which depend on both the observation yon and the distance of the input xn to that of the inducing point. An inference network which takes these two values of inputs can be used to obtain the parameters of the approximate likelihood factors. Whilst we found that this approach worked, it is somewhat unprincipled. 
Moreover, it requires passing each datapoint/inducing point pair through an inference network, which scales very poorly. In the following\nsection, we introduce a theoretically principled decomposition of q(f) we term the sparse structured approximate posterior which will enable efficient amortization." }, { "heading": "2.2 THE SPARSE STRUCTURED APPROXIMATE POSTERIOR", "text": "By simultaneously leveraging amortised inference and sparse GP approximations, we can perform efficient and scalable approximate inference. We specify the sparse structured approximate posterior, q(f (t)), which approximates the intractable true posterior for task t:\npθ(f (t)|y(t),X(t)) = 1\nZp pθ1(f\n(t)) Nt∏ n=1 pθ2(y o n (t)|f (t),x(t)n ,O(t)n )\n≈ 1 Zq pθ1(f (t)) Nt∏ n=1 lφl(u; y o n (t),x(t)n ,Z) = q(f (t)).\n(2)\nAnalogous to its presence in the true posterior, the approximate posterior retains the GP prior, yet replaces each non-conjugate likelihood factor with an approximate likelihood, lφl(u; y o n (t),x (t) n ,Z), over a set ofKM ‘inducing points’, u = ∪Kk=1∪Mm=1umk, at ‘inducing locations’, Z = ∪Kk=1∪Mm=1 zmk. For tractability, we restrict the approximate likelihoods to be Gaussians factorised across each latent dimension, parameterised by passing each observation through a partial inference network:\nlφl(uk; y o n (t),x(t)n ,Zk) = N ( µφl,k(y o n (t)); k\nf (t) nkuk\nK−1ukukuk, σ 2 φl,k (yon (t)) )\n(3)\nwhere φl denotes the weights and biases of the partial inference network, whose outputs are shown in red. This form is motivated by the work of Bui et al. (2017), who demonstrate the optimality of approximate likelihoods of the form N ( gn; kf(t)nkuk K−1ukukuk, vn ) , a result we prove in Appendix A.1. Whilst, in general, the optimal free-form values of gn and vn depend on all of the data points, we make the simplifying assumption that they depend only on yon\n(t). For GP regression with Gaussian noise, this assumption holds true as gn = yn and vn = σ2y (Bui et al., 2017).\nThe resulting approximate posterior can be interpreted as the exact posterior induced by a surrogate regression problem, in which ‘pseudo-observations’ gn are produced from a linear transformation of inducing points with additive ‘pseudo-noise’ vn, gn = kf(t)nkuk K−1ukukuk + √ vn . The inference network learns to construct this surrogate regression problem such that it results in a posterior that is close to our target posterior.\nBy sharing variational parameters φ = {φl, Z} across tasks, inference is amortised across both datapoints and tasks. The approximate posterior for a single task corresponds to the product of K independent GPs, with mean and covariance functions\nm̂ (t) k (x) = kf(t)k uk Φ (t) k Kukf (t)k Σ (t) φl,k\n−1 µ\n(t) φl,k\nk̂ (t) k (x,x ′) = k f (t) k f ′ k (t) − kf(t)k ukK −1 ukuk kukf ′k (t) + k f (t) k uk Φ (t) k kukf ′k (t)\n(4)\nwhere Φ(t)k −1\n= Kukuk + Kukf (t)k Σ\n(t) φl,k\n−1 K\nf (t) k uk\n, [ µ\n(t) φl,k ] i = µφl,k(y o i (t)) and [ Σ (t) φl,k ] ij =\nδijσ 2 φl,k (yoi (t)). See Appendix A.2 for a complete derivation. The computational complexity associated with evaluating the mean and covariance functions is O ( KM2N (t) ) , a significant improve-\nment over the O ( P 3N (t) 3 )\ncost associated with exact multi-output GPs for KM2 P 3N (t)2. We refer to the combination of the aforementioned probabilistic model and sparse structured approximate posterior as the SGP-VAE.\nThe SGP-VAE addresses three major shortcomings of existing sparse GP frameworks. 
First, the inference network can be used to condition on previously unobserved data without needing to learn new variational parameters. Suppose we use the standard sparse GP variational approximation q(f) = pθ1(f\\u|u)q(u) where q(u) = N (u; m, S). If more data are observed, m and S have to be re-optimised. When an inference network is used to parameterise q(u), the approximate posterior\nis ‘automatically’ updated by mapping from the new observations to their corresponding approximate likelihood terms. Second, the complexity of the approximate posterior can be modified as desired with no changes to the inference network, or additional training, necessary: any changes in the morphology of inducing points corresponds to a deterministic transformation of the inference network outputs. Third, if the inducing point locations are fixed, then the number of variational parameters does not depend on the size of the dataset, even as more inducing points are added. This contrasts with the standard approach, in which new variational parameters are appended to m and S." }, { "heading": "2.3 TRAINING THE SGP-VAE", "text": "Learning and inference in the SGP-VAE are concerned with determining the model parameters θ and variational parameters φ. These objectives can be attained simultaneously by maximising the evidence lower bound (ELBO), given by LELBO = ∑T t=1 L (t) ELBO where\nL(t)ELBO = Eq(f(t)) [ pθ(y (t), f (t))\nq(f (t))\n] = Eq(f (t)) [ log pθ(y (t)|f (t)) ] − KL ( q(t)(u) ‖ pθ1(u) ) (5)\nand q(t)(u) ∝ pθ1(u) ∏Nt n=1 lφl(u; y o n (t),x (t) n ,Z). Fortunately, since both q(t)(u) and pθ1(u) are multivariate Gaussians, the final term, and its gradients, has an analytic solution. The first term amounts to propagating a Gaussian through a non-linear DNN, so must be approximated using a Monte Carlo estimate. We employ the reparameterisation trick (Kingma & Welling, 2014) to account for the dependency of the sampling procedure on both θ and φ when estimating its gradients.\nWe mini-batch over tasks, such that only a single L(t)ELBO is computed per update. Importantly, in combination with the inference network, this means that we avoid having to retain the O ( TM2 ) terms associated with T Cholesky factors if we were to use a free-form q(u) for each task. Instead, the memory requirement is dominated by the O ( KM2 +KNM + |φl| ) terms associated with\nstoring Kukuk , Kukf (t)k and φl, as instantiating µ (t) φl,k and Σ(t)φl,k involves onlyO (KN) terms. 1 This corresponds to a considerable reduction in memory. See Appendix C for a thorough comparison of memory requirements." }, { "heading": "2.4 PARTIAL INFERENCE NETWORKS", "text": "Partially observed data is regularly encountered in spatio-temporal datasets, making it necessary to handle it in a principled manner. Missing data is naturally handled by Bayesian inference. However, for models using inference networks, it necessitates special treatment. One approach to handling partially observed data is to impute missing values with zeros (Nazabal et al., 2020; Fortuin et al., 2020). Whilst simple to implement, zero imputation is theoretically unappealing as the inference network can no longer distinguish between a missing value and a true zero.\nInstead, we turn towards the ideas of Deep Sets (Zaheer et al., 2017). By coupling the observed value with dimension index, we may reinterpret each partial observation as a permutation invariant set. 
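For intuition, a minimal PyTorch sketch of such a set-based partial encoder is given below; it is closest to the PointNet specification defined formally in what follows, and the layer sizes, as well as the choice to emit mean and log-variance parameters for the K latent channels (matching equation (3)), are assumptions of the sketch rather than the exact architecture used in the experiments.

import torch
import torch.nn as nn

class DeepSetPartialEncoder(nn.Module):
    # Shared embedding h of (dimension index, observed value) pairs, summed over the observed
    # entries of each data point, then mapped by rho to per-latent-channel parameters.
    def __init__(self, p, k_latent, r=32, width=64):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(2, width), nn.ReLU(), nn.Linear(width, r))
        self.rho = nn.Sequential(nn.Linear(r, width), nn.ReLU(), nn.Linear(width, 2 * k_latent))
        self.p = p

    def forward(self, y, s):
        # y: (n, p) observations (missing entries may hold any value); s: (n, p) float mask.
        idx = torch.arange(self.p, dtype=y.dtype).expand_as(y)
        pairs = torch.stack([idx, y], dim=-1)               # (n, p, 2)
        summed = (self.h(pairs) * s.unsqueeze(-1)).sum(1)   # sum over observed entries only
        mu, log_sigma2 = self.rho(summed).chunk(2, dim=-1)
        return mu, log_sigma2

enc = DeepSetPartialEncoder(p=5, k_latent=3)
mu, log_sigma2 = enc(torch.randn(10, 5), (torch.rand(10, 5) > 0.3).float())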
We define a family of permutation invariant partial inference networks2 as\n( µφ(y o n), logσ 2 φ(y o n) ) = ρφ2 ∑ p∈On hφ1(snp) (6) where hφ1 : R2 → RR and ρφ2 : RR → R2P are DNN mappings with parameters φ1 and φ2, respectively. snp denotes the couples of observed value ynp and corresponding dimension index p. The formulation in equation 6 is identical to the partial variational autoencoder (VAE) framework established by Ma et al. (2019). There are a number of partial inference networks which conform to this general framework, three of which include:\n1This assumes input locations are shared across tasks, which is true for all experiments we considered. 2Whilst the formulation in equation 6 can describe any permutation invariant set function, there is a caveat:\nboth hφ1 and ρφ2 can be infinitely complex, hence linear complexity is not guaranteed.\nPointNet Inspired by the PointNet approach of Qi et al. (2017) and later developed by Ma et al. (2019) for use in partial VAEs, the PointNet specification uses the concatenation of dimension index with observed value: snp = (p, ynp). This specification treats the dimension indices as continuous variables. Thus, an implicit assumption of PointNet is the assumption of smoothness between values of neighbouring dimensions. Although valid in a computer vision application, it is ill-suited for tasks in which the indexing of dimensions is arbitrary.\nIndexNet Alternatively, one may use the dimension index to select the first DNN mapping: hφ1(snp) = hφ1,p(ynp). Whereas PointNet treats dimension indices as points in space, this specification retains their role as indices. We refer to it as the IndexNet specification.\nFactorNet A special case of IndexNet, first proposed by Vedantam et al. (2017), uses a separate inference network for each observation dimension. The approximate likelihood is factorised into a product of Gaussians, one for each output dimension: lφl(uk; y\no n,xn,Zk) =∏ p∈On N ( µφl,pk(ynp); kfnkukK −1 ukuk uk, σ 2 φl,pk (ynp) ) . We term this approach FactorNet.\nSee Appendix G for corresponding computational graphs. Note that FactorNet is equivalent to IndexNet with ρφ2 defined by the deterministic transformations of natural parameters of Gaussian distributions. Since IndexNet allows this transformation to be learnt, we might expect it to always produce a better partial inference network for the task at hand. However, it is important to consider the ability of inference networks to generalise. Although more complex inference networks will better approximate the optimal non-amortised approximate posterior on training data, they may produce poor approximations to it on the held-out data.3 In particular, FactorNet is constrained to consider the individual contribution of each observation dimension, whereas the others are not. Doing so is necessary for generalising to different quantities and patterns of missingness, hence we anticipate FactorNet to perform better in such settings." }, { "heading": "3 RELATED WORK", "text": "We focus our comparison on approximate inference techniques; however, Appendix D presents a unifying view of GP-DGMs.\nStructured Variational Autoencoder Only recently has the use of structured latent variable priors in VAEs been considered. In their seminal work, Johnson et al. (2016) investigate the combination of probabilistic graphical models with neural networks to learn structured latent variable representations. 
The authors consider a two stage iterative procedure, whereby the optimum of a surrogate objective function — containing approximate likelihoods in place of true likelihoods — is found and substituted into the original ELBO. The resultant structured VAE (SVAE) objective is then optimised. In the case of fixed model parameters θ, the SVAE objective is equivalent to optimising the ELBO using the structured approximate posterior over latent variables q(z) ∝ pθ(z)lφ(z|y). Accordingly, the SGP-VAE can be viewed as an instance of the SVAE. Lin et al. (2018) build upon the SVAE, proposing a structured approximate posterior of the form q(z) ∝ qφ(z)lφ(z|y). The authors refer to the approximate posterior as the structured inference network (SIN). Rather than using the latent prior pθ(z), SIN incorporates the model’s latent structure through qφ(z). The core advantage of SIN is its extension to more complex latent priors containing non-conjugate factors — qφ(z) can replace them with their nearest conjugate approximations whilst retaining a similar latent structure. Although the frameworks proposed by Johnson et al. and Lin et al. are more general than ours, the authors only consider Gaussian mixture model and linear dynamical system (LDS) latent priors.\nGaussian Process Variational Autoencoders The earliest example of combining VAEs with GPs is the GP prior VAE (GPPVAE) (Casale et al., 2018). There are significant differences between our work and the GPPVAE, most notably in the GPPVAE’s use of a fully-factorised approximate posterior — an approximation that is known to perform poorly in time-series and spatial settings (Turner & Sahani, 2011). Closely related to the GPPVAE is Ramchandran et al.’s (2020) longitudinal VAE, which also adopts a fully-factorised approximate posterior, yet uses additive covariance functions for heterogeneous input data. Fortuin et al. (2020) consider the use of a Gaussian ap-\n3This kind of ‘overfitting’ is different to the classical notion of overfitting model parameters. It refers to how well optimal non-amortised approximate posteriors are approximated on the training versus test data.\nproximate posterior with a tridiagonal precision matrix Λ, q(f) = N ( f ; m, Λ−1 ) , where m and Λ are parameterised by an inference network. Whilst this permits computational efficiency, the parameterisation is only appropriate for regularly spaced temporal data and neglects rigorous treatment of long term dependencies. Campbell & Liò (2020) employ an equivalent sparsely structured variational posterior as that used by Fortuin et al., extending the framework to handle more general spatio-temporal data. Their method is similarly restricted to regularly spaced spatio-temporal data. A fundamental difference between our framework and that of Fortuin et al. and Campbell & Liò is the inclusion of the GP prior in the approximate posterior. As shown by Opper & Archambeau (2009), the structured approximate posterior is identical in form to the optimum Gaussian approximation to the true posterior. Most similar to ours is the approach of Pearce (2020), who considers the structured approximate posterior q(f) = 1Zq pθ1(f) ∏N n=1 lφl(fn; yn). We refer to this as the GP-VAE. Pearce’s approach is a special case of the SGP-VAE for u = f and no missing data. Moreover, Pearce only considers the application to modelling pixel dynamics and the comparison to the standard VAE. See Appendix B for further details." 
}, { "heading": "4 EXPERIMENTS", "text": "We investigate the performance of the SGP-VAE in illustrative bouncing ball experiments, followed by experiments in the small and large data regimes. The first bouncing ball experiment provides a visualisation of the mechanics of the SGP-VAE, and a quantitative comparison to other structured VAEs. The proceeding small-scale experiments demonstrate the utility of the GP-VAE and show that amortisation, especially in the presence of partially observed data, is not at the expense of predictive performance. In the final two experiments, we showcase the efficacy of the SGP-VAE on large, multi-output spatio-temporal datasets for which the use of amortisation is necessary. Full experimental details are provided in Appendix E." }, { "heading": "4.1 SYNTHETIC BOUNCING BALL EXPERIMENT", "text": "The bouncing ball experiment — first introduced by Johnson et al. (2016) for evaluating the SVAE and later considered by Lin et al. (2018) for evaluating SIN — considers a sequence of onedimensional images of height 10 representing a ball bouncing under linear dynamics, (x(t)n ∈ R1, y (t) n ∈ R10). The GP-VAE is able to significantly outperform both the SVAE and SIN in the original experiment, as shown in Figure 1a. To showcase the versatility of the SGP-VAE, we extend the complexity of the original experiment to consider a sequence of images of height 100, y(t)n ∈ R100, representing two bouncing balls: one under linear dynamics and another under gravity. Furthermore, the images are corrupted by removing 25% of the pixels at random. The dataset consists of T = 80 noisy image sequences, each of length N = 500, with the goal being to predict the trajectory of the ball given a prefix of a longer sequence.\nUsing a two-dimensional latent space with periodic kernels, Figure 1b compares the posterior latent GPs and the mean predictive distribution with the ground truth for a single image sequence. Observe that the SGP-VAE has ‘disentangled’ the dynamics of each ball, using a single latent dimension to\nmodel each. The SGP-VAE reproduces the image sequences with impressive precision, owing in equal measure to (1) the ability of the GPs prior to model the latent dynamics and (2) the flexibility of the likelihood function to map to the high-dimensional observations." }, { "heading": "4.2 SMALL-SCALE EXPERIMENTS", "text": "EEG Adopting the experimental procedure laid out by Requeima et al. (2019), we consider an EEG dataset consisting of N = 256 measurements taken over a one second period. Each measurement comprises voltage readings taken by seven electrodes, FZ and F1-F6, positioned on the patient’s scalp (xn ∈ R1, yn ∈ R7). The goal is to predict the final 100 samples for electrodes FZ, F1 and F2 having observed the first 156 samples, as well as all 256 samples for electrodes F3-F6.\nJura The Jura dataset is a geospatial dataset comprised of N = 359 measurements of the topsoil concentrations of three heavy metals — Cadmium Nickel and Zinc — collected from a 14.5km2 region of the Swiss Jura (xn ∈ R2, yn ∈ R3) (Goovaerts, 1997). Adopting the experimental procedure laid out by others (Goovaerts, 1997; Álvarez & Lawrence, 2011; Requeima et al., 2019), the dataset is divided into a training set consisting of Nickel and Zinc measurements for all 359 locations and Cadmium measurements for just 259 locations. 
Conditioned on the observed training set, the goal is to predict the Cadmium measurements at the remaining 100 locations.\nTable 1 compares the performance of the GP-VAE using the three partial inference networks presented in Section 2.4, as well as zero imputation (ZI), with independent GPs (IGP) and the GP autoregressive regression model (GPAR), which, to our knowledge, has the strongest published performance on these datasets. We also give the results for the best performing GP-VAE4 using a non-amortised, or ‘free-form’ (FF), approximate posterior, with model parameters θ kept fixed to the optimum found by the amortised GP-VAE and variational parameters initialised to the output of the optimised inference network. All GP-VAE models use a two- and three-dimensional latent space for EEG and Jura, respectively, with squared exponential (SE) kernels. The results highlight the poor performance of independent GPs relative to multi-output GPs, demonstrating the importance of modelling output dependencies. The GP-VAE achieves impressive SMSE and MAE5 on the EEG and Jura datasets using all partial inference networks except for PointNet. In Appendix F we demonstrate superior performance of the GP-VAE relative to the GPPVAE, which can be attributed to the use of the structured approximate posterior over the mean-field approximate posterior used by the GPPVAE. Importantly, the negligible difference between the results using free-form and amortised approximate posteriors indicates that amortisation is not at the expense of predictive performance.\nWhilst GPAR performs as strongly as the GP-VAE in the small-scale experiments above, it has two key limitations which severely limit the types of applications where it can be used. First, it can only be used with specific patterns of missing data and not when the pattern of missingness is arbitrary. Second, it is not scalable and would require further development to handle the large datasets considered in this paper. In contrast, the SGP-VAE is far more flexible: it handles arbitrary patterns of missingness, and scales to large number of datapoints and tasks. A distinct advantage of the SGPVAE is that it models P outputs with just K latent GPs. This differs from GPAR, which uses P\n4i.e. using IndexNet for EEG and FactorNet for Jura. 5The two different performance metrics are adopted to enable a comparison to the results of Requeima et al..\nGPs. Whilst this is not an issue for the small-scale experiments, it quickly becomes computationally burdensome when P becomes large. The true efficacy of the SGP-VAE is demonstrated in the following three experiments, where the number of datapoints and tasks is large, and the patterns of missingness are random." }, { "heading": "4.3 LARGE-SCALE EEG EXPERIMENT", "text": "We consider an alternative setting to the original small-scale EEG experiment, in which the datasets are formed from T = 60 recordings of length N = 256, each with 64 observed voltage readings (yn ∈ R64). For each recording, we simulated electrode ‘blackouts’ by removing consecutive samples at random. We consider two experiments: in the first, we remove equal 50% of data from both the training and test datasets; in the second, we remove 10% of data from the training dataset and 50% from the test dataset. Both experiments require the partial inference network to generalise to different patterns of missingness, with the latter also requiring generalisation to different quantities of missingness. 
Each model is trained on 30 recordings, with the predictive performance assessed on the remaining 30 recordings. Figure 2 compares the performance of the SGP-VAE with that of independent GPs as the number of inducing points varies, with M = 256 representing use of the GP-VAE. In each case, we use a 10-dimensional latent space with SE kernels. The SGP-VAE using PointNet results in substantially worse performance than the other partial inference networks, achieving an average SMSE and NLL of 1.30 and 4.05 on the first experiment for M = 256. Similarly, using a standard VAE results in poor performance, achieving an average SMSE and NLL of 1.62 and 3.48. These results are excluded from Figure 2 for the sake of readability.\nFor all partial inference networks, the SGP-VAE achieves a significantly better SMSE than independent GPs in both experiments, owing to its ability to model both input and output dependencies. For the first experiment, the performance using FactorNet is noticeably better than using either IndexNet or zero imputation; however, this comes at the cost of a greater computational complexity associated with learning an inference network for each output dimension. Whereas the performance for the SGP-VAE using IndexNet and zero imputation significantly worsens on the second experiment, the performance using FactorNet is comparable to the first experiment. This suggests it is the only partial inference network that is able to accurately quantify the contribution of each output dimension to the latent posterior, enabling it to generalise to different quantities of missing data.\nThe advantages of using a sparse approximation are clear — using M = 128 inducing points results in a slightly worse average SMSE and NLL, yet significantly less computational cost." }, { "heading": "4.4 JAPANESE WEATHER EXPERIMENT", "text": "Finally, we consider a dataset comprised of 731 daily climate reports from 156 Japanese weather stations throughout 1980 and 1981, a total of 114,036 multi-dimensional observations. Weather reports consist of a date and location, including elevation, alongside the day’s maximum, minimum and average temperature, precipitation and snow depth (x(t)n ∈ R4, y(t)n ∈ R5), any number of which is potentially missing. We treat each week as a single task, resulting in T = 105 tasks with\nN = 1092 data points each. The goal is to predict the average temperature for all stations on the middle five days, as illustrated in Figure 3. Each model is trained on all the data available from 1980. For evaluation, we use data from both 1980 and 1981 with additional artificial missingness — the average temperature for the middle five days and a random 25% of minimum and maximum temperature measurements6. Similar to the second large-scale EEG experiment, the test datasets have more missing data than the training datasets. Table 2 compares the performance of the SGPVAE using 100 inducing points to that of a standard VAE using FactorNet and a baseline of mean imputation. All models use a three-dimensional latent space with SE kernels.\nAll models significantly outperform the mean imputation baseline (MI) and are able to generalise inference to the unseen 1981 dataset without any loss in predictive performance. The SGP-VAE achieves better predictive performance than both the standard VAE and independent GPs, showcasing its effectiveness in modelling large spatio-temporal datasets. The SGP-VAE using FactorNet achieves the best predictive performance on both datasets. 
The results indicate that FactorNet is the only partial inference network capable of generalising to different quantities and patterns of missingness, supporting the hypothesis made in Section 2.4." }, { "heading": "5 CONCLUSION", "text": "The SGP-VAE is a scalable approach to training GP-DGMs which combines sparse inducing point methods for GPs and amortisation for DGMs. The approach is ideally suited to spatio-temporal data with missing observations, where it outperforms VAEs and multi-output GPs. Future research directions include generalising the framework to leverage state-space GP formulations for additional scalability and applications to streaming multi-output data." }, { "heading": "A MATHEMATICAL DERIVATIONS", "text": "" }, { "heading": "A.1 OPTIMALITY OF APPROXIMATE LIKELIHOODS", "text": "To simplify notation, we shall consider the case P = 1 and K = 1. Separately, Opper & Archambeau (2009) considered the problem of performing variational inference in a GP for non-Gaussian likelihoods. They consider a multivariate Gaussian approximate posterior, demonstrating that the optimal approximate posterior takes the form\nq(f) = 1\nZ p(f) N∏ n=1 N (fn; gn, vn) , (7)\nrequiring a total of 2N variational parameters ({gn, vn}Nn=1). In this section, we derive a result that generalises this to inducing point approximations, showing that for fixedM the optimal approximate posterior can be represented by max(M(M+1)/2+M, 2N). Following Titsias (2009), we consider an approximate posterior of the form\nq(f) = q(u)p(f\\u|u) (8) where q(u) = N ( u; m̂u, K̂uu ) is constrained to be a multivariate Gaussian with mean m̂u and\ncovariance K̂uu. The ELBO is given by\nLELBO = Eq(f) [log p(y|f)]− KL (q(u) ‖ p(u)) = Eq(u) [ Ep(f |u) [log p(y|f)] ] − KL (q(u) ‖ p(u))\n= N∑ n=1 Eq(u) [ EN(fn; Anu+an, Kfn|u) [log p(yn|fn] ] − KL (q(u) ‖ p(u))\n(9)\nwhere\nAn = KfnuK −1 uu (10)\nan = mfn −KfnuK −1 uum̂u. (11)\nRecall that for a twice-differentiable scalar function h\n∇ΣEN (u; µ, Σ) [h(u)] = EN (u; µ, Σ) [Hh(u)] (12)\nwhere Hh(u) is the Hessian of h at u. Thus, the gradient of the ELBO with respect to K̂uu can be rewritten as\n∇K̂uuLELBO = N∑ n=1 EN(u; m̂u, K̂uu) [Hhn(u)]− 1 2 Kuu + 1 2 K̂uu (13)\nwhere hn(u) = EN(fn; Anu+an, Kfn|u) [log p(yn|fn].\nTo determine an expression for Hhn , we first consider the gradients of hn. Let\nαn(βn) = EN(fn; βn, Kfn|u) [log p(yn|fn)] (14)\nβn(u) = Anu + an. (15)\nThe partial derivative of hn with respect to the jth element of u can be expressed as\n∂hn ∂uj (u) = ∂αn ∂βn (βn(u)) ∂βn ∂uj (u). (16)\nTaking derivatives with respect to the ith element of u gives\n∂2hn ∂uj∂ui (u) = ∂2αn ∂β2n (βn(u)) ∂βn ∂uj (u) ∂βn ∂ui (u) + ∂αn ∂βn (βn(u)) ∂2βn ∂uj∂ui (u). (17)\nThus, the Hessian is given by\nHhn(u) = ∂2αn ∂β2n\n(βn(u))︸ ︷︷ ︸ R\n∇βn(u)︸ ︷︷ ︸ N×1 [∇βn(u)]T︸ ︷︷ ︸ 1×N + ∂αn ∂βn\n(βn(u))︸ ︷︷ ︸ R\nHβn(u)︸ ︷︷ ︸ N×N . (18)\nSince βn(u) = Anu + an, we have ∇βn(u) = An and Hβn(u) = 0. This allows us to write ∇K̂uuLELBO as\n∇K̂uuLELBO = N∑ n=1 EN(u; m̂u, K̂uu) [ ∂2αn ∂β2n (βn(u)) ] AnA T n − 1 2 Kuu + 1 2 K̂uu. (19)\nThe optimal covariance therefore satisfies\nK̂ −1 uu = K −1 uu − 2 N∑ n=1 EN(u; m̂u, K̂uu) [ ∂2αn ∂β2n (βn(u)) ] AnA T n . (20)\nSimilarly, the gradient of the ELBO with respect to m̂u can be written as\n∇m̂uLELBO = N∑ n=1 ∇m̂uEN(u; m̂u, K̂uu) [hn(u)]−K −1 uu(m̂u −mu)\n= N∑ n=1 EN(u; m̂u, K̂uu) [∇hn(u)]−K −1 uu(m̂u −mu)\n(21)\nwhere we have used the fact that for a differentiable scalar function h\n∇µEN (u; µ, Σ) [g(u)] = EN (u; µ, Σ) [∇g(u)] . 
(22)\nUsing equation 16 and βn(u) = Anu + an, we get\n∇hn(u) = ∂αn ∂βn (βn(u))An (23)\ngiving\n∇m̂uLELBO = N∑ n=1 EN(u; m̂u, K̂uu) [ ∂αn ∂βn (βn(u)) ] −K−1uu(m̂u −mu). (24)\nThe optimal mean is therefore\nm̂u = mu − N∑ n=1 EN(u; m̂u, K̂uu) [ ∂αn ∂βn (βn(u)) ] KuuAn. (25)\nEquation 20 and equation 25 show that each nth observation contributes only a rank-1 term to the optimal approximate posterior precision matrix, corresponding to an optimum approximate posterior of the form\nq(f) ∝ p(f) N∏ n=1 N ( KfnuK −1 uuu; gn, vn ) (26)\nwhere\ngn = −EN(u; m̂u, K̂uu) [ ∂αn ∂βn (βn(u)) ] vnK̂uu −1 Kuu + A T nmu (27)\n1/vn = −2EN(u; m̂u, K̂uu) [ ∂2αn ∂β2n (βn(u)) ] . (28)\nFor general likelihoods, these expressions cannot be solved exactly so gn and vn are freely optimised as variational parameters. When N = M , the inducing points are located at the observations and AnA T n is zero everywhere except for the n\nth element of its diagonal we recover the result of Opper & Archambeau (2009). Note the key role of the linearity of each βn in this result - without it Hβn would not necessarily be zero everywhere and the contribution of each nth term could have arbitrary rank." }, { "heading": "A.2 POSTERIOR GAUSSIAN PROCESS", "text": "For the sake of notational convenience, we shall assume K = 1. First, the mean and covariance of q(u) = N ( u; m̂u, K̂uu ) ∝ pθ1(u) ∏Nt n=1 lφl(u; y o n,xn,Z) are given by\nm̂u = KuuΦKufΣ −1 φl,k µφl\nK̂uu = KuuΦKuu (29)\nwhere Φ−1 = Kuu + KufΣ−1φl Kfu. The approximate posterior over some latent function value f∗ is obtained by marginalisation of the joint distribution:\nq(f∗) = ∫ pθ1(f∗|u)q(u)du\n= ∫ N ( f∗; kf∗uK −1 uuu, kf∗f∗ − kf∗uK −1 uukuf∗ ) N ( u; m̂u, K̂uu ) du\n= N ( f∗; kf∗uK −1 uum̂u, kf∗f∗ − kf∗uK −1 uukuf∗ + kf∗uK −1 uuK̂uuK −1 uukuf∗ ) (30)\nSubstituting in equation 29 results in a mean and covariance function of the form\nm̂(x) = kfuK −1 uuΦKufΣ −1 φl,k µφl\nk̂(x,x′) = kff ′ − kfuK−1uukuf ′ + kfuΦkuf ′ . (31)" }, { "heading": "B THE GP-VAE", "text": "As discuss in Section 3, the GP-VAE is described by the structured approximate posterior\nq(f) = 1\nZq(θ, φ) pθ1(f) N∏ n=1 lφl(fn; y o n), (32)\nwhere lφl(fn; y o n) = ∏K k=1N ( fn; µφl(y o n), diag σ 2 φl (yon) ) , and corresponding ELBO\nLELBO = Eq(f)\n[ log\npθ1(f)pθ2(y|f) 1\nZq(θ,φ)pθ1(f)lφl(f ; y)\n] = Eq(f) [ log\npθ2(y|f) lφl(f ; y)\n] + logZq(θ, φ). (33)" }, { "heading": "B.1 TRAINING THE GP-VAE", "text": "The final term in equation 33 has the closed-form expression\nZq(θ, φ) = K∏ k=1 K∑ k=1 logN ( µφl,k; 0, Kfkfk + Σφl,k )︸ ︷︷ ︸ logZqk(θ,φ) . (34)\nwhich can be derived by noting that each Zqk(θ, φ) corresponds to the convolution between two multivariate Gaussians:\nZqk(θ, φ) = ∫ N (fk; 0, Kfkfk)N ( µφl,k − fk; 0, Σφl,k ) dfk. (35)\nSimilarly, a closed-form expression for Eq(f) [lφl(f ; y)] exists:\nEq(f) [log lφl(f ; y)] = K∑ k=1 N∑ n=1 Eq(fnk) [log lφl(fnk; y o n)]\n= K∑ k=1 N∑ n=1 Eq(fnk)\n[ − (fnk − µφl,k(y o n)) 2\n2σ2φl,k(y o n)\n− 1 2 log |2πσ2φl,k(y o n)|\n]\n= K∑ k=1 N∑ n=1 −\n[ Σ̂k ] nn + (µ̂k,n − µφl,k(yon))2\n2σ2φl,k(y o n)\n− 1 2 log |2πσ2φl,k(y o n)|\n= K∑ k=1 N∑ n=1 logN ( µ̂k,n; µφl,k(y o n), σ 2 φl,k (yon) ) −\n[ Σ̂k ] nn\n2σ2φl,k(y o n)\n= K∑ k=1 logN ( µ̂k; µφl,k, Σφl,k ) − N∑ n=1\n[ Σ̂k ] nn\n2σ2φl,k(yn) (36)\nwhere Σ̂k = k̂k ( X,X′ ) and µ̂k = m̂k(X), with\nm̂k(x) = kfkuk (Kukuk + Σφl,k) −1 µφl,k\nk̂k(x) = kfkf ′k − kfkuk (Kukuk + Σφl,k) −1 kukfk .\n(37)\nEq(f) [log pθ2(y|f)] is intractable, hence must be approximated by a Monte Carlo estimate. 
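A minimal sketch of this Monte Carlo estimate is shown below; the decoder interface, the use of the diagonal marginals of q(f), and the single-sample default are assumptions.

```python
import numpy as np

def mc_reconstruction_term(mean_f, var_f, y, log_lik_fn, n_samples=1, rng=None):
    """Monte Carlo estimate of E_q(f)[log p(y | f)].

    mean_f, var_f: marginal mean / variance of q(f), shape (N, K).
    y:             observations, shape (N, P).
    log_lik_fn:    assumed decoder interface mapping (f, y) to the scalar
                   sum_n log p(y_n | f_n).
    Samples are drawn via the reparameterisation f = mean + sqrt(var) * eps,
    so in an autodiff framework gradients flow back to mean_f and var_f.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(mean_f.shape)
        f = mean_f + np.sqrt(var_f) * eps
        total += log_lik_fn(f, y)
    return total / n_samples
```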
Together with the closed-form expressions for the other two terms we can form an unbiased estimate of the ELBO, the gradients of which can be estimated using the reparameterisation trick (Kingma & Welling, 2014)." }, { "heading": "B.2 AN ALTERNATIVE SPARSE APPROXIMATION", "text": "An alternative approach to introducing a sparse GP approximation is directly parameterise the structured approximate posterior at inducing points u:\nq(f) = 1\nZq(θ, φ) pθ1(f) N∏ n=1 lφl(u; y o n,xn,Z) (38)\nwhere lφl(u; y o n,xn,Z), the approximate likelihood, is a fully-factorised Gaussian distribution parameterised by a partial inference network:\nlφl(u; y o n,xn,Z) = K∏ k=1 M∏ m=1 N ( umk; µφ,k(y o n), σ 2 φ,k(y o n) ) . (39)\nIn general, each factor lφl(umk; y o n, zmk,xn) conditions on data at locations different to that of the inducing point. The strength of the dependence between these values is determined by the two input locations themselves. To account for this, we introduce the use of an inference network that, for each observation/inducing point pair (umk,yn), maps from (zmk,xn,y o n) to parameters of the approximate likelihood factor.\nWhilst this approach has the same first order computational complexity as that used by the SGPVAE, having to making forward and backward passes through the inference networkKNM renders it significantly more computationally expensive for even moderately sized datasets. Whereas the approach adopted by the SGP-VAE employs an deterministic transformation of the outputs of the inference network based on the covariance function, this approach can be interpreted as learning an appropriate dependency between input locations. In practice, we found the use of this approach to result in worse predictive performance." }, { "heading": "C MEMORY REQUIREMENTS", "text": "Assuming input locations and inducing point locations are shared across tasks, we require storing {K\nukf (t) k + Kukuk}Kk=1 and Kf (t)k f (t)k in memory, which is O ( KMN +KM2 +N2 ) .\nFor the SGP-VAE, we also require storing φ and instantiating {µ(t)φl,k,Σ (t) φl,k }Kk=1, which\nis O (|φl|+KMD + 2KN). Collectively, this results in the memory requirement O ( KNM +KM2 +N2 + |φl|+KMD + 2KN ) .\nIf we were to employ the same sparse structured approximate posterior, but replace the output of the inference network with free-form variational parameters, the memory requirement is O ( KNM +KM2 +N2 +KMD + 2TKN ) .7 Alternatively, if we were to let q(u)\nto be parameterised by free-form Cholesky factors and means, the memory requirement is O ( KNM +KM2 +N2 +KMD + TKM(M + 1)/2 + TKM ) . Table 3 compares the first order approximations. Importantly, the use of amortisation across tasks stops the memory scaling with the number of tasks." }, { "heading": "D MULTI-OUTPUT GAUSSIAN PROCESSES", "text": "Through consideration of the interchange of input dependencies and likelihood functions, we can shed light on the relationship between the probabilistic model employed by the SGP-VAE and other multi-output GP models. These relationships are summarised in Figure 4.\n7Note we only require evaluating a single K f (t) k f (t) k at each update." 
}, { "heading": "SGP-VAE", "text": "" }, { "heading": "VAE", "text": "Linear Multi-Output Gaussian Processes Replacing the likelihood with a linear likelihood function characterises a family of linear multi-output GPs, defined by a linear transformation of K independent latent GPs:\nf ∼ K∏ k=1 GP ( 0, kθ1,k(x,x ′) )\ny|f ∼ N∏ n=1 N (yn; Wfn, Σ) .\n(40)\nThe family includes Teh et al.’s (2005) semiparametric latent factor model, Yu et al.’s (2009) GP factor analysis (GP-FA) and Bonilla et al.’s (2008) class of multi-task GPs. Notably, removing input dependencies by choosing kθ1,k(x,x\n′) = δ(x,x′) recovers factor analysis, or equivalently, probabilistic principal component analysis (Tipping & Bishop, 1999) when Σ = σ2I. Akin to the relationship between factor analysis and linear multi-output GPs, the probabilistic model employed by standard VAEs can be viewed as a special, instantaneous case of the SGP-VAE’s.\nDeep Gaussian Processes Single hidden layer deep GPs (DGPs) (Damianou & Lawrence, 2013) are characterised by the use of a GP likelihood function, giving rise to the probabilistic model\nf ∼ K∏ k=1 GP ( 0, kθ1,k(x,x ′) )\ny|f ∼ P∏ p=1 GP ( 0, kθ2,p(f(x)f(x ′)) ) (41)\nwhere yn = y(xn). The GP latent variable model (GP-LVM) (Lawrence & Moore, 2007) is the special, instantaneous case of single layered DGPs. Multi-layered DGPs are recovered using a hierarchical latent space with conditional GP priors between each layer." }, { "heading": "E EXPERIMENTAL DETAILS", "text": "Whilst the theory outlined in Section 2 describes a general decoder parameterising both the mean and variance of the likelihood, we experienced difficulty training SGP-VAEs using a learnt variance, especially for high-dimensional observations. Thus, for the experiments detailed in this paper we use a shared variance across all observations. We use the Adam optimiser (Kingma & Ba, 2014) with a constant learning rate of 0.001. Unless stated otherwise, we estimate the gradients of the\nELBO using a single sample and the ELBO itself using 100 samples. The predictive distributions are approximated as Gaussian with means and variances estimated by propagating samples from q(f) through the decoder. For each experiment, we normalise the observations using the means and standard deviations of the data in the training set.\nThe computational complexity of performing variational inference (VI) in the full GP-VAE, per update, is dominated by the O ( KN3 ) cost associated with inverting the set of K N ×N matrices,\n{Kfkfk + Σφl,k}Kk=1. This can quickly become burdensome for even moderately sized datasets. A pragmatic workaround is to use a biased estimate of the ELBO using Ñ < N data points:\nL̃ÑELBO = N\nÑ\n[ Eq(f̃) [ log\npθ2(ỹ|f̃) lφ(f̃ |ỹ)\n] + log Z̃q(θ, φ) ] . (42)\nỹ and f̃ denote the mini-batch of Ñ observations and their corresponding latent variables, respectively. The bias is introduced due to the normalisation constant, which does not satisfy N Ñ E [ log Z̃q(θ, φ) ] = E [logZq(θ, φ)]. Nevertheless, the mini-batch estimator will be a reasonable approximation to the full estimator provided the lengthscale of the GP prior is not too large.8 Mini-batching cannot be used to reduce the O ( KN3 ) cost of performing inference at test time, hence sparse approximations are necessary for large datasets." }, { "heading": "E.1 SMALL-SCALE EEG", "text": "For all GP-VAE models, we use a three-dimensional latent space, each using squared exponential (SE) kernels with lengthscales and scales initialised to 0.1 and 1, respectively. 
All DNNs, except for those in PointNet and IndexNet, use two hidden layers of 20 units and ReLU activation functions. PointNet and IndexNet employ DNNs with a single hidden layer of 20 units and a 20-dimensional intermediate representation. Each model is trained for 3000 epochs using a batch size of 100, with the procedure repeated 15 times. Following (Requeima et al., 2019), the performance of each model is evaluated using the standardised mean squared error (SMSE) and negative log-likelihood (NLL). The mean ± standard deviation of the performance metrics for the 10 iterations with the highest ELBO is reported.9" }, { "heading": "E.2 JURA", "text": "We use a two-dimensional latent space for all GP-VAE models with SE kernels with lengthscales and scales initialised to 1. This permits a fair comparison with other multi-output GP methods which also use two latent dimensions with SE kernels. For all DNNs except for those in IndexNet, we use two hidden layers of 20 units and ReLU activation functions. IndexNet uses DNNs with a single hidden layer of 20 units and a 20-dimensional intermediate representation. Following Goovaerts (1997) and Lawrence (2004), the performance of each model is evaluated using the mean absolute error (MAE) averaged across 10 different initialisations. The 10 different initialisations are identified from a body of 15 as those with the highest training set ELBO. For each initialisation the GP-VAE models are trained for 3000 epochs using a batch size of 100." }, { "heading": "E.3 LARGE-SCALE EEG", "text": "In both experiments, for each trial in the test set we simulate simultaneous electrode ‘blackouts’ by removing any 4 sample period at random with 25% probability. Additionally, we simulate individual electrode ‘blackouts’ by removing any 16 sample period from at random with 50% probability from the training set. For the first experiment, we also remove any 16 sample period at random with 50% probability from the test set. For the second experiment, we remove any 16 sample period at random with 10% probability. All models are trained for 100 epochs, with the procedure repeated five times, and use a 10-dimensional latent space with SE kernels and lengthscales initialised to 1\n8In which case the off-diagonal terms in the covariance matrix will be large making the approximation pθ1(f) = ∏ pθ1(f̃) extremely crude.\n9We found that the GP-VAE occasionally got stuck in very poor local optima. Since the ELBO is calculated on the training set alone, the experimental procedure is still valid.\nand 0.1, respectively. All DNNs, except for those in PointNet and IndexNet, use four hidden layers of 50 units and ReLU activation functions. PointNet and IndexNet employ DNNs with two hidden layers of 50 units and a 50-dimensional intermediate representation." }, { "heading": "E.4 BOUNCING BALL", "text": "To ensure a fair comparison with the SVAE and SIN, we adopt an identical architecture for the inference network and decoder in the original experiment. In particular, we use DNNs with two hidden layers of 50 units and hyperbolic tangent activation functions. Whilst both Johnson et al. and Lin et al. use eight-dimensional latent spaces, we consider a GP-VAE with a one-dimensional latent space and periodic GP kernel. For the more complex experiment, we use a SGP-VAE with fixed inducing points placed every 50 samples. We also increase the number of hidden units in each layer of the DNNs to 256 and use a two-dimensional latent space - one for each ball." 
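The periodic kernel referred to above is commonly written in the following form; this particular parameterisation and the placeholder hyperparameter values are assumptions, since the paper does not state which variant it uses.

```python
import numpy as np

def periodic_kernel(x1, x2, period=1.0, lengthscale=1.0, scale=1.0):
    """Standard periodic kernel for one-dimensional inputs:

        k(x, x') = scale**2 * exp(-2 * sin(pi * |x - x'| / period)**2 / lengthscale**2)

    x1: (N, 1), x2: (M, 1). Returns the (N, M) covariance matrix.
    All hyperparameter values here are placeholders.
    """
    dists = np.abs(x1[:, None, 0] - x2[None, :, 0])
    return scale ** 2 * np.exp(-2.0 * np.sin(np.pi * dists / period) ** 2 / lengthscale ** 2)
```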
}, { "heading": "E.5 WEATHER STATION", "text": "The spatial location of each weather station is determined by its latitude, longitude and elevation above sea level. The rates of missingness in the dataset vary, with 6.3%, 14.0%, 18.9%, 47.3% and 93.2% of values missing for each of the five weather variables, respectively. Alongside the average temperature for the middle five days, we simulate additional missingness from the test datasets by removing 25% of the minimum and maximum temperature values. Each model is trained on the data from 1980 using a single group per update for 50 epochs, with the performance evaluated on the data from both 1980 and 1981 using the root mean squared error (RMSE) and NLL averaged across five runs. We use a three-dimensional latent space with SE kernels and lengthscales initialised to 1. All DNNs, except for those in PointNet and IndexNet, use four hidden layers of 20 units and ReLU activation functions. PointNet and IndexNet employ DNNs with two hidden layers of 20 units and a 20-dimensional intermediate representation. Inducing point locations are initialised using kmeans clustering, and are shared across latent dimensions and groups. The VAE uses FactorNet. We consider independent GPs modelling the seven point time series for each variable and each station, with model parameters shared across groups. No comparison to other sparse GP approaches is made and there is no existing framework for performing approximate inference in sparse GP models conditioned on previously unobserved data." }, { "heading": "F FURTHER EXPERIMENTATION", "text": "" }, { "heading": "F.1 SMALL SCALE EXPERIMENTS", "text": "Table 4 compares the performance of the GP-VAE to that of the GPPVAE, In all cases, FactorNet is used to handle missing data. We emphasise that the GP-VAE and GPPVAE employ identical probabilistic models, with the only difference being the form of the approximate posterior. The superior predictive performance of the GP-VAE can therefore be accredited to the use of the structured approximate posterior as opposed to the mean-field approximate posterior used by the GPPVAE." }, { "heading": "F.2 SYNTHETIC BOUNCING BALL EXPERIMENT", "text": "The original dataset consists of 80 12-dimensional image sequences each of length 50, with the task being to predict the trajectory of the ball given a prefix of a longer sequence. The image sequences are generated at random by uniformly sampling the starting position of the ball whilst keeping the\nbouncing frequency fixed. Figure 5 compares the posterior latent GP and mean of the posterior predictive distribution with the ground truth for a single image sequence using just a single latent dimension. As demonstrated in the more more complex experiment, the GP-VAE is able to recover the ground truth with almost exact precision.\nFollowing Lin et al. (2018), Figure 1a evaluates the τ -steps ahead predictive performance of the GP-VAE using the mean absolute error, defined as\nNtest∑ n=1 T−τ∑ t=1\n1 Ntest(T − τ)d ∥∥y∗n,t+τ − Eq(yn,t+τ |yn,1:t) [yn,t+τ ]∥∥1 (43)\nwhereNtest is the number of test image sequences with T time steps and y∗n,t+τ denotes the noiseless observation at time step t+ τ .\nG PARTIAL INFERENCE NETWORK COMPUTATIONAL GRAPHS" } ]
2020
null
SP:30ceb5d450760e9954ac86f091fb97cb14a2d092
[ "The paper considers the problem of creating spatial memory representations, which play important roles in robotics and are crucial for real-world applications of intelligent agents. The paper proposes an ego-centric representation that stores depth values and features at each pixel in a panorama. Given the relative pose between frames, the representation from the previous frame is transformed via forward warping (using known depth values) to the viewpoint of the current frame. The proposed approach has no learnable parameters. Experiments on a wide range of tasks show that the proposed approach outperforms baselines such as LSTM and NTM." ]
Spatial memory, or the ability to remember and recall specific locations and objects, is central to autonomous agents’ ability to carry out tasks in real environments. However, most existing artificial memory modules are not very adept at storing spatial information. We propose a parameter-free module, Egospheric Spatial Memory (ESM), which encodes the memory in an ego-sphere around the agent, enabling expressive 3D representations. ESM can be trained end-to-end via either imitation or reinforcement learning, and improves both training efficiency and final performance against other memory baselines on both drone and manipulator visuomotor control tasks. The explicit egocentric geometry also enables us to seamlessly combine the learned controller with other non-learned modalities, such as local obstacle avoidance. We further show applications to semantic segmentation on the ScanNet dataset, where ESM naturally combines image-level and map-level inference modalities. Through our broad set of experiments, we show that ESM provides a general computation graph for embodied spatial reasoning, and the module forms a bridge between real-time mapping systems and differentiable memory architectures. Implementation at: https://github.com/ivy-dl/memory.
[ { "affiliations": [], "name": "Daniel Lenton" }, { "affiliations": [], "name": "Stephen James" }, { "affiliations": [], "name": "Ronald Clark" }, { "affiliations": [], "name": "Andrew J. Davison" } ]
[ { "authors": [ "Michael Bloesch", "Jan Czarnowski", "Ronald Clark", "Stefan Leutenegger", "Andrew J Davison" ], "title": "Codeslam—learning a compact, optimisable representation for dense visual slam", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Jake Bruce", "Niko Sünderhauf", "Piotr Mirowski", "Raia Hadsell", "Michael Milford" ], "title": "Learning deployable navigation policies at kilometer scale from a single traversal", "venue": "arXiv preprint arXiv:1807.05211,", "year": 2018 }, { "authors": [ "Neil Burgess" ], "title": "Spatial memory: how egocentric and allocentric combine", "venue": "Trends in cognitive sciences,", "year": 2006 }, { "authors": [ "Cesar Cadena", "Luca Carlone", "Henry Carrillo", "Yasir Latif", "Davide Scaramuzza", "José Neira", "Ian Reid", "John J Leonard" ], "title": "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age", "venue": "IEEE Transactions on robotics,", "year": 2016 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Dzmitry Bahdanau", "Yoshua Bengio" ], "title": "On the properties of neural machine translation: Encoder-decoder approaches", "venue": "arXiv preprint arXiv:1409.1259,", "year": 2014 }, { "authors": [ "Cevahir Cigla", "Roland Brockers", "Larry Matthies" ], "title": "Gaussian mixture models for temporal depth fusion", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Péter Fankhauser", "Michael Bloesch", "Christian Gehring", "Marco Hutter", "Roland Siegwart" ], "title": "Robotcentric elevation mapping with uncertainty estimates", "venue": "In Mobile Service Robotics,", "year": 2014 }, { "authors": [ "Anthony T Fragoso", "Cevahir Cigla", "Roland Brockers", "Larry H Matthies" ], "title": "Dynamically feasible motion planning for micro air vehicles using an egocylinder", "venue": "In Field and Service Robotics,", "year": 2018 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Saurabh Gupta", "James Davidson", "Sergey Levine", "Rahul Sukthankar", "Jitendra Malik" ], "title": "Cognitive mapping and planning for visual navigation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Joao F Henriques", "Andrea Vedaldi" ], "title": "Mapnet: An allocentric spatial memory for mapping environments", "venue": "In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "James R Hinman", "G William Chapman", "Michael E Hasselmo" ], "title": "Neuronal representation of environmental boundaries in egocentric coordinates", "venue": "Nature communications,", "year": 2019 }, { "authors": [ "Sepp Hochreiter" ], "title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions", "venue": "International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems,", "year": 1998 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": 
"Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Stephen James", "Andrew J Davison", "Edward Johns" ], "title": "Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task", "venue": "arXiv preprint arXiv:1707.02267,", "year": 2017 }, { "authors": [ "Stephen James", "Marc Freese", "Andrew J Davison" ], "title": "Pyrep: Bringing v-rep to deep robot learning", "venue": "arXiv preprint arXiv:1906.11176,", "year": 2019 }, { "authors": [ "Stephen James", "Paul Wohlhart", "Mrinal Kalakrishnan", "Dmitry Kalashnikov", "Alex Irpan", "Julian Ibarz", "Sergey Levine", "Raia Hadsell", "Konstantinos Bousmalis" ], "title": "Sim-to-real via sim-to-sim: Dataefficient robotic grasping via randomized-to-canonical adaptation networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Stephen James", "Zicong Ma", "David Rovick Arrojo", "Andrew J. Davison" ], "title": "Rlbench: The robot learning benchmark & learning environment", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Steven Kapturowski", "Georg Ostrovski", "John Quan", "Remi Munos", "Will Dabney" ], "title": "Recurrent experience replay in distributed reinforcement learning", "venue": "In International conference on learning representations,", "year": 2018 }, { "authors": [ "Roberta L Klatzky" ], "title": "Allocentric and egocentric spatial representations: Definitions, distinctions, and interconnections", "venue": "In Spatial cognition,", "year": 1998 }, { "authors": [ "Daniel Lenton", "Fabio Pardo", "Fabian Falck", "Stephen James", "Ronald Clark" ], "title": "Ivy: Templated deep learning for inter-framework portability", "venue": "arXiv preprint arXiv:2102.02886,", "year": 2021 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Chao Liu", "Jinwei Gu", "Kihwan Kim", "Srinivasa G Narasimhan", "Jan Kautz" ], "title": "Neural rgb (r) d sensing: Depth and uncertainty from a video camera", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jan Matas", "Stephen James", "Andrew J Davison" ], "title": "Sim-to-real reinforcement learning for deformable object manipulation", "venue": "arXiv preprint arXiv:1806.07851,", "year": 2018 }, { "authors": [ "John McCormac", "Ankur Handa", "Andrew Davison", "Stefan Leutenegger" ], "title": "Semanticfusion: Dense 3d semantic mapping with convolutional neural networks", "venue": "IEEE International Conference on Robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Piotr Mirowski", "Matt Grimes", "Mateusz Malinowski", "Karl Moritz Hermann", "Keith Anderson", "Denis Teplyashin", "Karen Simonyan", "Andrew Zisserman", "Raia Hadsell" ], "title": "Learning to navigate in cities without a map", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex 
Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Richard A Newcombe", "Shahram Izadi", "Otmar Hilliges", "David Molyneaux", "David Kim", "Andrew J Davison", "Pushmeet Kohi", "Jamie Shotton", "Steve Hodges", "Andrew Fitzgibbon" ], "title": "Kinectfusion: Real-time dense surface mapping and tracking", "venue": "In 2011 10th IEEE International Symposium on Mixed and Augmented Reality,", "year": 2011 }, { "authors": [ "Seoung Wug Oh", "Joon-Young Lee", "Ning Xu", "Seon Joo Kim" ], "title": "Video object segmentation using space-time memory networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Emilio Parisotto", "Ruslan Salakhutdinov" ], "title": "Neural map: Structured memory for deep reinforcement learning", "venue": "arXiv preprint arXiv:1702.08360,", "year": 2017 }, { "authors": [ "Emilio Parisotto", "H Francis Song", "Jack W Rae", "Razvan Pascanu", "Caglar Gulcehre", "Siddhant M Jayakumar", "Max Jaderberg", "Raphael Lopez Kaufman", "Aidan Clark", "Seb Noury" ], "title": "Stabilizing transformers for reinforcement learning", "venue": "arXiv preprint arXiv:1910.06764,", "year": 2019 }, { "authors": [ "Richard Alan Peters", "Kimberly E Hambuchen", "Kazuhiko Kawamura", "D Mitchell Wilkes" ], "title": "The sensory ego-sphere as a short-term memory for humanoids", "venue": "In Proceedings of the IEEE-RAS international conference on humanoid robots,", "year": 2001 }, { "authors": [ "Eric Rohmer", "Surya PN Singh", "Marc Freese" ], "title": "V-rep: A versatile and scalable robot simulation framework", "venue": "In International Conference on Intelligent Robots and Systems", "year": 2013 }, { "authors": [ "Fereshteh Sadeghi", "Alexander Toshev", "Eric Jang", "Sergey Levine" ], "title": "Sim2real view invariant visual servoing by recurrent control", "venue": "arXiv preprint arXiv:1712.07642,", "year": 2017 }, { "authors": [ "Sainbayar Sukhbaatar", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Greg Wayne", "Chia-Chun Hung", "David Amos", "Mehdi Mirza", "Arun Ahuja", "Agnieszka GrabskaBarwinska", "Jack Rae", "Piotr Mirowski", "Joel Z Leibo", "Adam Santoro" ], "title": "Unsupervised predictive memory in a goal-directed agent", "venue": "arXiv preprint arXiv:1803.10760,", "year": 2018 }, { "authors": [ "Thomas Whelan", "Stefan Leutenegger", "R Salas-Moreno", "Ben Glocker", "Andrew Davison" ], "title": "Elasticfusion: Dense slam without a pose graph", "venue": "Robotics: Science and Systems,", "year": 2015 }, { "authors": [ "Caiming Xiong", "Stephen Merity", "Richard Socher" ], "title": "Dynamic memory networks for visual and textual question answering", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Mengmi Zhang", "Keng Teck Ma", "Shih-Cheng Yen", "Joo Hwee Lim", "Qi Zhao", "Jiashi Feng" ], "title": "Egocentric spatial memory", "venue": "In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2018 }, { "authors": [ 
"Shuaifeng Zhi", "Michael Bloesch", "Stefan Leutenegger", "Andrew J Davison" ], "title": "Scenecode: Monocular dense semantic reconstruction using learned encoded scene representations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Huizhong Zhou", "Benjamin Ummenhofer", "Thomas Brox" ], "title": "Deeptam: Deep tracking and mapping", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Yuke Zhu", "Roozbeh Mottaghi", "Eric Kolve", "Joseph J Lim", "Abhinav Gupta", "Li Fei-Fei", "Ali Farhadi" ], "title": "Target-driven visual navigation in indoor scenes using deep reinforcement learning", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Egocentric spatial memory is central to our understanding of spatial reasoning in biology (Klatzky, 1998; Burgess, 2006), where an embodied agent constantly carries with it a local map of its surrounding geometry. Such representations have particular significance for action selection and motor control (Hinman et al., 2019). For robotics and embodied AI, the benefits of a persistent local spatial memory are also clear. Such a system has the potential to run for long periods, and bypass both the memory and runtime complexities of large scale world-centric mapping. Peters et al. (2001) propose an EgoSphere as being a particularly suitable representation for robotics, and more recent works have utilized ego-centric formulations for planar robot mapping (Fankhauser et al., 2014), drone obstacle avoidance (Fragoso et al., 2018) and mono-to-depth (Liu et al., 2019).\nIn parallel with these ego-centric mapping systems, a new paradigm of differentiable memory architectures has arisen, where a memory bank is augmented to a neural network, which can then learn read and write operations (Weston et al., 2014; Graves et al., 2014; Sukhbaatar et al., 2015). When compared to Recurrent Neural Networks (RNNs), the persistent memory circumvents issues of vanishing or exploding gradients, enabling solutions to long-horizon tasks. These have also been applied to visuomotor control and navigation tasks (Wayne et al., 2018), surpassing baselines such as the ubiquitous Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997).\nWe focus on the intersection of these two branches of research, and propose Egospheric Spatial Memory (ESM), a parameter-free module which encodes geometric and semantic information about the scene in an ego-sphere around the agent. To the best of our knowledge, ESM is the first end-to-end trainable egocentric memory with a full panoramic representation, enabling direct encoding of the surrounding scene in a 2.5D image.\nWe also show that by propagating gradients through the ESM computation graph we can learn features to be stored in the memory. We demonstrate the superiority of learning features through the ESM module on both target shape reaching and object segmentation tasks. For other visuomotor control tasks, we show that even without learning features through the module, and instead directly projecting image color values into memory, ESM consistently outperforms other memory baselines.\nThrough these experiments, we show that the applications of our parameter-free ESM module are widespread, where it can either be dropped into existing pipelines as a non-learned module, or end-to-end trained in a larger computation graph, depending on the task requirements." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 MAPPING", "text": "Geometric mapping is a mature field, with many solutions available for constructing high quality maps. Such systems typically maintain an allocentric map, either by projecting points into a global world co-ordinate system (Newcombe et al., 2011; Whelan et al., 2015), or by maintaining a certain number of keyframes in the trajectory history (Zhou et al., 2018; Bloesch et al., 2018). If these systems are to be applied to life-long embodied AI, then strategies are required to effectively select the parts of the map which are useful, and discard the rest from memory (Cadena et al., 2016).\nFor robotics applications, prioritizing geometry in the immediate vicinity is a sensible prior. 
Rather than taking a world-view to map construction, such systems often formulate the mapping problem in a purely ego-centric manner, performing continual re-projection to the newest frame and pose with fixed-sized storage. Unlike allocentric formulations, the memory indexing is then fully coupled to the agent pose, resulting in an ordered representation particularly well suited for downstream egocentric tasks, such as action selection. Peters et al. (2001) outline an EgoSphere memory structure as being suitable for humanoid robotics, with indexing via polar and azimuthal angles. Fankhauser et al. (2014) use ego-centric height maps, and demonstrate on a quadrupedal robot walking over obstacles. Cigla et al. (2017) use per-pixel depth Gaussian Mixture Models (GMMs) to maintain an ego-cylinder of belief around a drone, with applications to collision avoidance (Fragoso et al., 2018). In a different application, Liu et al. (2019) learn to predict depth images from a sequence of RGB images, again using ego reprojections. These systems are all designed to represent only at the level of depth and RGB features. For mapping more expressive implicit features via end-to-end training, a fully differentiable long-horizon computation graph is required. Any computation graph which satisfies this requirement is generally referred to as memory in the neural network literature." }, { "heading": "2.2 MEMORY", "text": "The concept of memory in neural networks is deeply coupled with recurrence. Naive recurrent networks have vanishing and exploding gradient problems (Hochreiter, 1998), which LSTMs (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (GRUs) (Cho et al., 2014) mediate using additive gated structures. More recently, dedicated differentiable memory blocks have become a popular alternative. Weston et al. (2014) applied Memory Networks (MemNN) to question answering, using hard read-writes and separate training of components. Graves et al. (2014) and Sukhbaatar et al. (2015) instead made the read and writes ‘soft’ with the proposal of Neural Turing Machines (NTM) and End-to-End Memory Networks (MemN2N) respectively, enabling joint training with the controller. Other works have since conditioned dynamic memory on images, for tasks such as visual question answering (Xiong et al., 2016) and object segmentation (Oh et al., 2019). Another distinct but closely related approach is self attention (Vaswani et al., 2017). These approaches also use key-based content retrieval, but do so on a history of previous observations with adjacent connectivity. Despite the lack of geometric inductive bias, recent results demonstrate the amenability of general memory (Wayne et al., 2018) and attention (Parisotto et al., 2019) to visuomotor control and navigation tasks.\nOther authors have explored the intersection of network memory and spatial mapping for navigation, but have generally been limited to 2D aerial-view maps, focusing on planar navigation tasks. Gupta et al. (2017) used an implicit ego-centric memory which was updated with warping and confidence maps for discrete action navigation problems. Parisotto & Salakhutdinov (2017) proposed a similar setup, but used dedicated learned read and write operations for updates, and tested on simulated Doom environments. Without consideration for action selection, Henriques & Vedaldi (2018) proposed a similar system, but instead used an allocentric formulation, and tested on free-form trajectories of real images. Zhang et al. 
(2018) also propose a similar system, but with the inclusion of loop closure. Our memory instead focuses on local perception, with the ability to represent detailed 3D geometry in all directions around the agent. The benefits of our module are complementary to existing 2D methods, which instead focus on occlusion-aware planar understanding suitable for navigation." }, { "heading": "3 METHOD", "text": "In this section, we describe our main contribution, the egospheric spatial memory (ESM) module, shown in Figure 1. The module operates as an Extended Kalman Filter (EKF), with an egosphere image µt ∈ Rhs×ws×(2+1+n) and its diagonal covariance Σt ∈ Rhs×ws×(1+n) representing the state. The egosphere image consists of 2 channels for the polar and azimuthal angles, 1 for radial depth, and n for encoded features. The angles are not included in the covariance, as their values are implicit in the egosphere image pixel indices. The covariance only represents the uncertainty in depth and features at these fixed equidistant indices, and diagonal covariance is assumed due to the large state size of the images. Image measurements are assumed to come from projective depth cameras, which similarly store 1 channel for depth and n for encoded features. We also assume incremental agent pose measurements ut ∈ R6 with covariance Σut ∈ R6×6 are available, in the form of a translation and rotation vector. The algorithm overview is presented in Algorithm 1.\nFinally, the update step takes our state prediction µ̄t, Σ̄t and state observation µ̂t, Σ̂t, and fuses them to produce our new state belief µt,Σt. We spend the remainder of this section explaining the form of the constituent functions. All functions in Algorithm 1 involve re-projections across different image frames, using forward warping. Functions fm, Fm, fo and Fo are therefore all built using the same core functions. While the re-projections could be solved using a typical rendering pipeline of mesh construction followed by rasterization, we instead choose a simpler approach and directly quantize the pixel projections with variance-based image smoothing to fill in quantization holes. An overview of the projection and quantization operations for a single ESM update step is shown in Fig. 1." }, { "heading": "3.1 FORWARD WARPING", "text": "Forward warping projects ordered equidistant homogeneous pixel co-ordinates pcf1 from frame f1 to non-ordered non-equidistant homogeneous pixel co-ordinates p̃cf2 in frame f2. We use µ̃f2 = {φ̃f2, θ̃f2, d̃f2, ẽf2} to denote the loss of ordering following projection from µf1 = {φf1, θf2, df1, ef2}, where φ, θ, d and e represent polar angles, azimuthal angles, depth and encoded features respectively. We only consider warping from projective to omni cameras, which corresponds\nto functions fo, Fo, but the omni-to-omni case as in fm, Fm is identical except with the inclusion of another polar co-ordinate transformation.\nThe encoded features are assumed constant during projection ẽf2 = ef1. For depth, we must transform the values to the new frame in polar co-ordinates, which is a composition of a linear transformation and non-linear polar conversion. 
Using the camera intrinsic matrix K1, the full projection is composed of a scalar multiplication with homogeneous pixel co-ordinates pcf1, transformation by camera inverse matrix K−11 and frame-to-frame T12 matrices, and polar conversion fp:\n{φ̃f2, θ̃f2, d̃f2} = fp(T12K−11 [pcf1 df1]) (1)\nCombined, this provides us with both the forward warped image µ̃f2 = {φ̃f2, θ̃f2, d̃f2, ẽf2}, and the newly projected homogeneous pixel co-ordinates p̃cf2 = {kpprφ̃f2, kppr θ̃f2, 1}, where kppr denotes the pixels-per-radian resolution constant. The variances are also projected using the full analytic Jacobians, which are efficiently implemented as tensor operations, avoiding costly autograd usage.\nˆ̃Σ2 = JV V1J T V + JPP12J T P (2)" }, { "heading": "3.2 QUANTIZATION, FUSION AND SMOOTHING", "text": "Following projection, we first quantize the floating point pixel coordinates p̃cf2 into integer pixel co-ordinates pcf2. This in general leads to quantization holes and duplicates. The duplicates are handled with a variance conditioned depth buffer, such that the closest projected depth is used, provided that it’s variance is lower than a set threshold. This in general prevents highly uncertain close depth values from overwriting highly certain far values. We then perform per pixel fusion based on lines 6 and 7 in Algorithm 1 provided the depths fall within a set relative threshold, otherwise the minimum depth with sufficiently low variance is taken. This again acts as a depth buffer.\nFinally, we perform variance based image smoothing, whereby we treat each N ×N image patch (µk,l)k∈{1,..,N},l∈{1,..,N} as a collection of independent measurements of the central pixel, and combine their variance values based on central limit theory, resulting in smoothed values for each pixel in the image µi,j . Although we use this to update the mean belief, we do not smooth the variance values, meaning projection holes remain at prior variance. This prevents the smoothing from distorting our belief during subsequent projections, and makes the smoothing inherently local to the current frame only. The smoothing formula is as follows, with variance here denoted as σ2:\nµi,j =\n∑ k ∑ l µk,l · σ\n−2 k,l∑\nk ∑ l σ −2 k,l\n(3)\nGiven that the quantization is a discrete operation, we cannot compute it’s analytic jacobian for uncertainty propagation. We therefore approximate the added quantization uncertainty using the numerical pixel gradients of the newly smoothed image Gi,j , and assume additive noise proportional to the x and y quantization distances ∆pci,j :\nΣi,j = Σ̃i,j +Gi,j∆pci,j (4)" }, { "heading": "3.3 NEURAL NETWORK INTEGRATION", "text": "The ESM module can be integrated anywhere into a wider CNN stack, forming an Egospheric Spatial Memory Network (ESMN). Throughout this paper we consider two variants, ESMN and ESMN-RGB, see Figure 2. ESMN-RGB is a special case of ESMN, where RGB features are directly projected into memory, while ESMN projects CNN encoded features into memory. The inclusion of polar angles, azimuthal angles and depth means the full relative polar coordinates are explicitly represented for each pixel in memory. Although the formulation described in Algorithm 1 and Fig 1 allows for m vision sensors, the experiments in this paper all involve only a single acquiring sensor, meaning m = 1. We also only consider cases with constant variance in the acquired images Vt = kvar, and so we omit the variance images from the ESM input in Fig 2 for simplicity. 
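To make the geometry of this update concrete, the sketch below implements the projective-to-omni forward warp of Equation 1 for a batch of pixels; the polar-angle convention chosen for fp and the array shapes are assumptions, since the paper does not spell out its exact convention.

```python
import numpy as np

def forward_warp_to_polar(pixels_hom, depth, K1, T12):
    """Projective-to-omni forward warp of Equation 1.

    pixels_hom: (N, 3) homogeneous pixel coordinates [u, v, 1] in frame f1.
    depth:      (N,) depths d_f1 at those pixels.
    K1:         (3, 3) intrinsic matrix of the acquiring camera.
    T12:        (4, 4) homogeneous transform from frame f1 to frame f2.

    Returns the polar angle, azimuthal angle and radial depth in frame f2.
    The convention below (polar angle from the +z axis, azimuth via atan2)
    is an assumption about the form of f_p.
    """
    # back-project to 3D points in frame f1: X = K1^{-1} (d * [u, v, 1])
    pts_f1 = (np.linalg.inv(K1) @ (pixels_hom * depth[:, None]).T).T
    # transform the points into frame f2
    pts_f1_h = np.concatenate([pts_f1, np.ones((pts_f1.shape[0], 1))], axis=1)
    pts_f2 = (T12 @ pts_f1_h.T).T[:, :3]
    # non-linear polar conversion f_p
    x, y, z = pts_f2[:, 0], pts_f2[:, 1], pts_f2[:, 2]
    radial = np.linalg.norm(pts_f2, axis=1)
    polar = np.arccos(np.clip(z / np.maximum(radial, 1e-12), -1.0, 1.0))
    azimuth = np.arctan2(y, x)
    return polar, azimuth, radial
```

Scaling the returned angles by the pixels-per-radian constant kppr then gives the projected homogeneous pixel coordinates p̃cf2 used in the quantization step.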
For baseline approaches, we compute an image of camera-relative coordinates via K−1, and then concatenate this to the RGB image along with the tiled incremental poses before input to the networks. All values are normalized to 0− 1 before passing to convolutions, based on the permitted range for each channel." }, { "heading": "4 EXPERIMENTS", "text": "The goal of our experiments is to show the wide applicability of ESM to different embodied 3D learning tasks. We test two different applications:\n1. Image-to-action learning for multi-DOF control (Sec 4.1). Here we consider drone and robot manipulator target reacher tasks using either ego-centric or scene-centric cameras. We then assess the ability for ESMN policies to generalize between these different camera modalities, and assess the utility of the ESM geometry for obstacle avoidance. We train policies both using imitation learning (IL) and reinforcement learning (RL).\n2. Object segmentation (Sec 4.2). Here we explore the task of constructing a semantic map, and the effect of changing the ESM module location in the computation graph on performance." }, { "heading": "4.1 MULTI-DOF VISUOMOTOR CONTROL", "text": "While ego-centric cameras are typically used when learning to navigate planar scenes from images (Jaderberg et al., 2016; Zhu et al., 2017; Gupta et al., 2017; Parisotto & Salakhutdinov, 2017), static scene-centric cameras are the de facto when learning multi-DOF controllers for robot manipulators (Levine et al., 2016; James et al., 2017; Matas et al., 2018; James et al., 2019b). We consider the more challenging and less explored setup of learning multi-DOF visuomotor controllers from ego-centric cameras, and also from moving scene-centric cameras. LSTMs are the de facto memory architecture in the RL literature (Jaderberg et al., 2016; Espeholt et al., 2018; Kapturowski et al., 2018; Mirowski et al., 2018; Bruce et al., 2018), making this a suitable baseline. NTMs represent another suitable baseline, which have outperformed LSTMs on visual navigation tasks (Wayne et al., 2018). Many other works exist which outperform LSTMs for planar navigation in 2D maze-like environments (Gupta et al., 2017; Parisotto & Salakhutdinov, 2017; Henriques & Vedaldi, 2018), but the top-down representation means these methods are not readily applicable to our multi-DOF control tasks. LSTM and NTM are therefore selected as competitive baselines for comparison." }, { "heading": "4.1.1 IMITATION LEARNING", "text": "For our imitation learning experiments, we test the utility of the ESM module on two simulated visual reacher tasks, which we refer to as Drone Reacher (DR) and Manipulator Reacher (MR). Both are implemented using the CoppeliaSim robot simulator (Rohmer et al., 2013), and its Python extension PyRep (James et al., 2019a). We implement DR ourselves, while MR is a modification of the reacher task in RLBench (James et al., 2020). Both tasks consist of 3 targets placed randomly in a simulated arena, and colors are newly randomized for each episode. The targets consist of a cylinder, sphere, and \"star\", see Figure 3.\nIn both tasks, the target locations remain fixed for the duration of an episode, and the agent must continually navigate to newly specified targets, reaching as many as possible in a fixed time frame of 100 steps. The targets are specified to the agent either as RGB color values or shape class id, depending on the experiment. 
The agent does not know in advance which target will next be specified, meaning a memory of all targets and their location in the scene must be maintained for the full duration of an episode. Both environments have a single bodyfixed camera, as shown in Figure 3, and also an external camera with freeform motion, which we use separately for different experiments.\nFor training, we generate an offline dataset of 100k 16-step sequences from random motions for both environments, and train the agents using imitation learning from known expert actions. Action spaces of joint velocities q̇ ∈ R7 and cartesian velocities ẋ ∈ R6 are used for MR and DR respectively. Expert translations move the end-effector or drone directly towards the target, and expert rotations rotate the egocentric camera towards the target via shortest rotation. Expert joint velocities are calculated for linear end-effector motion via the manipulator Jacobian. For all experiments, we compare to baselines of single-frame, dual-stacked LSTM with and without spatial auxiliary losses, and NTM. We also compare against a network trained on partial oracle omni-directional images, masked at unobserved pixels, which we refer to as Partial-Oracle-Omni (PO2), as well as random and expert policies. PO2 cannot see regions where the monocular camera has not looked, but it maintains a pixel-perfect memory of anywhere it has looked. Full details of the training setups are provided in Appendix A.1. The results for all experiments are presented in Table 1.\nEgo-Centric Observations: In this configuration we take observations from body-mounted cameras. We can see in Table 1 that for both DR and MR, our module significantly outperforms other memory baselines, which do not explicitly incorporate geometric inductive bias. Clearly, the baselines have difficulty in optimally interpreting the stream of incremental pose measurements and depth. In contrast, ESM by design stores the encoded features in memory with meaningful indexing. The ESM structure ensures that the encoded features for each pixel are aligned with the associated relative polar translation, represented as an additional feature in memory. When fed to the post-ESM convolutions, action selection can then in principle be simplified to target feature matching, reading the associated relative translations, and then transforming to the required action space. A collection of short sequences of the features in memory for the various tasks are presented in Figure 4, with (a), (b) and (d) coming from egocentric observations. In all three cases we see the agent reach one target by the third frame, before re-orienting to reach the next.\nWe also observe that ESMN-RGB performs well when the network is conditioned on target color, but fails when conditioned on target shape id. This is to be expected, as the ability to discern shape from the memory is strongly influenced by the ESM resolution, quantization holes, and angular distortion. For example, the \"star\" shape in Figure 4 (a) is not apparent until t5. However, ESMN is able to succeed, and starts motion towards this star at t3. The pre-ESM convolutional encoder enables ESMN to store useful encoded features in the ESM module from monocular images, within which the shape was discernible. Figure 4 (a) shows the 3 most dominant ESM feature channels projected to RGB.\nScene-Centric Observations: Here we explore the ability of ESM to generalize to unseen camera poses and motion, from cameras external to the agent. 
The poses of these cameras are randomized for each episode during training, and follow random freeform rotations, with a bias to face towards the centre of the scene, and linear translations. Again, we see that the baselines fail to learn successful policies, while ESM-augmented networks are able to solve the task, see Table 1. The memories in these tasks take on a different profile, as can be seen in Fig 4 (c) and (e). While the memories from egocentric observations always contain information in the memory image centre, where the most recent monocular frame projects with high density, this is not the case for projections from arbitrarily positioned cameras which can move far from the agent, resulting in sparse projections into memory. The targets in Fig 4 (e) are all represented by only 1 or 2 pixels. The large apparent area in memory is a result of the variance-based smoothing, where the low-variance colored target pixels are surrounded by high-variance unobserved pixels in the ego-sphere.\nObstacle Avoidance: To further demonstrate the benefits of a local spatial geometric memory, we augment the standard drone reacher task with obstacles, see Figure 5. Rather than learning the avoidance in the policy, we exploit the interpretable geometric structure in ESM, and instead augment the policy output with a local avoidance component. We then compare targets reached and collisions for different avoidance baselines, and test these avoidance strategies on random, ESMN-RGB and expert target reacher policies. We see that the ESM geometry enables superior avoidance over using the most recent depth frame alone. The obstacle avoidance results are presented in Table 2, and further details of the experiment are presented in Appendix A.2.\nCamera Generalization: We now explore the extent to which policies trained from egocentric observations can generalize to cameras moving freely in the scene, and vice-versa. The results of these transfer learning experiments are presented in Table 3. Rows not labelled “Transferred” are taken directly from Table 1, and repeated for clarity. Example image trajectories for both egocentric and free-form observations are presented in Figure 6. The trained networks were not modified in any way, with no further training or fine-tuning applied before evaluation on the new image modality." }, { "heading": "4.1.2 REINFORCEMENT LEARNING", "text": "Assuming expert actions in partially observable (PO) environments is inherently limited. It is not necessarily true that the best action always rotates the camera directly to the next target for example. In general, for finding optimal policies in PO environments, methods such as reinforcement learning (RL) must be used. We therefore train both ESM networks and all the baselines on a simpler variant of the MR-Ego-Color task via DQN (Mnih et al., 2015). The manipulator must reach red, blue and then yellow spherical targets from egocentric observations, after which the episode terminates. We refer to this variant as MR-Seq-Ego-Color, due to the sequential nature. The only other difference to MR is that MR-Seq uses 128× 128 images as opposed to 32× 32. The ESM-integrated networks again outperform all baselines, learning to reach all three targets, while the baseline policies all only succeed in reaching one. Full details of the RL setup and learning curves are given in Appendix A.3." }, { "heading": "4.2 OBJECT SEGMENTATION", "text": "We now explore the suitability of the ESM module for object segmentation. 
One approach is to perform image-level segmentation in individual monocular frames, and then perform probabilistic fusion when projecting into the map (McCormac et al., 2017). We refer to this approach as Mono. Another approach is to first construct an RGB map, and then pass this map as input to a network. This has the benefit of wider context, but lower resolution is necessary to store a large map in memory, meaning details can be lost. ESMN-RGB adopts this approach. Another approach is to combine monocular predictions with multi-view optimization to gain the benefits of wider surrounding context as in (Zhi et al., 2019). Similarly, the ESMN architecture is able to combine monocular inference with\nthe wider map context, but does so by constructing a network with both image-level and map-level convolutions. These ESM variants adopt the same broad architectures as shown in Fig 2, with the full networks specified in Appendix A.4. We do not attempt to quote state-of-the-art results, but rather to further demonstrate the wide applications of the ESM module, and to explore the effect of placing the ESM module at different locations in the convolutional stack. We evaluate segmentation accuracy based on the predictions projected to the ego-centric map. With fixed network capacity between methods, we see that ESMN outperforms both baselines, see Table 4 for the results, and Figure 7 for example predictions in a ScanNet reconstruction. Further details are given in Appendix A.4." }, { "heading": "5 CONCLUSION", "text": "Through a diverse set of demonstrations, we have shown that ESM represents a widely applicable computation graph and trainable module for tasks requiring general spatial reasoning. When compared to other memory baselines for image-to-action learning, our module outperforms these dramatically when learning both from ego-centric and scene-centric images. One weakness of our method is that is assumes the availability of both incremental pose measurements of all scene cameras, and depth measurements, which is not a constraint of the other memory baselines. However, with the ever increasing ubiquity of commercial depth sensors, and with there being plentiful streams for incremental pose measurements including visual odometry, robot kinematics, and inertial sensors, we argue that such measurements are likely to be available in any real-world application of embodied AI. We leave it to future work to investigate the extent to which ESM performance deteriorates with highly uncertain real-world measurements, but with full uncertainty propagation and probabilistic per pixel fusion, ESM is well suited for accumulating noisy measurements in a principled manner." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 MULTI-DOF IMITATION LEARNING", "text": "" }, { "heading": "A.1.1 OFFLINE DATASETS", "text": "The image sequences for the offline datasets are captured following random motion of the agent in both the DR and MR tasks, but known expert actions for each of the three possible targets in the scene are stored at every timestep. The drone reacher is initialized in random locations at the start of each episode, whereas the manipulator reacher is always started in the same robot configuration overlooking the workspace, as in the original RLBench reacher task.\nFor the scene-centric acquisition, we instantiate three separate scene-centric cameras in the scene. 
In order to maximise variation in the dataset to encourage network generalization to arbitrary motions at test-time, we reset the pose of each of these scene-centric cameras at every step of the episode, rather than having each camera follow smooth motions. Each new random pose has a rotational bias to face towards the scene-centre, to ensure objects are likely to be seen frequently. By resetting the cameras poses on every time-step, we encourage the networks to learn to make sense of the pose information given to the network, rather than learning policies which fully rely on smoothly and slowly varying images.\nBoth tasks use ego-centric cameras with a wider field of view (FOV) than the scene-centric cameras. This is a common choice in robotics, where wide angle perception is especially necessary for bodymounted cameras. RLBench by default uses ego-centric FOV of 60 degrees and scene-centric FOV of 40 degrees, and we use the same values for our RLBench-derived MR task. For the drone reacher task, we use ego-centric FOV of 90 degrees and scene-centric FOV of 55 degrees, to enable all methods to more quickly explore the perceptual ego-sphere.\nFor the manipulator reacher dataset, we also store a robot mask image, which shows the pixels corresponding to the robot for all ego-centric and scene-centric images acquired. Known robot forward kinematics are used for generating the masking image. All images in the offline dataset are 32× 32 resolution." }, { "heading": "A.1.2 TRAINING", "text": "To maximize the diversity from the offline datasets, a new target is randomly chosen from each of the three possible targets for each successive step in the unrolled time dimension in the training batch. This ensures maximum variation in sequences at train-time, despite a relatively small number of 100k 16-frame sequences stored in the offline dataset. This also strongly encourages each of the networks to learn to remember the location of every target seen so far, because any target can effectively be requested from the training loss at any time.\nSimilarly, for the scene-centric cameras we randomly choose one of the three scene cameras at each successive time-step in the unrolled time dimension for maximum variation. Again this forces the networks to make use of the camera pose information to make sense of the scene, and prevents overfitting on particular repeated sequences in the training set, instead encouraging generalization to fully arbitrary motions. For these experiments, the baseline methods of Mono, LSTM, LSTM-Aux and NTM also receive the full absolute camera pose at each step rather than just incremental poses received by ESM, as we found this to improve the performance of the baselines.\nFor training the manipulator reacher policies, we additionally use the robot mask images to set high variance pixel values before feeding to the ESM module. This prevents the motion of the robot from breaking the static-scene assumption adopted by ESM during re-projections. We also provide the mask image as input to the baselines.\nAll networks use a batch size of 16, an unroll size of 16, and are trained for 250k steps using an ADAM optimizer with 1e−4 learning rate. None of the memory baselines cache the the internal state between training batches, and so the networks must learn to utilize the memory during the 16-frame unroll. 16 frames is on average enough steps to reach all 3 of the targets for both tasks." 
}, { "heading": "A.1.3 NETWORK ARCHITECTURES", "text": "The network architectures used in the imitation learning experiments are provided in Fig 8. Both LSTM baselines use dual stacked architectures with hidden and cell state sizes of 1024. For NTM we use a similar variant to that used by Wayne et al. (2018), namely, we use sequential writing to the memory, and retroactive updates. Regarding the 16-frame unroll, we again emphasize that 16 steps is on average enough time to reach all targets once. In order to encourage generalization to longer sequences than only 16-steps, we limit the writable memory size to 10, and track the usage of these 10 memory cells with a usage indicator such that subsequent writes can preferentially overwrite the least used of the 10 cells. This again is the same approach used by Wayne et al. (2018), which is one of very few works to successfully apply NTM-style architectures to image-to-action domains. It’s important to note that the use of retroactive updates makes the total memory size actually 20, as half of the cells are always reserved for the retroactive memory updates. Regarding image padding at the borders for input to the convolutions, the Mono and LSTM/NTM baselines use standard zero padding, whereas ESMN-RGB and ESMN pad the outer borders with the wrapped omni-directional image." }, { "heading": "A.1.4 AUXILIARY LOSSES", "text": "Motivated by the fact that many successful applications of LSTMs to image-to-actions learning involve the use of spatial auxiliary losses (Jaderberg et al., 2016; James et al., 2017; Sadeghi et al., 2017; Mirowski et al., 2018), we also compare to an LSTM which uses two such auxiliary proposals, namely the attention loss proposed in (Sadeghi et al., 2017) and a 3-dimensional Euler-based variant\nof the heading loss proposed in (Mirowski et al., 2018), which itself only considers 1D rotations normal to the plane of navigation. Our heading loss does not compute the 1D rotational offset from North, as this is not detectable from the image input. Instead, the networks are trained to predict the 3D Euler offset from the orientation of the first frame in the sequence. The modified LSTM network architecture is presented in Fig 9, and example images and classification targets for the auxiliary attention loss are presented in Fig 10. We emphasize that we did not attempt to tune these auxiliary losses, and applied them unmodified to the total loss function, taking the mean of the cross entropy loss for each, and linearly scaling so that the total auxiliary loss is roughly the same magnitude as the imitation loss at the start of training. Tuning auxiliary losses on different tasks is known to be challenging, and the losses can worsen performance without time-consuming manual tuning, as evidenced in the performance of the UNREAL agent (Jaderberg et al., 2016) compared to a vanilla RL-LSTM network demonstrated in (Wayne et al., 2018). We reproduce this general finding, and see that the untuned auxiliary losses do not improve performance on our reacher tasks. To further investigate the failure mechanism, we plot the two auxiliary losses on the validation set during training for each task in Fig 11. We find that the heading loss over-fits on the training set in all tasks, without learning any useful notion of incremental agent orientation. This is particularly evidenced in the DR tasks, which start each sequence with random agent orientations. 
In contrast, predicting orientation relative to the first frame on the MR task is much simpler because the starting pose is always constant in the scene, and so cues for the relative orientation are available from individual frames. This is why we observe a lower heading loss for the MR task variants in Fig 11. We do however still observe overfitting in the MR task. This overfitting on all tasks helps to explain why LSTM-Aux performs worse than the vanilla LSTM baseline for some of the tasks in Table 1. In contrast, the ESM module embeds strong spatial inductive bias into the computation graph itself, requires no tuning at all, and consistently leads to successful policies on the different tasks, with no sign of overfitting on any of the datasets, as we further discuss in Section A.1.5." }, { "heading": "A.1.5 FURTHER DISCUSSION OF RESULTS", "text": "The losses for each network evaluated on the training set and validation set during the course of training are presented in Fig 12. We first consider the results for the drone reacher task. Firstly, we can clearly see from the DR-ego-rgb and DR-freeform-rgb tasks that the baselines struggle to interpret the stream of incremental pose measurements and depth, in order to select optimal actions in the training set, and this is replicated in the validation set, and in the final task performance in Tab 1. We can also see that ESMN is able to achieve lower training and validation losses than ESMN-RGB when conditioned on shape in the DR-ego-shape and DR-freeform-shape tasks, and also expectedly achieves higher policy performance, shown in in Tab 1. What we also observe is that the baselines have a higher propensity to over-fit on training data. Both the LSTM and NTM baselines achieve lower training set error than ESMN on the DR-ego-shape task, but not lower validation error. In contrast, all curves for ESM-integrated networks are very similar between the training and validation set. The ESM module by design performs principled spatial computation, and so these networks are inherently much more robust to overfitting.\nLooking to the manipulator reacher task, we first notice that the LSTM and NTM baselines are actually able to achieve lower losses than the ESM-integrated networks on both the training set and validation set for the MR-ego-rgb and MRego-shape tasks. However, this does not translate to higher policy performance in Table 1. The reason for this is that the RLBench\nreacher task always initializes the robot in the same configuration, and so the diversity in the offline dataset is less than that of the drone reacher offline dataset. The scope of possible robot configurations in each 16-step window in the dataset is more limited. In essence, the baselines are achieving well in both training and validation sets as a result of overfitting to the limited data distributions observed. What these curves again highlight is the strong generalization power of ESM-integrated networks. Despite seeing relatively limited robot configurations in the dataset, the ESM policies\ndo not overfit on these, and are still able to use this data to learn general policies which succeed from unseen out-of-distribution images at test-time. We also again observe the same superiority of ESMN over ESMN-RGB when conditioned on shape input in the training and validation losses for the MR-ego-shape task.\nA final observation is that all methods fail to perform well on the MR-freeform-shape task. 
We investigated this, and the weak performance is a combined result of low-resolution 32× 32 images acquired in the training set and the large distance between the scene-centric cameras and the targets in the scene. The shapes are often difficult to discern from the monocular images acquired, and so little information is available for the methods to successfully learn a policy. We expect that with higher resolution images, or with an average lower distance between the scene-cameras and workspace in the offline dataset, we would again observe the ESM superiority observed for all other tasks.\nA.1.6 IMPLICIT FEATURE ANALYSIS\nHere we briefly explore the nature of the the features which the end-to-end ESM module learns to store in memory for the different reacher tasks. For each task, we perform a Principal Component Analysis (PCA) on the features encoded by the pre-ESM encoders in the ESMN networks. We compute the principal components (PCs) using encoded features from all monocular images in the training dataset. We present example activations for each of the 6 principal components for a sample of monocular images taken from each of the shape conditioned task variations in in Fig 13, with the most dominate principal components shown on the left in green, going through to the least dominant principal component on the right in purple. Each principal component is projected to a different colorspace for better visualization, with plus or minus one standard deviation of the principal component mapping to the full color-space. Lighter colors correspond to higher PC activations.\nWe can see that most dominant PC (shown in green) for the drone reacher tasks predominantly activate for the background, and the third PC (blue) appears to activate most strongly for edges. The principal components also behave similarly on the MR-Ego-Shape task. However, on the MR-Freeform-Shape task, which neither ESMN nor any of the baselines are able to succeed on, the first PC appears to activate strongly on both the arm and the target shapes.\nThe main conclusion which we can draw from Fig 13 is that the pre-ESM encoder does not directly encode shape classes as might be expected. Instead, the encoder learns to store other lower level features into ESM. However, as evidenced in the results in Table 1, the combination of these lower level features in ESM is clearly sufficient for the post-ESM convolutions to infer the shape id for selecting the correct actions in the policy, at least by using a collection of the encoded features within a small receptive field, which is not possible when using pure RGB features." }, { "heading": "A.2 OBSTACLE AVOIDANCE", "text": "For this experiment, we increase the drone reacher environment by 2× in all directions, resulting in an 8× increase in volume. We then also add 20 spherical obstacles into the scene with radius r = 0.1m. For the avoidance, we consider a bubble around the agent with radius R = 0.2, and flag a collision whenever any part of an obstacle enters this bubble. Given the closest depth measurement available dclosest, the avoidance algorithm simply computes an avoidant velocity vector va whose magnitude is inversely proportional to the distance from collision, clipped to the maximum velocity |v|max. Equation 5 shows the calculation for the avoidance vector magnitude. 
We run the avoidance controller at 10× the rate of the ESM updates.\n|va| = min\n[ 10−3\nmax [dclosest −R, 10−12]2 , |v|max\n] (5)\nIn order to prevent avoidant motion away from the targets to reach, we retrain the ESMN-RGB networks on the drone reacher task, but we train the network to also predict the full relative target location as an additional auxiliary loss. When evaluating on the obstacle avoidance task, we prevent depth values within a fixed distance of this predicted target location from influencing the obstacle avoidance. This has the negative effect of causing extra collisions when the agent erroneously predicts that the target is close, but it enables the agent to approach and reach the target without being pushed away by the avoidance algorithm. Regarding the performance against the baseline, we re-iterate that all monocular images have a large field-of-view of 90 degrees, and yet we still observe significant reductions in collisions when using the full ESM geometry for avoidance, see Tab 2." }, { "heading": "A.3 MULTI-DOF REINFORCEMENT LEARNING", "text": "" }, { "heading": "A.3.1 TRAINING", "text": "For the reinforcement learning experiment, we train both ESMN and ESMN-RGB as well as all baselines on a similar sequential target reacher task as defined in Section 4.1 via DQN (Mnih et al., 2015), where the manipulator must reach red, blue and then yellow targets from egocentric observations. We use (128 × 128) images in this experiment rather than (32 × 32) as used in the imitation learning experiments. We also use an unroll length of 8 rather than 16. We use discrete delta end-effector translations, with ±0.05 meters for each axis, with no rotation (resulting in an action size of 6). We use a shaped reward of r = −len(remaining_targets)−‖e− g‖2, where e and g are the gripper and current target translation respectively." }, { "heading": "A.3.2 NETWORK ARCHITECTURES", "text": "In order to make the RL more tractable, and enable larger batch sizes, we use smaller network than used in the imitation learning experiments. Both methods use a Siamese network to process the RGB and coordinate-image inputs separately, and consist of 2 convolution (conv) layers with 16 and 32 channels (for each branch). We fuse these branches with a 32 channel conv and 1x1 kernel. The remainder of the architecture then follows the same as in the imitation learning experiments, but we instead use channel sizes of 64 throughout. The network outputs 6 values corresponding to the Q-values for each action. All methods use a learning rate of 0.001, target Q-learning τ = 0.001, batch size of 128, and leakyReLU activations. We use an epsilon greedy strategy for exploration that is decayed over 100k training steps from 1 to 0.1. We show the average shaped return during RL training on the sequential reacher task over 5 seeds in Fig 14. Both ESM policies succeed in reaching all 3 targets, whereas all baseline approaches generally only succeed in reaching 1 target. The Partial-Oracle-Omni (PO2) baseline also succeeds in reaching all 3 targets." }, { "heading": "A.4 OBJECT SEGMENTATION", "text": "" }, { "heading": "A.4.1 DATASET", "text": "For the object segmentation experiment, we use downsampled 60× 80 and 120× 160 images from the ScanNet dataset, which we first RGB-Depth align. We use a reduced dataset with frame skip of 30 to maximize diversity whilst minimizing dataset memory. Many sequences contain slow camera motion, resulting in adjacent frames which vary very little. 
We use the Eigen-13 classification labels as training targets." }, { "heading": "A.4.2 NETWORK", "text": "" }, { "heading": "ARCHITECTURES", "text": "The Mono, ESMN-RGB and ESMN networks all exhibit a U-Net architecture, and output object segmentation predictions in an egosphere map. ESMN-RGB and ESMN do so with a U-Net connecting the ESM\noutput to the final predictions, and Mono does so by projecting and probabilistically fusing the monocular segmentation predictions in a non-learnt manner. The network architectures are all presented in Fig 15. Regarding image padding at the borders for input to the convolutions, the Mono and LSTM/NTM baselines use standard zero padding, whereas ESMN pads the outer borders with the wrapped omni-directional image." }, { "heading": "A.4.3 TRAINING", "text": "For training, all losses are computed in the ego-sphere map frame of reference, either following convolutions for ESMN and ESMN-RGB, or following projection and probabilistic fusion for the Mono case. We compute ground-truth segmentation training target labels by projecting the ground truth monocular frames to form an ego-sphere target segmentation image, see the right-hand-side of\nFig 7 for an example. We chose this approach over computing the ground truth segmentations from the complete ScanNet meshes for implementational simplicity. All experiments use a batch size of 16, unroll size of 8 in the time dimension, and Adam optimizer with learning rate 1e− 4, trained for 250k steps." }, { "heading": "A.5 RUNTIME ANALYSIS", "text": "In this section, we perform a runtime analysis of the ESM memory module. We explore the extent to which inference speed is affected both by monocular resolution and egosphere resolution, as well the differences between CPU and GPU devices, and the choice of machine learning framework. Our ESM module is implemented using Ivy (Lenton et al., 2021), which is a templated deep learning framework supporting multiple backend frameworks. The implementation of our module is therefore jointly compatible with TensorFlow 2.0, PyTorch, MXNet, Jax and Numpy. We analyse the runtimes of both the TensorFlow 2.0 and PyTorch implementations, the results are presented in Tables 5, 6, 7, and 8. All analysis was performed while using ESM with RGB projections to reconstruct ScanNet scene 0002-00 shown in Fig 16. The timing is averaged over the course of the 260 frames in the frame-skipped image sequence, with a frame skip of 30, for this scene. ESM steps with 960 × 1280 monocular images were unable to fit into the 11GB of GPU memory when using the PyTorch implementation, and so these results are omitted in Table 8.\nWhat we see from these runtime results is that the off-the-shelf ESM module is fully compatible as a real-time mapping system. Compared to more computationally intensive mapping and fusion pipelines, the simplicity of ESM makes it particularly suitable for applications where depth and pose measurements are available, and highly responsive computationally cheap local mapping is a strong requirement, such as on-board mapping for drones." } ]
2021
END-TO-END EGOSPHERIC SPATIAL MEMORY
SP:0cde0537137f3eef6c9c0d6d580a610a07112a39
[ "This paper introduces an algorithm for training neural networks in a way that parameters preserve a given property. The optimization is based on using a transformation R that perturbs parameters in a way that the desired property is preserved. Instead of directly optimizing the parameters of the network, the optimization is carried out on the parameters B of the auxiliary transformation R. " ]
Many types of neural network layers rely on matrix properties such as invertibility or orthogonality. Retaining such properties during optimization with gradient-based stochastic optimizers is a challenging task, which is usually addressed by either reparameterization of the affected parameters or by directly optimizing on the manifold. This work presents a novel approach for training invertible linear layers. In lieu of directly optimizing the network parameters, we train rank-one perturbations and add them to the actual weight matrices infrequently. This P4Inv update allows keeping track of inverses and determinants without ever explicitly computing them. We show how such invertible blocks improve the mixing and thus the mode separation of the resulting normalizing flows. Furthermore, we outline how the P4 concept can be utilized to retain properties other than invertibility.
[]
[ { "authors": [ "P.A. Absil", "R. Mahony", "R. Sepulchre" ], "title": "Optimization algorithms on matrix manifolds", "venue": "ISBN 9780691132983", "year": 2009 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Joern-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Denis Boyda", "Gurtej Kanwar", "Sébastien Racanière", "Danilo Jimenez Rezende", "Michael S Albergo", "Kyle Cranmer", "Daniel C Hackett", "Phiala E Shanahan" ], "title": "Sampling using su(n) gauge equivariant flows", "venue": null, "year": 2008 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Jens Behrmann", "David K Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Krzysztof Choromanski", "David Cheikhi", "Jared Davis", "Valerii Likhosherstov", "Achille Nazaret", "Achraf Bahamou", "Xingyou Song", "Mrugank Akarte", "Jack Parker-Holder", "Jacob Bergquist", "Yuan Gao", "Aldo Pacchiano", "Tamas Sarlos", "Adrian Weller", "Vikas Sindhwani" ], "title": "Stochastic flows and geometric optimization on the orthogonal group", "venue": "In 37th International Conference on Machine Learning (ICML 2020),", "year": 2020 }, { "authors": [ "Nicola De Cao", "Ivan Titov", "Wilker Aziz" ], "title": "Block neural autoregressive flow", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2019 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Conor Durkan", "Artur Bekasov", "Iain Murray", "George Papamakarios" ], "title": "Neural spline flows", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Luca Falorsi", "Pim de Haan", "Tim R. 
Davidson", "Patrick Forré" ], "title": "Reparameterizing distributions on lie groups", "venue": "In Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Mathieu Germain", "Karol Gregor", "Iain Murray", "Hugo Larochelle" ], "title": "Made: Masked autoencoder for distribution estimation", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Adam Golinski", "Mario Lezcano-Casado", "Tom Rainforth" ], "title": "Improving normalizing flows via better orthogonal parameterizations", "venue": "In ICML Workshop on Invertible Neural Networks and Normalizing Flows,", "year": 2019 }, { "authors": [ "Henry Gouk", "Eibe Frank", "Bernhard Pfahringer", "Michael Cree" ], "title": "Regularisation of Neural Networks by Enforcing Lipschitz Continuity", "venue": "arXiv preprint arXiv:1804.04368,", "year": 2018 }, { "authors": [ "Will Grathwohl", "Ricky TQ Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "In 7th International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Luigi Gresele", "Giancarlo Fissore", "Adrián Javaloy", "Bernhard Schölkopf", "Aapo Hyvärinen" ], "title": "Relative gradient optimization of the jacobian term in unsupervised deep learning", "venue": "In 34th Conference on Neural Information Processing Systems (NeurIPS 2020),", "year": 2020 }, { "authors": [ "Kyle Helfrich", "Devin Willmott", "Qiang Ye" ], "title": "Orthogonal recurrent neural networks with scaled cayley transform", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jan Hermann", "Zeno Schätzle", "Frank Noé" ], "title": "Deep-neural-network solution of the electronic Schrödinger equation", "venue": "Nat. Chem.,", "year": 2020 }, { "authors": [ "Emiel Hoogeboom", "Rianne van den Berg", "Max Welling" ], "title": "Emerging convolutions for generative normalizing flows", "venue": "In International conference on machine learning,", "year": 2019 }, { "authors": [ "Emiel Hoogeboom", "Victor Garcia Satorras", "Jakub M Tomczak", "Max Welling" ], "title": "The convolution exponential and generalized sylvester flows", "venue": "In 34th Conference on Neural Information Processing Systems (NeurIPS 2020),", "year": 2020 }, { "authors": [ "Chin-Wei Huang", "David Krueger", "Alexandre Lacoste", "Aaron Courville" ], "title": "Neural autoregressive flows", "venue": "arXiv preprint arXiv:1804.00779,", "year": 2018 }, { "authors": [ "Gurtej Kanwar", "Michael S. Albergo", "Denis Boyda", "Kyle Cranmer", "Daniel C. Hackett", "Sébastien Racanière", "Danilo Jimenez Rezende", "Phiala E. Shanahan" ], "title": "Equivariant flow-based sampling for lattice gauge theory", "venue": "Phys. Rev. 
Lett.,", "year": 2020 }, { "authors": [ "Mahdi Karami", "Dale Schuurmans", "Jascha Sohl-Dickstein", "Laurent Dinh", "Daniel Duckworth" ], "title": "Invertible convolutional flow", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Jonas Köhler", "Leon Klein", "Frank Noé" ], "title": "Equivariant flows: exact likelihood generative learning for symmetric densities", "venue": "arXiv preprint arXiv:2006.02425,", "year": 2020 }, { "authors": [ "Mario Lezcano-Casado" ], "title": "Trivializations for gradient-based optimization on manifolds", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mario Lezcano-Casado" ], "title": "Curvature-dependant global convergence rates for optimization on manifolds of bounded geometry", "venue": "arXiv preprint arXiv:2008.02517,", "year": 2020 }, { "authors": [ "Mario Lezcano-Casado", "David Martı́nez-Rubio" ], "title": "Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative normalizing flows for variational bayesian neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Zakaria Mhammedi", "Andrew Hellicar", "Ashfaqur Rahman", "James Bailey" ], "title": "Efficient orthogonal parametrisation of recurrent neural networks using householder reflections", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Thomas Müller", "Brian McWilliams", "Fabrice Rousselle", "Markus Gross", "Jan Novák" ], "title": "Neural importance sampling", "venue": "ACM Trans. Graph.,", "year": 2019 }, { "authors": [ "Frank Noé", "Simon Olsson", "Jonas Köhler", "Hao Wu" ], "title": "Boltzmann generators: Sampling equilibrium states of many-body systems with deep", "venue": "learning. Science,", "year": 2019 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "George Papamakarios", "Eric Nalisnick", "Danilo Jimenez Rezende", "Shakir Mohamed", "Balaji Lakshminarayanan" ], "title": "Normalizing flows for probabilistic modeling and inference", "venue": null, "year": 1912 }, { "authors": [ "Tomas Pevny", "Vasek Smidl", "Martin Trapp", "Ondrej Polacek", "Tomas Oberhuber" ], "title": "Sum-producttransform networks: Exploiting symmetries using invertible transformations", "venue": "arXiv preprint arXiv:2005.01297,", "year": 2020 }, { "authors": [ "David Pfau", "James S. Spencer", "Alexander G.D.G. Matthews", "W.M.C. Foulkes" ], "title": "Ab initio solution of the many-electron schrödinger equation with deep neural networks", "venue": "Phys. Rev. 
Research,", "year": 2020 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Sébastien Racanière", "Irina Higgins", "Peter Toth" ], "title": "Equivariant hamiltonian flows", "venue": "arXiv preprint arXiv:1909.13739,", "year": 2019 }, { "authors": [ "Uri Shalit", "Gal Chechik" ], "title": "Coordinate-descent for learning orthogonal matrices through Givens rotations", "venue": "In 31st Int. Conf. Mach. Learn. ICML 2014,", "year": 2014 }, { "authors": [ "Fazlollah Soleymani" ], "title": "A fast convergent iterative solver for approximate inverse of matrices", "venue": "Numerical Linear Algebra with Applications,", "year": 2014 }, { "authors": [ "Esteban G Tabak", "Cristina V Turner" ], "title": "A family of nonparametric density estimation algorithms", "venue": "Communications on Pure and Applied Mathematics,", "year": 2013 }, { "authors": [ "Esteban G Tabak", "Eric Vanden-Eijnden" ], "title": "Density estimation by dual ascent of the loglikelihood", "venue": "Communications in Mathematical Sciences,", "year": 2010 }, { "authors": [ "Jakub M Tomczak", "Max Welling" ], "title": "Improving variational auto-encoders using householder flow", "venue": "arXiv preprint arXiv:1611.09630,", "year": 2016 }, { "authors": [ "Rianne Van Den Berg", "Leonard Hasenclever", "Jakub M. Tomczak", "Max Welling" ], "title": "Sylvester normalizing flows for variational inference", "venue": "In 34th Conference on Uncertainty in Artificial Intelligence 2018,", "year": 2018 }, { "authors": [ "Hao Wu", "Jonas Köhler", "Frank Noé" ], "title": "Stochastic normalizing flows", "venue": "arXiv preprint arXiv:2002.06707,", "year": 2020 }, { "authors": [ "Yuichi Yoshida", "Takeru Miyato" ], "title": "Spectral norm regularization for improving the generalizability of deep learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Linfeng Zhang", "Lei Wang" ], "title": "Monge-ampere flow for generative modeling", "venue": "arXiv preprint arXiv:1809.10188,", "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nMany deep learning applications depend critically on the neural network parameters having a certain mathematical structure. As an important example, reversible generative models rely on invertibility and, in the case of normalizing flows, efficient inversion and computation of the Jacobian determinant (Papamakarios et al., 2019).\nPreserving parameter properties during training can be challenging and many approaches are currently in use. The most basic way of incorporating constraints is by network design. Many examples could be listed, like defining convolutional layers to obtain equivariances, constraining network outputs to certain intervals through bounded activation functions, Householder flows (Tomczak & Welling, 2016) to enforce layer-wise orthogonality, or coupling layers (Dinh et al., 2014; 2016) that enforce tractable inversion through their twochannel structure. A second approach concerns the optimizers used for training. Optimization routines have been tailored for example to maintain Lipschitz bounds (Yoshida & Miyato, 2017) or efficiently optimize orthogonal linear layers (Choromanski et al., 2020).\nThe present work introduces a novel algorithmic concept for training invertible linear layers and facilitate tractable inversion and determinant computation, see Figure 1. In lieu of directly changing the network parameters, the optimizer operates on perturbations to these parameters. The actual network parameters are frozen, while a parameterized perturbation (a rank-one update to the frozen parameters) serves as a proxy for optimization. Inputs are passed through the perturbed network\nduring training. In regular intervals, the perturbed parameters are merged into the actual network and the perturbation is reset to the identity. This stepwise optimization approach will be referred to as property-preserving parameter perturbation, or P4 update. A similar concept was introduced recently by Lezcano-Casado (2019), who used dynamic trivializations for optimization on manifolds.\nIn this work, we use P4 training to optimize invertible linear layers while keeping track of their inverses and determinants using rank-one updates. Previous work (see Section 2) has mostly focused on optimizing orthogonal matrices, which can be trivially inverted and have unity determinant. Only most recently, Gresele et al. (2020) presented a first method to optimize general invertible matrices implicitly using relative gradients, thereby providing greater flexibility and expressivity. While their scheme implicitly tracks the weight matrices’ determinants, it does not facilitate cheap inversion. In contrast, the present P4Inv layers are inverted at the cost of roughly three matrix-vector multiplications.\nP4Inv layers can approximate arbitrary invertible matrices A ∈ GL(n). Interestingly, our stepwise perturbation even allows sign changes in the determinants and recovers the correct inverse after emerging from the ill-conditioned regime. Furthermore, it avoids any explicit computations of inverses or determinants. All operations occurring in optimization steps have complexity of at most O(n2). To our knowledge, the present method is the first to feature these desirable properties. We show how P4Inv blocks can be utilized in normalizing flows by combining them with nonlinear, bijective activation functions and with coupling layers. The resulting neural networks are validated for density estimation and as deep generative models. 
Finally, we outline potential applications of P4 training to network properties other than invertibility." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 RANK-ONE PERTURBATION", "text": "The P4Inv layers are based on rank-one updates, which are defined as transformations A 7→ A + uvT with u,v ∈ Rn. If A ∈ GL(n) and 1 + vTA−1u 6= 0, the updated matrix is also invertible and its inverse can be computed by the Sherman-Morrison formula\n(A+ uvT )−1 = A−1 − 1 1 + vTA−1u A−1uvTA−1. (1)\nFurthermore, the determinant is given by the matrix determinant lemma\ndet(A+ uvT ) = (1 + vTA−1u) det(A). (2)\nBoth these equations are widely used in numerical mathematics, since they sidestep the O(n3) cost and poor parallelization of both matrix inversion and determinant computation. The present work leverages these perturbation formulas to keep track of the inverses and determinants of weight matrices during training of invertible neural networks." }, { "heading": "2.2 EXISTING APPROACHES FOR TRAINING INVERTIBLE LINEAR LAYERS", "text": "Maintaining invertibility of linear layers has been studied in the context of convolution operators (Kingma & Dhariwal, 2018; Karami et al., 2019; Hoogeboom et al., 2019; 2020) and using Sylvester’s theorem (Van Den Berg et al., 2018). Those approaches often involve decompositions that include triangular matrices (Papamakarios et al. (2019)). While inverting triangular matrices has quadratic computational complexity, it is inherently sequential and thus fairly inefficient on parallel computers (see Section 4.1). More closely related to our work, Gresele et al. (2020) introduced a relative gradient optimization scheme for invertible matrices. In contrast to this related work, our method facilitates a cheap inverse pass and allows sign changes in the determinant. On the contrary, their method operates in a higher-dimensional search space, which might speed up the optimization in tasks that do not involve inversion during training." }, { "heading": "2.3 NORMALIZING FLOWS", "text": "Cheap inversion and determinant computation are specifically important in the context of normalizing flows, see Appendix C. They were introduced in Tabak et al. (2010); Tabak & Turner (2013)\nand are commonly used, either in variational inference (Rezende & Mohamed, 2015; Tomczak & Welling, 2016; Louizos & Welling, 2017; Van Den Berg et al., 2018) or for approximate sampling from distributions given by an energy function (van den Oord et al., 2018; Müller et al., 2019; Noé et al., 2019; Köhler et al., 2020). The most important normalizing flow architectures are coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018; Müller et al., 2019), which are a subclass of autoregressive flows (Germain et al., 2015; Papamakarios et al., 2017; Huang et al., 2018; De Cao et al., 2019), and (2) residual flows (Chen et al., 2018; Zhang et al., 2018; Grathwohl et al., 2018; Behrmann et al., 2019; Chen et al., 2019). A comprehensive survey can be found in Papamakarios et al. (2019)." }, { "heading": "2.4 OPTIMIZATION UNDER CONSTRAINTS AND DYNAMIC TRIVIALIZATIONS", "text": "Constrained matrices can be optimized using Riemannian gradient descent on the manifold (Absil et al. (2009)). A reparameterization trick for general Lie groups has been introduced in Falorsi et al. (2019). 
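As a brief aside on the rank-one machinery of Section 2.1, the following NumPy snippet is an illustrative sketch, not taken from the authors' code, that numerically confirms equations 1 and 2 for a random, well-conditioned matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned base matrix
A_inv = np.linalg.inv(A)
u, v = rng.standard_normal(n), rng.standard_normal(n)

denom = 1.0 + v @ A_inv @ u                          # must stay away from zero
# Sherman-Morrison update of the inverse (equation 1)
A_pert_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom
# Matrix determinant lemma (equation 2)
det_pert = denom * np.linalg.det(A)

A_pert = A + np.outer(u, v)                          # the rank-one perturbed matrix
assert np.allclose(A_pert_inv, np.linalg.inv(A_pert))
assert np.isclose(det_pert, np.linalg.det(A_pert))
```

Both updates involve only matrix-vector and outer products, i.e. O(n^2) work; the explicit calls to np.linalg.inv and np.linalg.det appear here solely for verification.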
For the unitary / orthogonal group there are multiple more specialized approaches, including using the Cayley transform (Helfrich et al., 2018), Householder Reflections (Mhammedi et al., 2017; Meng et al., 2020; Tomczak & Welling, 2016), Givens rotations (Shalit & Chechik, 2014; Pevny et al., 2020) or the exponential map (Lezcano-Casado & Martı́nez-Rubio, 2019; Golinski et al., 2019).\nLezcano-Casado (2019) introduced the concept of dynamic trivializations. This method performs training on manifolds by combining ideas from Riemannian gradient descent and trivializations (parameterizations of the manifold via an unconstrained space). Dynamic trivializations were derived in the general settings of Riemannian exponential maps and Lie groups. Convergence results were recently proven in follow-up work (Lezcano-Casado (2020)). P4 training resembles dynamic trivializations in that both perform a number of iteration steps in a fixed basis and infrequently lift the optimization problem to a new basis. In contrast, the rank-one updates do not strictly parameterize GL(n) but instead can access all of Rn×n. This introduces the need for numerical stabilization, but enables efficient computation of the inverse and determinant through equation 1 and equation 2, which is the method’s unique and most important aspect." }, { "heading": "3 P4 UPDATES: PRESERVING PROPERTIES THROUGH PERTURBATIONS", "text": "" }, { "heading": "3.1 GENERAL CONCEPT", "text": "A deep neural network is a parameterized function MA : Rn → Rm with a high-dimensional parameter tensor A. Now, let S define the subset of feasible parameter tensors so that the network satisfies a certain desirable property. In many situations, generating elements of S from scratch is much harder than transforming any A ∈ S into other elements A′ ∈ S, i.e. to move within S. The efficiency of perturbative updates can be leveraged as an incremental approach to retain certain desirable properties of the network parameters during training. Rather than optimizing the parameter tensors directly, we instead use a transformation RB : S → S, which we call a property-preserving parameter perturbation (P4). A P4 transforms a given parameter tensor A ∈ S into another tensor with the desired property A′ ∈ S. The P4 itself is also parameterized, by a tensor B. We demand that the identity idS : A 7→ A be included in the set of these transformations, i.e. there exists a B0 such that RB0 = idS.\nDuring training, the network is evaluated using the perturbed parameters à = RB(A). The parameter tensor of the perturbation, B, is trainable via gradient-based stochastic optimizers, while the actual parameters A are frozen. In regular intervals, every N iterations of the optimizer, the optimized parameters of the P4, B, are merged into A as follows:\nAnew ← RB(A), (3) Bnew ← B0. (4)\nThis update does not modify the effective (perturbed) parameters of the network Ã, since\nÃnew = RBnew(Anew) = RB0(RB(A)) = RB(A) = Ã.\nHence, this procedure enables a steady, iterative transformation of the effective network parameters and stochastic gradient descent methods can be used for training without major modifications. Furthermore, given a reasonable P4, the iterative update of A can produce increasingly non-trivial transformations, thereby enabling high expressivity of the resulting neural networks. This concept is summarized in Algorithm 1. 
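To make this procedure concrete before the pseudocode of Algorithm 1, here is a minimal PyTorch-style sketch of the generic P4 loop. The function and argument names (p4_train, fresh_B, merge_interval) are ours rather than the paper's, the data loader is assumed to yield at least n_steps batches, and restarting the optimizer state after each merge is only one of several possible bookkeeping choices.

```python
import torch

def p4_train(model, R, fresh_B, init_params, data_loader, loss_fn,
             make_optimizer, n_steps, merge_interval):
    """Sketch of the generic P4 loop (Algorithm 1); not the authors' implementation.

    model(params, x) -- network evaluated with explicitly passed parameters
    R(params, B)     -- property-preserving perturbation R_B(A)
    fresh_B()        -- new trainable perturbation tensor with R_{B_0} = identity
    """
    params = init_params()               # frozen parameters A, assumed to lie in S
    B = fresh_B()                        # only the perturbation is seen by the optimizer
    opt = make_optimizer([B])
    data = iter(data_loader)             # assumed to yield at least n_steps batches
    for step in range(n_steps):
        x, y0 = next(data)
        y = model(R(params, B), x)       # forward pass through the perturbed network
        loss = loss_fn(y, y0)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if (step + 1) % merge_interval == 0:
            with torch.no_grad():
                params = R(params, B)    # merging step: A <- R_B(A) stays in S
            B = fresh_B()                # reset the perturbation to the identity
            opt = make_optimizer([B])    # simplest choice: restart optimizer state
    return params
```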
Further extensions to stabilize the merging step will be exemplified in Section 3.3.\nAlgorithm 1: P4 Training Input : Model M , training data, loss function J , number of optimization steps Nsteps, merge interval N , perturbation R, optimizer OPT initialize A ∈ S; initialize B := B0; for i := 1 . . . Nsteps do\nX,Y0 := i-th batch from training data; à := RB(A) ; // perturb parameters Y := MÃ(X) ; // evaluate perturbed model j := J(Y ,Y0) ; // evaluate loss function gradient := ∂j/∂B ; // backpropagation B := OPT(B, gradient) ; // optimization step if i mod N = 0 then\nA := RB(A) ; // merging step: update frozen parameters B := B0 ; // merging step: reset perturbation\nend end\n3.2 P4INV: INVERTIBLE LAYERS VIA RANK-ONE UPDATES\nAlgorithm 2: P4Inv Merging Step Input : Matrix A, Inverse Ainv, Determinant d det factor := (1 + vTAinvu) new det := det factor · d; if ln |det factor| and ln |new det| are sane then\n/* update frozen parameters (equation 3) */ d := new det; A := Ru,v(A); Ainv := Ainv − 11+vTAinvuAinvuv\nTAinv; /* reset perturbation (equation 4) */ u := 0; v := N (0, In) ; // random reinitialization\nend\nThe P4 algorithm can in principle be applied to properties concerning either individual blocks or the whole network. Here we train individual invertible linear layers via rank-one perturbations. Each of these P4Inv layers is an affine transformation Ax+b. In this context, the weight matrix A is handled by the P4 update and the bias b is optimized without perturbation. Without loss of generality, we present the method for layers Ax.\nWe define S as the set of invertible matrices, for which we know the inverse and determinant. Then the rank-one update\nRu,v(A) = A+ uv T (5)\nwith B = (u,v) ∈ R2n is a P4 on S due to equations 1 and 2, which also define the inverse pass and determinant computation of the perturbed layer, see Appendix B for details. The perturbation can be reset by setting u, v, or both to zero. In subsequent parameter studies, a favorable training efficiency was obtained by setting u to zero and reinitializing v from Gaussian noise. (Using a unity standard deviation for the reinitialization ensures that gradient-based updates to u are on the same order of magnitude as updates to a standard linear layer so that learning rates are transferable.) The\ninverse matrix Ainv and determinant d are stored in the P4 layer alongside A and updated according to the merging step in Algorithm 2. Merges are skipped whenever the determinant update signals ill conditioning of the inversion. This is further explained in the following subsection." }, { "heading": "3.3 NUMERICAL STABILIZATION", "text": "The update to the inverse and determinant can become ill-conditioned if the denominator in equation 1 is close to zero. Thankfully, the determinant lemma from equation 2 provides an indicator for illconditioned updates (if absolute determinants become very small or very large). This indicator in combination with the stepwise merging approach can be used to tame potential numerical issues. Concretely, the following additional strategies are applied to ensure stable optimizations.\n• Skip Merges: Merges are skipped whenever the determinant update falls out of predefined bounds, see Appendix A for details. This allows the optimization to continue without propagating numerical errors into the actual weight matrix A. Note that numerical errors in the perturbed parameters à are instantaneous and vanish when the optimization leaves the ill-conditioned regime. 
As shown in our experiments in Section 4.2, merging steps that occur relatively infrequently without drastically hurting the efficiency of the optimization.\n• Penalization: The objective function can be augmented by a penalty function g(u,v) in order to prevent entering the ill-conditioned regime {(u,v) : det (Ru,v(A)) = 0} , see Appendix A.\n• Iterative Inversion: In order to maintain a small error of the inverse throughout training, the inverse is corrected after every Ncorrect-th merging step by one iteration of an iterative matrix inversion (Soleymani, 2014). This operation is O(n3) yet is highly parallel." }, { "heading": "3.4 USE IN INVERTIBLE NETWORKS", "text": "Our invertible linear layers can be employed in normalizing flows (Appendix C) thanks to having access to the determinant at each update step. We tested them in two different application scenarios:\nP4Inv Swaps In a first attempt, we integrate P4Inv layers with RealNVP coupling layers by substituting the simple coordinate swaps with general linear layers (see Figure 9 in Appendix H). Fixed coordinate swaps span a tiny subset of O(n). In contrast, P4Inv can express all of GL(n). We thus expect more expressivity with the help of better mixing. The parameter matrix A is initialized with a permutation matrix. Note that the P4 training is applied exclusively to the P4Inv layers rather than all parameters.\nNonlinear invertible layer In a second attempt, we follow the approach of Gresele et al. (2020) and stack P4Inv layers with intermediate bijective nonlinear activation functions. Here we use the elementwise Bent identity\nB(x) =\n√ x2 + 1− 1\n2 + x.\nIn contrast to more standard activation functions like sigmoids or ReLU variants, the Bent identity is an R-diffeomorphism. It thereby provides smooth gradients and is invertible over all of R." }, { "heading": "4 EXPERIMENTS", "text": "P4Inv updates are demonstrated in three steps. After a runtime comparison, single P4Inv layers are first fit to linear problems to explore their general capabilities and limitations. Second, to show their performance in deep architectures, P4Inv blocks are used in combination with the Bent identity to perform density estimation of common two-dimensional distributions. Third, to study the generative performance of normalizing flows that use P4Inv blocks, we train a RealNVP normalizing flow with P4 swaps as a Boltzmann generator (Noé et al., 2019). One important feature of this test problem is the availability of a ground truth energy function that is highly sensitive to any numerical problems in the network inversion." }, { "heading": "4.1 COMPUTATIONAL COST", "text": "P4Inv training facilitates cheap inversion and determinant computation. To demonstrate those benefits, the computational cost of computing the forward and inverse KL divergence in a normalizing flow framework was compared with standard linear layers and an LU decomposition. Importantly, the KL divergence includes a network pass and the Jacobian determinant.\nFigure 2 shows the wall-clock times per evaluation on an NVIDIA GeForce GTX 1080 card with a batch size of 32. As the matrix dimension grows, standard linear layers become increasingly infeasible due to the O(n3) cost of both determinant computation and inversion. The LU decomposition is fast for forward evaluation since the determinant is just the product of diagonal entries. However, the inversion does not parallelize well so that inverse pass of a 4096-dimensional matrix was almost as slow as a naive inversion. 
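To illustrate in code why both directions stay at O(n^2), the following hypothetical sketch of a P4Inv-style layer in the spirit of Sections 3.2 and 3.3 implements the perturbed forward pass, the inverse via equation 1, the log-determinant via equation 2, and a guarded merging step. Class and attribute names are our own, and the scalar threshold is a simplified stand-in for the determinant bounds of Appendix A.

```python
import torch

class RankOnePerturbedLinear(torch.nn.Module):
    """y = (A + u v^T) x + b with cached A^{-1} and log|det A| (illustrative sketch)."""

    def __init__(self, dim):
        super().__init__()
        eye = torch.eye(dim)                       # A could also start as a permutation matrix
        self.register_buffer("A", eye.clone())
        self.register_buffer("A_inv", eye.clone())
        self.register_buffer("log_abs_det", torch.zeros(()))
        self.u = torch.nn.Parameter(torch.zeros(dim))
        self.v = torch.nn.Parameter(torch.randn(dim))
        self.b = torch.nn.Parameter(torch.zeros(dim))

    def _denom(self):
        return 1.0 + self.v @ (self.A_inv @ self.u)

    def forward(self, x):                          # x: (batch, dim); all ops are O(n^2)
        y = x @ self.A.T + (x @ self.v)[:, None] * self.u + self.b
        log_det = self.log_abs_det + torch.log(torch.abs(self._denom()))
        return y, log_det                          # log|det| is shared by all samples

    def inverse(self, y):                          # Sherman-Morrison inverse (equation 1)
        z = (y - self.b) @ self.A_inv.T
        return z - (z @ self.v)[:, None] * (self.A_inv @ self.u) / self._denom()

    @torch.no_grad()
    def merge(self):                               # Algorithm-2-style merging step
        denom = self._denom()
        if torch.abs(denom) < 1e-6:                # simplified sanity check on the update
            return                                 # skip merge in the ill-conditioned regime
        self.A_inv -= torch.outer(self.A_inv @ self.u, self.v @ self.A_inv) / denom
        self.A += torch.outer(self.u, self.v)
        self.log_abs_det += torch.log(torch.abs(denom))
        self.u.zero_()                             # reset perturbation: u = 0
        self.v.normal_()                           # reinitialize v from a standard Gaussian
```

Counting operations, the forward and inverse passes each need only a few matrix-vector products and never form an explicit inverse or determinant, in contrast to the LU baseline above, whose inverse pass remained slow.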
Note that this poor performance transfers to other decompositions involving triangular matrices, such as the Cholesky decomposition. In contrast, the P4Inv layers performed well for both the forward and inverse evaluation. Due to their O(n2) scaling, they outperformed the two other methods by two orders of magnitude on the 4096-dimensional problem.\nThis comparison shows that P4Inv layers are especially useful in the context of normalizing flows whose forward and inverse have to be computed during training. This situation occurs when flows are trained through a combination of density estimation and sampling." }, { "heading": "4.2 LINEAR PROBLEMS", "text": "Fitting linear layers to linear training data is trivial in principle using basic linear algebra methods. However, the optimization with stochastic gradient descent at a small learning rate will help illuminate some important capabilities and limitations of P4Inv layers. It will also help answer the open question if gradient-based optimization of an invertible matrix A allows crossing the ill-conditioned regime {A ∈ Rn×n : detA = 0}. Furthermore, the training efficiency of perturbation updates can be directly compared to arbitrary linear layers that are optimized without perturbations.\nSpecifically, each target problem is defined by a square matrix T . The training data is generated by sampling random vectors x and computing targets y = Tx. Linear layers are then initialized as the identity matrix A := I and the loss function J(A) = ‖Ax− y‖2 is minimized in three ways:\n1. by directly updating the matrix elements (standard training of linear layers),\n2. through P4Inv updates, and\n3. through the inverses of P4Inv updates, i.e., by training A through the updates in equation 1.\nThe first linear problem is a 32-dimensional positive definite matrix with eigenvalues close to 1. Figure 3 shows the evolution of eigenvalues and losses during training. All three methods of optimization successfully recovered the target matrix. While training P4Inv via the inverse led to slower convergence, the forward training of P4Inv converged in the same number of iterations as an unconstrained linear layer for a merge interval N = 1. Increasing the merge interval to N = 10 only affected the convergence minimally. Even for N = 50, the optimizer took only twice as many iterations as for an unconstrained linear layer.\nThe second target matrix was a 128-dimensional special orthogonal matrix. As shown in Figure 4, the direct optimization converged to the target matrix in a linear fashion. In contrast, the matrices generated by the P4Inv update avoided the region around the origin. This detour led to a slower convergence in the initial phase of the optimization. Notably, the inverse stayed accurate up to 5 decimals throughout training. Training an inverse P4Inv was not successful for this example. This shows that the inverse P4Inv update can easily get stuck in local minima. This is not surprising as the elements of the inverse (equation 1) are parameterized by R2n-dimensional rational quadratic functions. When training against linear training data with a unique global optimum, the multimodality can prevent convergence. When training against more complex target data, the premature convergence was mitigated, see Appendix G. However, this result suggests that the efficiency of the optimization may be perturbed by very complex nonlinear parameterizations.\nThe final target matrix was T = −I101, a matrix with determinant -1. 
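Before turning to this determinant-sign-crossing experiment, a compact sketch of the fitting setup shared by the linear problems may be helpful. It corresponds to training mode (2) from the list above, reuses the hypothetical RankOnePerturbedLinear layer sketched earlier, and uses helper names and hyperparameters of our own choosing rather than the paper's exact values:

```python
import torch

def fit_linear_target(T, layer, n_steps=20_000, batch_size=256, lr=1e-3, merge_every=10):
    """Fit a rank-one-perturbed layer to targets y = T x (sketch of the Section 4.2 setup)."""
    n = T.shape[0]
    opt = torch.optim.Adam([layer.u, layer.v], lr=lr)  # only the perturbation is trained
    for step in range(n_steps):
        x = torch.randn(batch_size, n)                 # random input vectors
        y = x @ T.T                                    # linear training targets
        y_pred, _ = layer(x)                           # perturbed forward pass
        loss = ((y_pred - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        if (step + 1) % merge_every == 0:
            layer.merge()                              # fold u v^T into A, then reset (u, v)
    return layer
```

Mode (1) would instead hand a dense, unconstrained matrix to the optimizer, and mode (3) would generate predictions through the rank-one-updated inverse; both require only swapping out the forward call above.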
In order to iteratively converge to the target matrix, the set of singular matrices has to be crossed. As expected, using a nonzero penalty parameter prevented the P4Inv update from converging to the target. However, when no penalty was applied, the matrix converged almost as fast as the usual linear training, see Figure 5. When the determinant approached zero, inversion became ill-conditioned and residues increased. However, after reaching the other side, the inverse was quickly recovered up to 5 decimal digits. Notably, the determinant also converged to the correct value despite never being explicitly corrected.\nThe favorable training efficiency encountered in those linear problems is surprising given the considerably reduced search space dimension. In fact, a subsequent rank-one parameterization of an MNIST classification task suggests that applications in nonlinear settings also converge as fast as standard MLPs in the initial phase, but slow down when approaching the optimum, see Appendix I." }, { "heading": "4.3 2D DISTRIBUTIONS", "text": "The next step was to assess the effectiveness of P4Inv layers in deep networks. This was particularly important to rule out a potentially harmful accumulation of rounding errors. Density estimation of common 2D toy distributions was performed by stacking P4Inv layers with Bent identities and their inverses. For comparison, an RNVP flow was constructed with the same number of tunable parameters as the P4Inv flow, see Appendix G for details.\nFigure 5: Training towards the matrix T = −I101 using no penalty. Residue of inversion (black line) and absolute determinants of the standard linear and P4Inv layer. Both converge to the target in a similar number iterations (dashed line).\nFigure 6: Density estimation for two-dimensional distributions from RealNVP (RNVP) and P4Inv networks with similar numbers of tunable parameters.\nFigure 6 compares the generated distributions from the two models. The samples from the P4Inv model aligned favorably with the ground truth. In particular, they reproduced the multimodality of the data. In contrast to RNVP, P4Inv cleanly separated the modes, which underlines the favorable mixing achieved by general linear layers with elementwise nonlinear activations." }, { "heading": "4.4 BOLTZMANN GENERATORS OF ALANINE DIPEPTIDE", "text": "Boltzmann generators (Noé et al., 2019) combine normalizing flows with statistical mechanics in order to draw direct samples from a given target density, e.g. given by a many-body physics system. This setup is ideally suited to assess the inversion of normalizing flows since the given physical potential energy defines the target density and thereby provides a quantitative measure for the sample quality. In molecular examples, specifically, the target densities are multimodal, contain singularities, and are highly sensitive to small perturbations in the atomic positions. Therefore, the generation of the 66-dimensional alanine dipeptide conformations is a highly nontrivial test for generative models.\nThe training efficiency and expressiveness of Boltzmann Generators (see Appendix E for details) were compared between pure RNVP baseline models as used in Noé et al. and models augmented by P4Inv swaps (see Section 3.4). The deep neural network architecture and training strategy are described in Appendix H. Both flows had 25 blocks as from Figure 9 in the appendix, resulting in 735,050 RNVP parameters. In contrast, the P4Inv blocks had only 9,000 tunable parameters. 
Due to this discrepancy and the depth of the network, we cannot expect dramatic improvements from adding P4Inv swaps. However, significant numerical errors in the inversion would definitely show in such a setup due to the highly sensitive potential energy.\nFigure 7 (left) shows the energy statistics of generated samples. To demonstrate the sensitivity of the potential energy, the training data was first perturbed by 0.004 nm (less than 1% of the total length of the molecule) and energies were evaluated for the perturbed data set. As a consequence, the mean of the potential energy distribution increased by 13 kBT .\nIn comparison, the Boltzmann generators produced much more accurate samples. The energy distributions from RNVP and P4Inv blocks were only shifted upward by ≈ 2.6 kBT and rarely generated samples with infeasibly large energies. The performance of both models was comparable with slight advantages for models with P4Inv swaps. This shows that the P4Inv inverses remained intact during training. Finally, Figure 7 (right) shows the joint distribution of the two backbone torsions. Both Boltzmann generators reproduced the most important local minima of the potential energy. As in the 2D toy problems, the P4Inv layers provided a cleaner separation of modes." }, { "heading": "5 OTHER POTENTIAL APPLICATIONS OF P4 UPDATES", "text": "Perturbation theorems are ubiquitous in mathematics and physics so that P4 updates will likely prove useful for retaining other properties of individual layers or neural networks as a whole. To this end, the P4 scheme in Section 3.1 is formulated in general terms. Orthogonal matrices may be parameterized in a similar manner to P4Inv through Givens rotations or double-Householder updates. Optimizers that constrain a joint property of multiple layers have previously been used to enforce Lipschitz bounds (Gouk et al. (2018), Yoshida & Miyato (2017)) and could also benefit from the present work. Applications in physics often rely on networks that obey the relevant physical invariances and equivariances (e.g. Köhler et al. (2020); Boyda et al. (2020); Kanwar et al. (2020); Hermann et al. (2020); Pfau et al. (2020); Rezende et al. (2019)). These properties might also be amenable to P4 training if suitable property-preserving perturbations can be defined." }, { "heading": "6 CONCLUSIONS", "text": "We have introduced P4Inv updates, a novel algorithmic concept to preserve tractable inversion and determinant computation of linear layers using parameterized perturbations. Applications to normalizing flows proved the efficiency and accuracy of the inverses and determinants during training. A crucial aspect of the P 4 method is its decoupled merging step, which allows stable and efficient updates. As a consequence, the invertible linear P4Inv layers can approximate any well-conditioned regular matrix. This feature might open up new avenues to parameterize useful subsets of GL(n) through penalty functions. Since perturbation theorems like the rank-one update exist for many classes of linear and nonlinear functions, we believe that the P4 concept presents an efficient and widely applicable way of preserving desirable network properties during training." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank the anonymous reviewers for their valuable suggestions that helped a lot in improving the manuscript." 
}, { "heading": "A SANITY CHECK FOR THE RANK-ONE UPDATE", "text": "Based on the matrix determinant lemma (equation 2) rank-one updates are ill-conditioned if the term G := 1+vTAinvu vanishes. If such a perturbation is ever merged into the network parameters, the stored matrix determinant and inverse degrade and cannot be recovered. Therefore, merges are only accepted if the following conditions hold:\nC (0) min ≤ ln |G| ≤ C (0) max and C (1) min ≤ ln |GdetA| ≤ C (1) max.\nThe constants Cmin and Cmax regularize the matrix A and its inverse Ainv, respectively, since ln |detA| = − ln |detAinv|. The penalty function is also based on these constraints:\ng(u,v) = Cp · ( ReLU2 (ln |G| − Cmax) + ReLU2 (Cmin − ln |G|)\n+ReLU2 (ln |GdetA| − Cmax) + ReLU2 (Cmin − ln |GdetA|) )\nwith a penalty parameter Cp. For the experiments in this work, we used Cmin = −2, Cmax = 15, C\n(0) min = −6, C (0) max = inf , C (1) min = −2.5, and C (1) max = 15.5.\nB IMPLEMENTATION OF P4INV LAYERS\nIn practice, the P4Inv layer stores the current inverse Ainv ≈ A−1 and determinant alongside A. The forward pass of an input vector x can be computed efficiently by first computing uTx and then adding vuTx to Ax. The inverse pass can be similarly structured to avoid any matrix multiplies.\nNote that P4 training can straightforwardly be applied to only a part of the network parameters. In this case, all other parameters are directly optimized without the perturbation proxy and the gradient of the loss function J is composed of elements from ∂J/∂B and ∂J/∂A. Furthermore, the perturbation of parameters and evaluation of the perturbed model can sometimes be done more efficiently in one step. Also, the merging step from equation 3 and equation 4 can additionally be augmented to rectify numerical errors made during optimization.\nIn order to allow crossings of otherwise inaccessible regions of the parameter space the merging step was accepted every Nforce merges, even if the determinant was poorly conditioned. If u or v ever contain non-numeric values, merging steps were rejected and the perturbation is reset without performing a merge." }, { "heading": "C NORMALIZING FLOWS", "text": "A Normalizing flow is a learnable bijection f : Rn → Rn which transforms a simple base distribution p(z), by first sampling z ∼ p(z), and then transforming into x = f(z). According to the change of variables, the transformed sample x has the density:\nq(x) = p ( f−1(x) ) ∣∣detJf−1(x)∣∣ . Given a target distribution ρ(x), this tractable density allows minimizing the reverse KullbackLeibler (KL) divergence DKL [q(x)‖ρ(x)] , e.g., if ρ(x) is known up to a normalizing constant, or the forward KL divergence DKL [ρ(x)‖q(x)] , if having access to samples from ρ(x)." }, { "heading": "D COUPLING LAYERS", "text": "Maintaining invertibility and computing the inverse and its Jacobian is a challenge, if f could be an arbitrary function. Thus, it is common to decompose f in a sequence of coupling layers\nf = g(1) ◦ S(1) ◦ . . . g(K) ◦ S(K).\nEach g(k) is constrained to the form g(k)(x) = T (k)(x1, x2)⊕x2, where x = x1⊕x2, x1 ∈ Rm and x2 ∈ Rn−m. Here T (k) : Rm × Rn−m → Rm is a point-wise transformation, which is invertible in\nits first component given a fixed x2. Possible implementations include simple affine transformations (Dinh et al., 2014; 2016) as well as complex transformations based on splines (Müller et al., 2019; Durkan et al., 2019). 
Each g(k) thus has a block-triangular Jacobian matrix\nJg(k) =\n[ JT (k) M (k)\n0 Im×m\n] ,\nwhere JT (k) is a (n − m) × (n − m) diagonal matrix. The layers S(k) take care of achieving a mixing between coordinates and are usually represented as simple swaps\nS(k) = [\n0 In−m×n−m Im×m 0\n] .\nThe total log Jacobian of fθ is then given by\nlog detJfθ = K∑ k=1 tr [log (JT (k))] + log detS(k),\nwhere log detS(k) = 0 when S(k) is given by the simple swaps above." }, { "heading": "E BOLTZMANN GENERATORS", "text": "Boltzmann Generators (Noé et al., 2019) are generative neural networks that sample with respect to a known target energy, as for example given by molecular force fields. The potential energy u : R3n → R of such systems is a function of the atom positions x. The corresponding probability of each configuration x is given by the Boltzmann distribution px(x) = exp(−βu(x))/Z, where β = 1/(kBT ) is inverse temperature with the Boltzmann constant kB . The normalization constant Z is generally not known.\nThe generation uses a normalizing flow and training is performed via a combination of density estimation and energy-based training. Concretely, the following loss function is minimized\nJ(A) = wlJl(A) + weJe(A), (6)\nwhere wl+we = 1 denote weighting factors between density estimation and energy-based training. The maximum likelihood and KL divergence in equation 6 are defined respectively as\nJl = Ex∼px [ 1\n2 ‖Fxz(x;A)‖2 − lnRxz(x;A)\n] and\nJe = Ez∼pz [u(Fzx(z;A))− lnRzx(z;A)] .\nAs an example, we train a model for alanine dipeptide, a molecule with 22 atoms, in water. Water is represented by an implicit solvent model. This system was previously used in Wu et al. (2020). Training data was generated using MD simulations at 600K to sample all metastable regions." }, { "heading": "F TRAINING OF LINEAR TOY PROBLEMS", "text": "The P4Inv layers were trained using a stochastic gradient descent optimizer with a learning rate of 10−2 and the hyperparameters from Table 1. The matrices were initialized with the identity." }, { "heading": "G TRAINING OF 2D DISTRIBUTIONS", "text": "The P4Inv layers used for 2D distributions were composed of blocks containing\n1. a P4Inv layer with bias (2D),\n2. an elementwise Bent identity,\n3. another P4Inv layer with bias (2D), and\n4. an inverse elementwise Bent identity.\n100 of these blocks were stacked resulting in 1200 tunable parameters (counting elements of u and v). P4Inv training was performed with N = 10, Nforce = 10 and Ncorrect = 50. No penalty was used. Matrices were initialized with the identity I2.\nThe RealNVP network used for comparison was composed of five RealNVP layers. The additive and multiplicative conditioner networks used dense nets with two 6-dimensional hidden layers each and tanh activation functions, respectively. This resulted in a total of 1230 parameters.\nThe examples are taken from Grathwohl et al. (2018). Priors were two-dimensional standard normal distributions. Adam optimization was performed for 8 epochs of 20000 steps and with a batch size of 200. The initial learning rate was 5 · 10−3 and decreased by a factor of 0.5 in each epoch. After each merging step, the metaparameters of the Adam optimizer were reset to their initial state.\nFigure 8 complements Figure 6 by showing samples from a network with only inverse P4Inv blocks. While the samples are worse than with forward blocks, the distributions are still well represented. 
This result indicates that the premature convergence encountered for linear test problems is a lesser problem in nonlinear problems and deep architectures." }, { "heading": "H TRAINING OF BOLTZMANN GENERATORS", "text": "The normalizing flows in Boltzmann generators were composed of the blocks shown in Figure 9 and a mixed coordinate transform as defined in Noé et al. (2019). The test problem was taken from Dibak et al. (2020). RNVP layers contained two 60-dimensional hidden layers each and ReLU and tanh activation functions for both t and s, respectively. The baseline model consisted of blocks of alternated RNVP blocks and swaps. The P4Inv model used invertible linear layers instead of the swapping of input channels in the baseline model. The computational overhead due to this change was negligible. RNVP parameters were optimized directly as usual and only the P4Inv layers are affected by the P4 updates. Merging was performed every N = 100 steps with Nforce = 10 and Ncorrect = 50. No penalty was used, i.e. C0 = 0.0. The P4Inv matrices were initialized with the reverse permutation, i.e. Aij = δi(n−j).\nDensity estimation with Adam was performed for 40,000 optimization steps with a batch size of 256 and a learning rate of 10−3. A short energy-based training epoch was appended for 2000 steps with a learning rate of 10−5 and we/wl = 0.05. After each merging step, the metaparameters of the Adam optimizer were reset to their initial state for all P4Inv parameters." }, { "heading": "I MNIST CLASSIFICATION VIA RANK-ONE UPDATES", "text": "Compared to fully-connected multi-layer perceptrons (MLP), rank-one updates reduce the number of independent trainable parameters per layer from m · n to m + n − 1, where m and n are the input and output dimension, respectively. It is therefore useful to study how the reduced search space dimension affects the training efficiency in a nonlinear setup. To this end, non-invertible classifier MLPs were trained on MNIST with an unrestricted search space as well as through rankone updates. The original network was composed of two convolutional layers, two dropout layers, and two linear layers. In P4 training, the two linear layers (dimensions 9216×128 and 128×10) were trained through rank-one updates which were merged in every iteration (N = 1). Naturally, this training did not involve any inverse or determinant updates. A vanilla SGD optimizer was used with various learning rates.\nFigure 10 shows the training loss and test accuracy during training. As for the linear problems from the previous subsection, the training efficiency was virtually unaffected during the first phase of the optimization, i.e. when the descent direction did not change significantly between subsequent iterations. However, as the descent direction became more noisy in the vicinity of the optimum, the training with rank-one updates became less efficient." } ]
2020
null
SP:6ba57dba7e320797ca311e5c7d6e55e130384df2
[ "To summarize, this paper proposed a new noise injection method that is easy to implement and is able to replace the original noise injection method in StyleGAN 2. The approach is supported by detailed theoretical analysis and impactful performance improvement on GAN training and inversion. The results show that they are able to achieve a considerable improvement on DCGAN and StyleGAN2." ]
Noise injection is an effective way of circumventing overfitting and enhancing generalization in machine learning, and its rationale has been validated in deep learning as well. Recently, noise injection has exhibited surprising effectiveness in generating high-fidelity images with Generative Adversarial Networks (e.g., StyleGAN). Despite its successful applications in GANs, the mechanism underlying its effectiveness is still unclear. In this paper, we propose a geometric framework to theoretically analyze the role of noise injection in GANs. Based on Riemannian geometry, we model the noise injection framework as fuzzy equivalence on geodesic normal coordinates. Guided by our theory, we find that existing methods are incomplete and devise a new strategy for noise injection. Experiments on image generation and GAN inversion demonstrate the superiority of our method.
[]
[ { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "Image2StyleGAN: How to embed images into the StyleGAN latent space", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Guozhong An" ], "title": "The effects of adding noise during backpropagation training on a generalization performance", "venue": "Neural computation,", "year": 1996 }, { "authors": [ "Martin Arjovsky", "Léon Bottou" ], "title": "Towards principled methods for training generative adversarial networks. arxiv e-prints, art", "venue": "arXiv preprint arXiv:1701.04862,", "year": 2017 }, { "authors": [ "Chaim Baskin", "Natan Liss", "Yoav Chai", "Evgenii Zheltonozhskii", "Eli Schwartz", "Raja Giryes", "Avi Mendelson", "Alexander M Bronstein" ], "title": "Nice: Noise injection and clamping estimation for neural network quantization", "venue": "arXiv preprint arXiv:1810.00162,", "year": 2018 }, { "authors": [ "Christopher M. Bishop" ], "title": "Training with noise is equivalent to tikhonov regularization", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large-scale GAN training for high fidelity natural image synthesis, 2018", "venue": null, "year": 2018 }, { "authors": [ "Luis A Caffarelli" ], "title": "The regularity of mappings with a convex potential", "venue": "Journal of the American Mathematical Society,", "year": 1992 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Xudong Li" ], "title": "Noisy differentiable architecture search", "venue": "arXiv preprint arXiv:2005.03566,", "year": 2020 }, { "authors": [ "Klaus Deimling" ], "title": "Nonlinear functional analysis", "venue": "Courier Corporation,", "year": 2010 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial networks, 2014a", "venue": null, "year": 2014 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprintarXiv:1412.6572,", "year": 2014 }, { "authors": [ "Zhezhi He", "Adnan Siraj Rakin", "Deliang Fan" ], "title": "Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Geoffrey E. Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R. Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Roger A. Horn", "Charles R. 
Johnson" ], "title": "Matrix Analysis", "venue": null, "year": 2013 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "On stabilizing generative adversarial training with noise", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of StyleGAN", "venue": "arXiv preprint arXiv:1912.04958,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Na Lei", "Yang Guo", "Dongsheng An", "Xin Qi", "Zhongxuan Luo", "Shing-Tung Yau", "Xianfeng Gu" ], "title": "Mode collapse and regularity of optimal transportation", "venue": "maps. arXiv preprint arXiv:1902.02934,", "year": 2019 }, { "authors": [ "Na Lei", "Kehua Su", "Li Cui", "Shing-Tung Yau", "Xianfeng David Gu" ], "title": "A geometric view of optimal transportation and generative model", "venue": "Computer Aided Geometric Design,", "year": 2019 }, { "authors": [ "Zhang Ling", "Zhang Bo" ], "title": "Theory of fuzzy quotient space (methods of fuzzy granular computing)", "venue": null, "year": 2003 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture", "venue": "search. International Conference of Representation Learning (ICLR),", "year": 2019 }, { "authors": [ "V Murali" ], "title": "Fuzzy equivalence relations", "venue": "Fuzzy sets and systems,", "year": 1989 }, { "authors": [ "Hyeonwoo Noh", "Tackgeun You", "Jonghwan Mun", "Bohyung Han" ], "title": "Regularizing deep neural networks by noise: Its interpretation and optimization", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Jordi Recasens" ], "title": "Indistinguishability operators: Modelling fuzzy equalities and fuzzy equivalence relations, volume 260", "venue": "Springer Science & Business Media,", "year": 2010 }, { "authors": [ "Walter Rudin" ], "title": "Principles of mathematical analysis, volume 3. McGraw-hill", "venue": "New York,", "year": 1964 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Gilbert Strang" ], "title": "Introduction to linear algebra, volume 3. 
Wellesley-Cambridge", "venue": null, "year": 1993 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Ling Zhang", "Bo Zhang" ], "title": "Fuzzy reasoning model under quotient space structure", "venue": "Information Sciences,", "year": 2005 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Deli Zhao", "Xiaoou Tang" ], "title": "Cyclizing clusters via zeta function of a graph", "venue": "In Advances in Neural Information Processing Systems,", "year": 1953 }, { "authors": [ "Dengyong Zhou", "Jason Weston", "Arthur Gretton", "Olivier Bousquet", "Bernhard Schölkopf" ], "title": "Ranking on data manifolds. In Advances in neural information processing", "venue": null, "year": 2004 }, { "authors": [ "R R" ], "title": "Lipschitz with Lipschitz constant L. We show that f(R) has measure zero in R. As R is a zero measure subset of R, by the Kirszbraun theorem (Deimling, 2010), f has an extension to a Lipschitz function of the same Lipschitz constant on R. For convenience, we still denote the extension as f . Then the problem reduces to proving that f maps zero measure set to zero measure", "venue": null, "year": 2010 }, { "authors": [ "F latent spaceW" ], "title": "IMAGE ENCODING AND GAN INVERSION From a mathematical perspective, a well behaved generator should be easily invertible", "venue": "We adopt the methods in Image2StyleGAN (Abdal et al.,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Noise injection is usually applied as regularization to cope with overfitting or facilitate generalization in neural networks (Bishop, 1995; An, 1996). The effectiveness of this simple technique has also been proved in various tasks in deep learning, such as learning deep architectures (Hinton et al., 2012; Srivastava et al., 2014; Noh et al., 2017), defending adversarial attacks (He et al., 2019), facilitating stability of differentiable architecture search with reinforcement learning (Liu et al., 2019; Chu et al., 2020), and quantizing neural networks (Baskin et al., 2018). In recent years, noise injection1 has attracted more and more attention in the community of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a). Extensive research shows that it helps stabilize the training procedure (Arjovsky & Bottou, 2017; Jenni & Favaro, 2019) and generate images of high fidelity (Karras et al., 2019a;b; Brock et al., 2018). In practice, Fig. 1 shows significant improvement in hair quality due to noise injection.\nParticularly, noise injection in StyleGAN (Karras et al., 2019a;b) has shown the amazing capability of helping generate sharp details in images, shedding new light on obtaining high-quality photo-realistic results using GANs. Therefore, studying the underlying principle of noise injection in GANs is an important theoretical work of understanding GAN algorithms. In this paper, we propose a theoretical framework to explain and improve the effectiveness of noise injection in GANs. Our framework is motivated from a geometric perspective and also combined with the results of optimal transportation problem in GANs (Lei et al., 2019a;b). Our contributions are listed as follows:\n• We show that the existing GAN architectures, including Wasserstein GANs (Arjovsky et al., 2017), may suffer from adversarial dimension trap, which severely penalizes the property of generator;\n• Based on our theory, we attempt to explain the properties that noise injection is applied in the related literatures;\n• Based on our theory, we propose a more proper form for noise injection in GANs, which can overcome the adversarial dimension trap. Experiments on the state-of-the-art GAN architecture, StyleGAN2 (Karras et al., 2019b), demonstrate the superiority of our new method compared with original noise injection used in StyleGAN2.\n1It suffices to note that noise injection here is totally different from the research field of adversarial attacks raised in Goodfellow et al. (2014b).\nTo the best of our knowledge, this is the first work that theoretically draws the geometric picture of noise injection in GANs." }, { "heading": "2 RELATED WORKS", "text": "The main drawbacks of GANs are unstable training and mode collapse. Arjovsky et al. (Arjovsky & Bottou, 2017) theoretically analyze that noise injection directly to the image space can help smooth the distribution so as to stabilize the training procedure. The authors of Distribution-Filtering GAN (DFGAN) (Jenni & Favaro, 2019) then put this idea into practice and prove that this technique will not influence the global optimality of the real data distribution. However, as the authors pointed out in (Arjovsky & Bottou, 2017), this method depends on the amount of noise. Actually, our method of noise injection is essentially different from these ones. 
Besides, they do not provide a theoretical vision of explaining the interactions between injected noises and features.\nBigGAN (Brock et al., 2018) splits input latent vectors into one chunk per layer and projects each chunk to the gains and biases of batch normalization in each layer. They claim that this design allows direct influence on features at different resolutions and levels of hierarchy. StyleGAN (Karras et al., 2019a) and StyleGAN2 (Karras et al., 2019b) adopt a slightly different view, where noise injection is introduced to enhance randomness for multi-scale stochastic variations. Different from the settings in BigGAN, they inject extra noise independent of latent inputs into different layers of the network without projection. Our theoretical analysis is mainly motivated by the success of noise injection used in StyleGAN (Karras et al., 2019a). Our proposed framework reveals that noise injection in StyleGAN is a kind of fuzzy reparameterization in Euclidean spaces, and we extends it into generic manifolds (section 4.3)." }, { "heading": "3 THE INTRINSIC DRAWBACKS OF TRADITIONAL GANS", "text": "" }, { "heading": "3.1 OPTIMAL TRANSPORTATION AND DISCONTINUOUS GENERATOR", "text": "Traditional GANs with Wasserstein distance are equivalent to the optimal transportation problem, where the optimal generator is the optimal transportation map. However, there is rare chance for the optimal transportation map to be continuous, unless the support of Brenier potential is convex (Caffarelli, 1992). Considering that the Brenier potential of Wasserstein GAN is determined by the real data distribution and the inverse map of the generator, it is highly unlikely that its support is convex. This means that the optimal generator will be discontinuous, which is a fatal limitation to the capacity of GANs. Based on that, Lei et al. (Lei et al., 2019a) further point out that traditional GANs will hardly converge or converge to one continuous branch of the target mapping, thus leading to mode collapse. They then propose to find the continuous Brenier potential instead of the discontinuous\ntransportation map. In the next paragraph, we show that this solution may not totally overcome the problem that traditional GANs encounter due to structural limitations of neural networks. Besides, it suffices to note that their analysis is built upon the Wasserstein distance, and may not be directly applied to the Jenson-Shannon divergence or KL divergence. We refer the readers to Lei et al. (2019a); Caffarelli (1992) for more detailed analysis." }, { "heading": "3.2 ADVERSARIAL DIMENSION TRAP", "text": "In addition to the above discontinuity problem, another drawback is the relatively low dimension of latent spaces in GANs compared with the high variance of details in real-world data. Taking face images as an example, the hair, freckles, and wrinkles have extremely high degree of freedom, which make traditional GANs often fail to capture them. The repetitive application of non-invertible CNN blocks makes the situation even worse. Non-invertible CNN, which is a singular linear transformation, will drop the intrinsic dimensions of feature manifolds (Strang et al., 1993). So during the feedforward procedure of the generator, the dimensions of feature spaces will keep being dropped. Then it will have a high chance that the valid dimension of the input latent space is lower than that of the real data. 
The relatively lower dimension of the input latent space will then force the dimension of the support with respect to the distribution of generated images lower than that of the real data, as no smooth mappings increase the dimension. However, the discriminator, which measures the distance of these two distributions, will keep encouraging the generator to increase the dimension up to the same as the true data. This contradictory functionality, as we show in the theorem bellow, incurs severe punishment on the smoothness and invertibility of the generative model, which we refer as the adversarial dimension trap.\nTheorem 1. 2 For a deterministic GAN model and generator G : Z → X , if the dimension of the input latent Z is lower than that of data manifold X , then at least one of the two cases must stand:\n1. the generator cannot be Lipschitz; 2. the generator fails to capture the data distribution and is unable to perform inversion.\nNamely, for an arbitrary point x ∈ X , the possibility of G−1(x) = ∅ is 1. The above theorem stands for a wide range of GAN loss functions, including Wasserstein divergence, Jenson-Shannon divergence, and other KL-divergence based losses. Notice that this theorem implies much worse situation than it states. For any open sphere B in the data manifold X , the generator restricted in the pre-image of B also follows this theorem, which suggests bad properties of nearly every local neighborhood. This also suggests that the above consequences of Theorem 1 may both stand. As in some subsets, the generator may successfully capture the data distribution, while in some others, the generator may fail to do so.\nThe first issue in section 3.1 can be addressed by not learning the generator directly with continuous neural network components. We will show how our method addresses the second issue." }, { "heading": "4 FUZZY REPARAMETERIZATION", "text": "The generator G in the traditional GAN is a composite of sequential non-linear feature mappings, which can be denoted as G(z) = fk ◦ fk−1 ◦ · · · ◦ f1(z), where z ∼ N (0, 1) is the standard Gaussian. Each feature mapping, which is typically a single layer convolutional neural network (CNN) plus non-linear activations, carries out a certain purpose such as extracting multi-scale patterns, upsampling, or merging multi-head information. The whole network is then a deterministic mapping from the latent space Z to the image space X . We propose to replace f i(x), 1 ≤ i ≤ k, with\ngi(x) = µi(x) + σi(x) , ∼ N (0, 1), x ∈ gi−1 ◦ · · · ◦ g1(Z). (1)\nWe call it as Fuzzy Reparameterization (FR) as it in fact learns fuzzy equivalence relation of the original features, and uses reparameterization to model the high-dimensional feature manifolds. We believe that this is the proper form of generalization of noise injection in StlyeGAN, and will show the reasons and benefits in the following sub-sections.\n2As the common practice in the manifold learning community, our theorems and discussions are based on Riemannian manifolds. Proofs to all the theorems are included in the supplementary material.\nIt is not hard to see that our proposed method can be viewed as the extension of the reparameterization trick in VAEs (Kingma & Welling, 2013). While the reparameterization trick in VAEs serves to a differentiable solution to learn through random variables and is only applied in the latent space, our method is a type of deep noise injection in feature maps of each layer to correct the defect in GAN architectures. 
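To make equation (1) concrete, a minimal PyTorch-style sketch of a single FR layer is given below; mu_net and sigma_net are placeholder sub-networks (the concrete choice of σ used in this paper is derived in Section 4.3), and the noise is resampled independently at every call.

```python
import torch
import torch.nn as nn

class FuzzyReparamLayer(nn.Module):
    """Sketch of g_i(x) = mu_i(x) + sigma_i(x) * eps with eps ~ N(0, 1)."""

    def __init__(self, mu_net, sigma_net):
        super().__init__()
        self.mu_net = mu_net        # deterministic feature map (e.g., a conv block)
        self.sigma_net = sigma_net  # models the local geometry around mu(x)

    def forward(self, x):
        mu = self.mu_net(x)
        sigma = self.sigma_net(mu)      # assumed to depend on mu, as in Section 4.3
        eps = torch.randn_like(mu)      # fresh standard Gaussian noise per call
        return mu + sigma * eps
```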
Therefore, the purposes of using reparameterization in these two scenarios are different, thus leading to thoroughly different theories that are presented in the next sub-section." }, { "heading": "4.1 HANDLING ADVERSARIAL DIMENSION TRAP WITH NOISE INJECTION", "text": "As Sard’s theorem tells us (Petersen et al., 2006), the key to solve the adversarial dimension trap is to avoid mapping low-dimensional feature spaces into high-dimensional ones, which looks like a pyramid structure in the generator. However, we really need the pyramid structure in practice because the final output dimension of generated images is much larger than that of the input space. So the solution could be that, instead of mapping into the full feature spaces, we choose to map only onto the skeleton of the feature spaces and use random noise to fill up the remaining space. For a compact manifold, it is easy to find that the intrinsic dimension of the skeleton set can be arbitrarily low by applying Heine–Borel theorem to the skeleton (Rudin et al., 1964). By this way, the model can escape from the adversarial dimension trap.\nNow we develop the idea in detail. The whole idea is based on approximating the manifold by the tangent polyhedron. Assume that the feature spaceM is a Riemannian manifold embedded in Rm. Then for any point µ ∈ M, the local geometry induces a coordinate transformation from a small neighborhood of µ inM to its projection onto the tangent space TµM at µ by the following theorem. Theorem 2. Given Riemannian manifoldM embedded in Rm, for any point µ ∈M, we let TµM denote the tangent space at µ. Then the exponential map Expµ induces a smooth diffeomorphism from a Euclidean ball BTµM(0, r) centered at O to a geodesic ball BM(µ, r) centered at µ inM. Thus {Exp−1µ , BM(µ, r), BTµM(0, r)} forms a local coordinate system ofM in BM(µ, r), which we call the normal coordinates. Thus we have\nBM(µ, r) = Expµ(BTµM(0, r)) = {τ : τ = Expµ(v), v ∈ BTµM(0, r)}. (2)\nTheorem 3. The differential of Expµ at the origin of TµM is identity I . Thus Expµ can be approximated by Expµ(v) = µ+ Iv + o(‖v‖2). (3) Thus, if r in equation (2) is small enough, we can approximate BM(µ, r) by\nBM(µ, r) ≈ µ+ IBTµM(0, r) = {τ : τ = µ+ Iv, v ∈ BTµM(0, r)}. (4)\nConsidering that TµM is an affine subspace of Rm, the coordinates on BTµM(0, r) admit an affine transformation into the coordinates on Rm. Thus equation (4) can be written as\nBM(µ, r) ≈ µ+ IBTµM(0, r) = {τ : τ = µ+ rT (µ) , ∈ B(0, 1)}. (5)\nWe remind the readers that the linear component matrix T (µ) differs at different µ ∈ M and is decided by the local geometry near µ. In the above formula, µ defines the center point and rT (µ) defines the shape of the approximated neighbor. So we call them a representative pair of BM(µ, r).\nPicking up a series of such representative pairs, which we refer as the skeleton set, we can construct a tangent polyhedron H ofM. Thus instead of trying to learn the feature manifold directly, we adopt a two-stage procedure. We first learn a map f : x 7→ [µ(x), σ(x)] (σ(x) ≡ rT (µ(x))) onto the skeleton set, then we use noise injection g : x 7→ µ(x) + σ(x) , ∼ U(0, 1) (uniform distribution) to fill up the flesh of the feature space as shown in Figure 2.\nHowever, the real world data often include fuzzy semantics. Even long range features could share some structural relations in common. It is unwise to model it with unsmooth architectures such as locally bounded sphere and uniform distribution. 
Thus we borrow the idea from fuzzy topology (Ling & Bo, 2003; Zhang & Zhang, 2005; Murali, 1989; Recasens, 2010) which is designed to address this issue. It is well known that for any distance metrics d(·, ·), e−d(µ,·) admits a fuzzy equivalence relation for points near µ, which is similar with the density of Gaussian. The fuzzy equivalence relation can be viewed as a suitable smooth alternative to the sphere neighborhood BM(µ, r). Thus we replace the uniform distribution with unclipped Gaussian3. Under this settings, the first stage mapping in fact learns a fuzzy equivalence relation, while the second stage is a reparameterization technique. Notice that the skeleton set can have arbitrarily low dimension by Heine–Borel theorem. So the first-stage map can be smooth and well conditioned. For the second stage, we can show that it possesses a smooth property in expectation by the following theorem.\nTheorem 4. Given f : x 7→ [µ(x), σ(x)]T , f is locally Lipschitz and ‖σ‖∞ = o(1). Define g(x) ≡ µ(x) + σ(x) , ∼ N (0, 1) (standard Gaussian). Then for any bounded set U , ∃L > 0, we have E[‖g(x) − g(y)‖2] ≤ L‖x − y‖2 + o(1),∀x, y ∈ U . Namely, the principal component of g is locally Lipschitz in expectation. Specifically, if the definition domain of f is bounded, then the principal component of g is globally Lipschitz in expectation." }, { "heading": "4.2 PROPERTIES OF NOISE INJECTION", "text": "As we have discussed, traditional GANs face two challenges: the discontinuous optimal generator and the adversarial dimension trap. Both of the two challenges will lead to an unsmooth generator. Theorem 1 also implies an unstable training procedure because the gradient explosion that may occur on the generator. Besides, the dimension reduction in GAN will make it hard to fit high-variance details as information keeps compressed along channels in the generator. With noise injection in the network of the generator, however, we can theoretically overcome such problems if the representative pairs are constructed properly to capture the local geometry. In this case, our model does not need to fit the discontinuous optimal transportation map, nor the image manifold with higher dimension than that the network architecture can handle. Thus the training procedure will not encourage the unsmooth generator, and can proceed more stably. Also, the extra noise can compensate the loss of information compression so as to capture high-variance details, which has been discussed and illustrated in StyleGAN (Karras et al., 2019a). We will evaluate the performance of our method from these aspects in section 5.\n4.3 CHOICE OF µ(x) AND σ(x)\nAs µ stands for a particular point in the feature space, we simply model it by the traditional deep CNN architectures. σ(x) is designed to fit the local geometry of µ(x). According to our theory, the local geometry should only admit minor differences from µ(x). Thus we believe that σ(x) should be determined by the spatial and semantic information contained in µ(x), and should characterize the local variations of the spatial and semantic information. The deviation of pixel-wise sum along channels of feature maps in StyleGAN2 highlights the semantic variations like hair, parts of background, and silhouettes, as the standard deviation map over sampling instances shows in Fig. 1. This observation suggests that the sum along channels identifies the local semantics we expect to reveal. Thus it should be directly connected to σ(x) we are pursuing here. 
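The observation behind Fig. 1 can be reproduced with a short diagnostic of the following kind; the interface generator.get_features(z, layer_name) and the assumption that the injected noise is redrawn on every call are hypothetical and used only for illustration.

```python
import torch

@torch.no_grad()
def channel_sum_std_map(generator, z, layer_name, n_samples=64):
    """For a fixed latent z, resample the injected noise n_samples times, sum the chosen
    feature map over channels, and return the per-pixel standard deviation across samples."""
    sums = []
    for _ in range(n_samples):
        feat = generator.get_features(z, layer_name)   # (C, H, W); noise redrawn internally
        sums.append(feat.sum(dim=0))                   # channel-wise sum, shape (H, W)
    sums = torch.stack(sums, dim=0)                    # (n_samples, H, W)
    return sums.std(dim=0)                             # large values mark noise-driven semantics
```

Regions with large values in this map (hair, parts of the background, silhouettes) are exactly the ones whose variation σ(x) is meant to capture.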
For a given feature map µ = DCNN(x) from the deep CNN, which is a specific point in the feature manifold, the sum along its channels is\nµ̃ijk = c∑ i=1 µijk, (6)\n3A detailed analysis about why unclipped Gaussian should be applied is offered in the supplementary material.\nwhere i enumerates all the c feature maps of µ, while j, k enumerate the spatial footprint of µ in its h rows and w columns, respectively. The resulting µ̃ is then a spatial semantic identifier, whose variation corresponds to the local semantic variation. We then normalize µ̃ to obtain a spatial semantic coefficient matrix s with\nmean(µ̃) = 1\nh× w h∑ j=1 w∑ k=1 µ̃jk,\ns = µ̃−mean(µ̃), max(|s|) = max\n1≤j≤h,1≤k≤w |sjk|,\ns = s\nmax(|s|) .\n(7)\nRecall that the standard deviation of s over sampling instances highlights the local variance in semantics. Thus s can be decomposed into two independent components: sm that corresponds to the main content of the output image, which is almost invariant under changes of injected noise; sv that is associated with the variance that is induced by the injected noise, and is nearly orthogonal to the main content. We assume that this decomposition can be attained by an affine transformation on s such that sd = A ∗ s+ b = sm + sv, sv ∗ µ ≈ 0, (8) where ∗ denotes element-wise matrix multiplication, and 0 denotes the matrix whose all elements are zeros. To avoid numerical instability, we add 1 whose all elements are ones to the above decomposition, such that its condition number will not get exploded,\ns′ = αsd + (1− α)1, σ = s′\n‖s′‖2 .\n(9)\nThe regularized sm component is then used to enhance the main content in µ, and the regularized sv component is then used to guide the variance of injected noise. The final output o is then calculated as\no = rσ ∗ µ+ rσ ∗ , ∼ N (0, 1). (10) In the above procedure, A, b, r, and α are learnable parameters. Note that in the last equation, we do not need to decompose s′ into sv and sm, as sv is designed to be nearly orthogonal to µ, and sm is nearly invariant. Thus σ ∗ µ will automatically drop the sv component, and σ ∗ amounts to adding an invariant bias to the variance of injected noise. There are alternative forms for µ and σ with respect to various GAN architectures. However, modeling µ by deep CNNs and deriving σ through the spatial and semantic information of µ are universal for GANs, as they comply with our theorems. We further conduct ablation study to verify the effectiveness of the above procedure. The related results can be found in the supplementary material.\nUsing our formulation, noise injection in StyleGAN2 can be written as follows:\nµ = DCNN(x), o = µ+ r ∗ , ∼ N (0, 1), (11) where r is a learnable scalar parameter. This can be viewed as a special case of our method, where T (µ) in (5) is set to identity. Under this settings, the local geometry is assumed to be everywhere identical among the feature manifold, which suggests a globally Euclidean structure. While our theory supports this simplification and specialization, our choice of µ(x) and σ(x) can suit broader and more usual occasions, where the feature manifolds are non-Euclidean. We denote this fashion of noise injection as additive noise injection, and will extensively study its performance compared with our choice in the following section." }, { "heading": "5 EXPERIMENT", "text": "We conduct experiments on benchmark datasets including FFHQ faces, LSUN objects, and CIFAR-10. 
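Before turning to the individual setups, the injection procedure of Section 4.3 (equations (6)–(10)) used throughout these experiments can be summarized by the following hedged sketch. The per-sample normalization, the spatial shape of the learnable affine parameters A and b, and the single-channel shape of the injected noise are assumptions of this sketch rather than confirmed implementation details.

```python
import torch
import torch.nn as nn

class FRNoiseInjection(nn.Module):
    """Hedged sketch of equations (6)-(10) for a feature map mu of shape (B, C, H, W)."""

    def __init__(self, height, width):
        super().__init__()
        self.A = nn.Parameter(torch.ones(height, width))   # elementwise affine, eq. (8)
        self.b = nn.Parameter(torch.zeros(height, width))
        self.r = nn.Parameter(torch.ones(1))                # overall gain, eq. (10)
        self.alpha = nn.Parameter(torch.full((1,), 0.5))    # mixing weight, eq. (9);
                                                            # the [0, 1] constraint is not enforced here

    def forward(self, mu, eps=1e-8):
        mu_tilde = mu.sum(dim=1)                                 # eq. (6): sum over channels, (B, H, W)
        s = mu_tilde - mu_tilde.mean(dim=(1, 2), keepdim=True)   # eq. (7): zero mean ...
        s = s / (s.abs().amax(dim=(1, 2), keepdim=True) + eps)   # ... and max-normalization
        s_d = self.A * s + self.b                                # eq. (8): elementwise affine
        s_prime = self.alpha * s_d + (1.0 - self.alpha)          # eq. (9): add the all-ones matrix
        sigma = s_prime / (s_prime.flatten(1).norm(dim=1).view(-1, 1, 1) + eps)
        sigma = sigma.unsqueeze(1)                               # broadcast over channels, (B, 1, H, W)
        noise = torch.randn(mu.shape[0], 1, mu.shape[2], mu.shape[3],
                            device=mu.device, dtype=mu.dtype)    # one noise value per pixel (assumed)
        return self.r * sigma * mu + self.r * sigma * noise      # eq. (10)
```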
The GAN models we use are the baseline DCGAN (Radford et al., 2015) (originally without noise injection) and the state-of-the-art StyleGAN2 (Karras et al., 2019b) (originally with additive noise injection). For StyleGAN2, we use config-e in the original paper due to that config-e achieves the best performance with respect to Path Perceptual Length (PPL) score. Besides, we apply the experimental settings from StyleGAN2.\nImage synthesis. PPL (Zhang et al., 2018) has been proven an effective metric for measuring structural consistency of generated images (Karras et al., 2019b). Considering its similarity to the expectation of the Lipschitz constant of the generator, it can also be viewed as a quantification of the smoothness of the generator. The path length regularizer is proposed in StyleGAN2 to improve generated image quality by explicitly regularizing the Jacobian of the generator with respect to the intermediate latent space. We first compare the noise injection methods with the bald StyleGAN2, which remove the additive noise injection and path length regularizer in StyleGAN2. As shown in Table 1, we can find that all types of noise injection significantly improve the PPL scores. It is worth noting that our method without path length regularizer can achieve comparable performance against the standard StyleGAN2 on the FFHQ dataset, and the performance can be further improved if combined with path length regularizer. Considering the extra GPU memory consuming of path length regularizer in training, we think that our method offers a computation-friendly alternative to StyleGAN2 as we observe smaller GPU memory occupation of our method throughout all the experiments. Another benefit is that our method accelerates the convergence to the optimal FID scores, as illustrated in Figure 4. This superior convergence can be explained with our theorem. The underlying reason is that our method offers an architecture that is more consistent with the intrinsic geometry of the feature space. Thus it is easier for the network to fit.\nFor the LSUN-Church dataset, we observe an obvious improvement in FID scores compared with StyleGAN2. We believe that this is because the LSUN-Church data are scene images and contain various semantics of multiple objects, which are hard to fit for the original StyleGAN2 that is more suitable for single object synthesis. So our FR architecture offers more degrees of freedom to the generator to fit the true distribution of the dataset. In all cases, our method is superior to StyleGAN2 in both PPL and FID scores. This proves that our noise injection method is more powerful than the one used in StyleGAN2. For DCGAN, as it does not possess the intermediate latent space, we cannot facilitate it with path length regularizer. So we only compare the additive noise injection with our FR method. Through all the cases we can find that our method achieves the best performance in PPL and FID scores.\nWe also study whether our choice for µ(x) and σ(x) can be applied to broader occasions. We further conduct experiments on a cat dataset which consists of 100 thousand selected images from 800 thousand LSUN-Cat images by PageRank algorithm (Zhou et al., 2004). For DCGAN, we conduct extra experiments on CIFAR-10 to test whether our method could succeed in multi-class image synthesis. The results are reported in Figure 5. 
We can see that our method still outperforms the compared methods in PPL scores and the FID scores are comparable, indicating that the proposed noise injection is more favorable of preserving structural consistency of generated images with real ones.\nNumerical stability. As we have analyzed before, noise injection should be able to improve the numerical stability of GAN models. To evaluate it, we examine the condition number of different GAN architectures. The condition number of a given function f is defines as Horn & Johnson (2013)\nCond(f) = lim δ→0 sup ‖∆x‖≤δ ‖f(x)− f(x+ ∆x)‖/‖f(x)‖ ‖∆x‖/‖x‖ . (12)\nIt measures how sensitive a function is to changes or errors in the input. A function with a high condition number is said to be ill-conditioned. Considering the numerical infeasibility of the sup operator in the definition of condition number, we resort to the following alternative approach. We first sample a batch of 50000 pairs of (Input, Perturbation) from the input distribution and the perturbation ∆x ∼ N (0, 1e-4), and then compute the corresponding condition numbers. We compute the mean value and the mean value of the largest 1000 values of these 50000 condition numbers as Mean Condition (MC) and Top Thousand Mean Condition (TTMC) respectively to evaluate the condition of GAN models. We report the results in Table 2, where we can find that noise injection significantly improves the condition of GAN models, and our proposed method dominates the performance.\nGAN inversion. StyleGAN2 makes use of a latent style space that is capable of enabling controllable image modifications. This characteristic motivates us to study the image embedding capability of our method via GAN inversion algorithms (Abdal et al., 2019) as it may help further leverage the potential of GAN models. From the experiments, we find that the StyleGAN2 model is prone to work well for full-face, non-blocking human face images. For this type of images, we observe comparable performance for all the GAN architectures. We think that this is because those images are close to the ‘mean’ face of FFHQ dataset (Karras et al., 2019a), thus easy to learn for the StyleGAN-based models. For faces of large pose or partially occluded ones, the capacity of compared models differs significantly. Noise injection methods outperform the bald StyleGAN2 by a large margin, and our method achieves the best performance. The detailed implementation and results are reported in the supplementary material." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a theoretical framework to explain the effect of noise injection technique in GANs. We prove that the generator can easily encounter difficulty of unsmoothness, and noise injection is an effective approach to addressing this issue. Based on our theoretical framework, we also derive a more proper formulation for noise injection. We conduct experiments on various datasets to confirm its validity. Despite the superiority compared with the existing methods, however, it is still unclear whether our formulation is optimal and universal for different networks. In future work, we will further investigate the realization of noise injection, and attempt to find more powerful way to characterize local geometries of feature spaces." }, { "heading": "A PROOF TO THEOREMS", "text": "A.1 THEOREM 1\nProof. Denote the dimensions of G(Z) and X as dG and dX , respectively. 
There are two possible cases for G: dG is lower than dX , or dG is higher than or equal to dX .\nFor the first case, a direct consequence is that, for almost all points in X , there are no pre-images under G. This means that for an arbitrary point x ∈ X , the possibility of G−1(x) = ∅ is 1, as {x ∈ X : G−1(x) 6= ∅} ⊂ G(Z) ∩ X , which is a zero measure set in X . This also implies that the generator is unable to perform inversion. Another consequence is that, the generated distribution Pg can never get aligned with real data distribution Pr. Namely, the distance between Pr and Pg cannot be zero for arbitrary distance metrics. For the KL divergence, the distance will even approach infinity.\nFor the second case, dG ≥ dX > dZ . We simply show that a Lipschitz-continuous function cannot map zero measure set into positive measure set. Specifically, the image of low dimensional space of a Lipschitz-continuous function has measure zero. Thus if dG ≥ dX , G cannot be Lipschitz. Now we prove our claim.\nSuppose that f : Rn → Rm, n < m, f is Lipschitz with Lipschitz constant L. We show that f(Rn) has measure zero in Rm. As Rn is a zero measure subset of Rm, by the Kirszbraun theorem (Deimling, 2010), f has an extension to a Lipschitz function of the same Lipschitz constant on Rm. For convenience, we still denote the extension as f . Then the problem reduces to proving that f maps zero measure set to zero measure set. For every > 0, we can find countable union of balls {Bk}k of radius rk such that Rn ⊂ ∪kBk and ∑ km(Bk) < in Rm, where m(·) is the Lebesgue measure in Rm. But f(Bk) is contained in a ball with radius Lrk. Thus we have m(f(Rn)) ≤ Lm ∑ km(Bk) < L\nm , which means that it is a zero measure set in Rm. For the mapping between manifolds, using the chart system can turn it into the case we analyze above, which completes our proof.\nWe want to remind the readers that, even if the generator suits one of the cases in Theorem 1, the other case can still occur. For example, G could succeed in capturing the distribution of certain parts of the real data, while it may fail in the other parts. Then for the pre-image of those successfully captured data, the generator will not have finite Lipschitz constant.\nA.2 THEOREMS 2 & 3\nTheorems 2 & 3 are classical conclusions in Riemannian manifold. We refer readers to section 5.5 of the book written by Petersen et al. (2006) for detailed proofs and illustration.\nA.3 THEOREM 4\nProof.\nE[‖g(x)− g(y)‖2] ≤ ‖µ(x)− µ(y)‖2 + E[‖σ(x) − σ(y)δ‖2] (13) ≤ Lµ‖x− y‖2 + 2C‖σ‖∞ ≤ Lµ‖x− y‖2 + o(1), (14)\nwhere C is a constant related to the dimension of the image space of σ and Lµ is the Lipschitz constant of µ." }, { "heading": "B WHY GAUSSIAN DISTRIBUTION?", "text": "We first introduce the notion of fuzzy equivalence relations. Definition 1. A t-norm is a function T : [0, 1]×[0, 1]→ [0, 1] which satisfies the following properties:\n1. Commutativity: T (a, b) = T (b, a).\n2. Monotonicity: T (a, b) ≤ T (c, d), if a ≤ c and b ≤ d.\n3. Associativity: T (a, T (b, c)) = T (T (a, b), c).\n4. The number 1 acts as identity element: T (a, 1) = a. Definition 2. Given a t-norm T , a T -equivalence relation on a set X is a fuzzy relation E on X and satisfies the following conditions:\n1. E(x, x) = 1,∀x ∈ X (Reflexivity).\n2. E(x, y) = E(y, x),∀x, y ∈ X (Symmetry).\n3. 
T (E(x, y), E(y, z)) ≤ E(x, z)∀x, y, z ∈ X (T -transitivity).\nThen it is easy to check that T (x, y) = xy is a t-norm, and E(x, y) = e−d(x,y) is a T -equivalence for any distance metric d on X , as\nT (E(x, y), E(y, z)) = e−(d(x,y)+d(y,z)) ≤ e−d(x,z) = E(x, z).\nConsidering that we want to contain the fuzzy semantics of real world data in our local geometries of feature manifolds, a natural solution will be that we sample points from the local neighborhood of µ with different densities on behalf of different strength of semantic relations with µ. Points with stronger semantic relations will have larger densities to be sampled. A good framework to model this process is the fuzzy equivalence relations we mention above, where the degrees of membership E are used as the sampling density. However, our expansion of the exponential map Expµ carries an error term of o(‖v‖2). We certainly do not want the local error to be out of control, and we also wish to constrain the sampling locally. Thus we accelerate the decrease of density when points depart from the center µ, and constrain the integral of E to be identity, which turns E to the density of standard Gaussian." }, { "heading": "C DATASETS", "text": "FFHQ Flickr-Faces-HQ (FFHQ) (Karras et al., 2019a) is a high-quality image dataset of human faces, originally created as a benchmark data for generative adversarial networks (GANs). The dataset consists of 70,000 high-quality PNG images and contains considerable variations in terms of age, pose, expression, hair style, ethnicity and image backgrounds. It also covers diverse accessories such as eyeglasses, sunglasses, hats, etc.\nLSUN-Church and Cat-Selected LSUN-Church is the church outdoor category of LSUN dataset (Yu et al., 2015), which consists of 126 thousand church images of various styles. CatSelected contains 100 thousand cat images selected by ranking algorithm (Zhou et al., 2004) from the LSUN cat category. The plausibility of using PageRank to rank data was analyzed in Zhou et al. (2004). We also used the algorithm presented in Zhao & Tang (2009) to construct the graph from the cat data.\nCIFAR-10 The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 60,000 images of size 32x32. There are all 10 classes and 6000 images per class. There are 50,000 training images and 10,000 test images.\nD IMPLEMENTATION DETAILS\nD.1 MODELS\nWe illustrate the generator architectures of StyleGAN2 based methods in Figure 6. For all those models, the discriminators share the same architecture as the original StyleGAN2. The generator architecture of DCGAN based methods are illustrated in Figure 7. For all those models, the discriminators share the same architecture as the original DCGAN." }, { "heading": "E EXPERIMENT ENVIRONMENT", "text": "All experiments are carried out by TensorFlow 1.14 and Python 3.6 with CUDA Version 10.2 and NVIDIA-SMI 440.64.00. We basically build our code upon the framework of NVIDIA official StyleGAN2 code, which is available at https://github.com/NVlabs/stylegan2. We use a variety of servers to run the experiments as reported in Table 3.\nF IMAGE ENCODING AND GAN INVERSION\nFrom a mathematical perspective, a well behaved generator should be easily invertible. In the last section, we have shown that our method is well conditioned, which implies that it could be easily invertible. We adopt the methods in Image2StyleGAN (Abdal et al., 2019) to perform GAN inversion and compare the mean square error and perceptual loss on a manually collected dataset of 20 images. 
The images are shown in Figure 8 and the quantitative results are provided in Table 4. For our FR\nmethods, we further optimize the α parameter in Eq. 7 in section 4.3, which fine-tunes the local geometries of the network to suit the new images that might not be settled in the model. Considering that α is limited to [0, 1], we use (α ∗)t\n(α∗)t+(1−α∗)t to replace the original α and optimize t. The initial value of t is set to 1.0 and α∗ is constant with the same value as α in the converged FR models.\nDuring the experiments, we find that the StyleGAN2 model is prone to work well for full-face, non-blocking human face images. For this type of images (which we refer as regular case in Figure 9), we observe comparable performance for all the GAN architectures. We think that this is because those images are closed to the ‘mean’ face of FFHQ dataset (Karras et al., 2019a), thus easy to learn for the StyleGAN based models. For faces of large pose or partially blocked ones, the capacity of different models differs significantly. Noise injection methods outperform the bald StyleGAN2 by a large margin, and our method achieves the best performance." }, { "heading": "G ABLATION STUDY OF FR", "text": "In Tab. 5, we perform the ablation study of the proposed FR method on the FFHQ dataset. We test 5 different choices of FR implementation and compare their FID and PPL scores after convergence.\n1. No normalization: in this setting we remove the normalization of µ̃ in Eq. 7, and use the unnormalized µ̃ to replace s in the following equations. The network comes to a minimum FID of 23.77 after training on 1323 thousand images, and then quickly falls into mode collapse after that.\nThe zero PPL scores in ‘No normalization’ and ’No stabilization’ suggest that the generator output is invariant to small perturbations, which means mode collapse. We can find that the stabilization and normalization in the FR implementation in section 4.3 is necessary for the network to avoid numerical instability and mode collapse. The implementation of FR method reaches the best performance in PPL score and comparable performance against the ‘no decomposition’ method in FID score. As analyzed in StyleGAN (Karras et al., 2019a) and StyleGAN2 (Karras et al., 2019b), for high fidelity images, PPL is more convincing than the FID score in measuring the synthesis quality. Therefore, the FR implementation is the best among these methods." } ]
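The appendix above describes re-parameterizing the bounded coefficient α for GAN-inversion fine-tuning as (α*)^t / ((α*)^t + (1−α*)^t) and optimizing t instead of α. Below is a minimal, hypothetical PyTorch sketch of that re-parameterization; the constant value of α*, the placeholder objective, and the optimizer settings are illustrative assumptions, not the authors' code.

```python
import torch

# Assumed constant: alpha* taken from a converged FR model (any value in (0, 1)).
alpha_star = torch.tensor(0.7)

# t is the free variable being optimized; t = 1.0 recovers alpha = alpha*.
t = torch.tensor(1.0, requires_grad=True)

def reparameterized_alpha(t, alpha_star):
    # alpha(t) = (alpha*)^t / ((alpha*)^t + (1 - alpha*)^t), always inside (0, 1).
    num = alpha_star ** t
    den = alpha_star ** t + (1.0 - alpha_star) ** t
    return num / den

optimizer = torch.optim.Adam([t], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    alpha = reparameterized_alpha(t, alpha_star)
    # Placeholder objective: in the real setting this would be the inversion
    # reconstruction loss (MSE + perceptual) evaluated with the current alpha;
    # here we only illustrate that gradients flow into t through alpha.
    loss = (alpha - 0.5) ** 2
    loss.backward()
    optimizer.step()
```

The point of the substitution is that t is unconstrained while alpha(t) stays in (0, 1), so plain gradient descent can be used without projection.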
2020
null
SP:bdbb12951868ea0864f926192fdbe2e62ecdb0e3
[ "The authors proposed in this paper a supervised approach relying on given relative and quantitative attribute discrepancies. A UNet-like generator learns adversarially tends to generate realistic images while a \"ranker\" tends to predict the magnitude of the input parameter used to control the image manipulation. The controled parameter is defined implictly using images whose the discrepancy of the attribute of interest is known. This allows fine-grained manipulation of the attribute of interest. The results of the approach is illustrated on face datasets (CelebA-HQ and LFWA)." ]
We propose a new model that refines image-to-image translation via an adversarial ranking process. In particular, we simultaneously train two modules: a generator that translates an input image into the desired image with smooth, subtle changes with respect to some specific attributes; and a ranker that ranks rival preferences consisting of the input image and the desired image. Rival preferences refer to the adversarial ranking process: (1) the ranker perceives no difference between the desired image and the input image in terms of the desired attributes; (2) the generator fools the ranker into believing that the desired image changes the attributes over the input image as desired. Preferences over pairs of real images are introduced to guide the ranker to rank image pairs with respect to the attributes of interest only. With an effective ranker, the generator would “win” the adversarial game by producing high-quality images that present the desired changes in the attributes compared to the input image. The experiments demonstrate that our TRIP can generate high-fidelity images which exhibit smooth changes in the strength of the attributes.
[]
[ { "authors": [ "Yazeed Alharbi", "Peter Wonka" ], "title": "Disentangled image generation through structured noise injection", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yunbo Cao", "Jun Xu", "Tie-Yan Liu", "Hang Li", "Yalou Huang", "Hsiao-Wuen Hon" ], "title": "Adapting ranking svm to document retrieval", "venue": "In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval,", "year": 2006 }, { "authors": [ "Olivier Chapelle", "Alekh Agarwal", "Fabian H Sinz", "Bernhard Schölkopf" ], "title": "An analysis of inference with the universum", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Yu Deng", "Jiaolong Yang", "Dong Chen", "Fang Wen", "Xin Tong" ], "title": "Disentangled and controllable face image generation via 3d imitative-contrastive learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Zheng Ding", "Yifan Xu", "Weijian Xu", "Gaurav Parmar", "Yang Yang", "Max Welling", "Zhuowen Tu" ], "title": "Guided variational autoencoder for disentanglement learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Zhenliang He", "Wangmeng Zuo", "Meina Kan", "Shiguang Shan", "Xilin Chen" ], "title": "Attgan: Facial attribute editing by only changing what you want", "venue": "IEEE Transactions on Image Processing,", "year": 2019 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Taeksoo Kim", "Moonsu Cha", "Hyunsoo Kim", "Jung Kwon Lee", "Jiwon Kim" ], "title": "Learning to discover cross-domain relations with generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Ruho Kondo", "Keisuke Kawano", "Satoshi Koide", "Takuro Kutsuna" ], "title": "Flow-based image-to-image translation with feature disentanglement", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Guillaume Lample", "Neil Zeghidour", "Nicolas Usunier", "Antoine Bordes", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Fader networks: Manipulating images by sliding attributes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xiao Li", "Chenghua Lin", "Chaozheng Wang", "Frank Guerin" ], "title": "Latent space factorisation and manipulation via matrix subspace projection", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alexander H Liu", "Yen-Cheng Liu", "Yu-Ying Yeh", "Yu-Chiang Frank Wang" ], "title": "A 
unified feature disentangler for multi-domain image translation and manipulation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Augustus Odena" ], "title": "Semi-supervised learning with generative adversarial networks", "venue": "Workshop on Data-Efficient Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Devi Parikh", "Kristen Grauman" ], "title": "Relative attributes", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Yassir Saquil", "Kwang In Kim", "Peter M. Hall" ], "title": "Ranking cgans: Subjective control over semantic image attributes", "venue": "In British Machine Vision Conference 2018,", "year": 2018 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Po-Wei Wu", "Yu-Jing Lin", "Che-Han Chang", "Edward Y Chang", "Shih-Wei Liao" ], "title": "Relgan: Multidomain image-to-image translation via relative attributes", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Ke Zhou", "Gui-Rong Xue", "Hongyuan Zha", "Yong Yu" ], "title": "Learning to rank with ties", "venue": "In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2008 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Image-to-image (I2I) translation (Isola et al., 2017) aims to translate an input image into the desired ones with changes in some specific attributes. Current literature can be classified into two categories: binary translation (Zhu et al., 2017; Kim et al., 2017), e.g., translating an image from “not smiling” to “smiling”; fine-grained translation (Lample et al., 2017; He et al., 2019; Liu et al., 2018; Saquil et al., 2018), e.g., generating a series of images with smooth changes from “not smiling” to “smiling”. In this work, we focus on the high-quality fine-grained I2I translation, namely, generate a series of realistic versions of the input image with smooth changes in the specific attributes (See Fig. 1). Note that the desired high-quality images in our context are two folds: first, the generated images look as realistic as training images; second, the generated images are only modified in terms of the specific attributes.\nRelative attribute (RA), referring to the preference of two images over the strength of the interested attribute, is widely used in the fine-grained I2I translation task due to their rich semantic information. Previous work Ranking Conditional Generative Adversarial Network (RCGAN) (Saquil et al., 2018) adopts two separate criteria for a high-quality fine-grained translation. Specifically, a ranker is adopted to distill the discrepancy from RAs regarding the targeted attribute, which then guides the generator to translate the input image into the desired one. Meanwhile, a discriminator ensures the generated images as realistic as the training images. However, the generated fine-grained images guided by the ranker are out of the real data distribution, which conflicts with the goal of the discriminator. Therefore, the generated images cannot maintain smooth changes and suffer from low-quality issues. RelGAN (Wu et al., 2019) applied a unified discriminator for the high-quality fine-grained translation. The discriminator guides the generator to learn the distribution of triplets, which consist of pairs of images and their corresponding numerical labels (i.e., relative attributes). Further, RelGAN adopted the fine-grained RAs within the same framework to enable a smooth interpolation. However, the joint data distribution matching does not explicitly model the discrepancy from the RAs and fails to capture sufficient semantic information. The generated images fail to change smoothly over the interested attribute.\nIn this paper, we propose a new adversarial ranking framework consisting of a ranker and a generator for high-quality fine-grained translation. In particular, the ranker explicitly learns to model the\ndiscrepancy from the relative attributes, which can guide the generator to produce the desired image from the input image. Meanwhile, the rival preference consisting of the generated image and the input image is constructed to evoke the adversarial training between the ranker and the generator. Specifically, the ranker cannot differentiate the strength of the interested attribute between the generated image and the input image; while the generator aims to achieve the agreement from the ranker that the generated image holds the desired difference compared to the input. Competition between the ranker and the generator drives both two modules to improve themselves until the generations exhibit desired preferences while possessing high fidelity. 
We summarize our contributions as follows:\n• We propose Translation via RIval Preference (TRIP) consisting of a ranker and a generator for a high-quality fine-grained translation. The rival preference is constructed to evoke the adversarial training between the ranker and the generator, which enhances the ability of the ranker and encourages a better generator.\n• Our tailor-designed ranker enforces a continuous change between the generated image and the input image, which promotes a better fine-grained control over the interested attribute.\n• Empirical results show that our TRIP achieves the state-of-art results on the fine-grained imageto-image translation task. Meanwhile, the input image can be manipulated linearly along the strength of the attribute.\n• We further extend TRIP to the fine-grained I2I translation of multiple attributes. A case study demonstrates the efficacy of our TRIP in terms of disentangling multiple attributes and manipulating them simultaneously." }, { "heading": "2 RELATED WORKS", "text": "We mainly review the literature related to fine-grained I2I translation, especially smooth facial attribute transfer. We summarized them based on the type of generative models used.\nAE/VAE-based methods can provide a good latent representation of the input image. Some works (Lample et al., 2017; Liu et al., 2018; Li et al., 2020; Ding et al., 2020) proposed to disentangle the attribute-dependent latent variable from the image representation but resorted to different disentanglement strategies. Then the fine-grained translation can be derived by smoothly manipulating the attribute variable of the input image. However, the reconstruction loss, which is used to ensure the image quality, cannot guarantee a high fidelity of the hallucinated images.\nFlow-based Some works (Kondo et al., 2019) incorporates feature disentanglement mechanism into flow-based generative models. However, the designed multi-scale disentanglement requires large computation. And the reported results did not show satisfactory performance on smooth control.\nGAN-based GAN is a widely adopted framework for a high-quality image generation. Various methods applied GAN as a base for fine-grained I2I translation through relative attributes. The main differences lie in the strategies of incorporating the preference over the attributes into the image generation process. Saquil et al. (2018) adopted two critics consisting of a ranker, learning from the relative attributes, and a discriminator, ensuring the image quality. Then the combination of two critics is supposed to guide the generator to produce high-quality fine-grained images. However, the ranker would induce the generator to generate out-of-data-distribution images, which is opposite to the target of the discriminator, thereby resulting in poor-quality images. Wu et al. (2019) applied a unified discriminator, which learns the joint data distribution of the triplet constructed with a pair of images and a discrete numerical label (i.e., relative attribute). However, such a joint distribution modeling approach only models the discrete discrepancy of the RAs, which fails to generalize to the continuous labels very well. Rather than using RAs, He et al. (2019) directly modeled the attribute\nwith binary classification, which cannot capture detailed attribute information, and hence fail to make a smooth control over the attributes. Deng et al. (2020) embeded 3D priors into adversarial learning. 
However, it relies on available priors for attributes, which limits the practicality. Alharbi and Wonka (2020) proposed an unsupervised disentanglement method. It injects the structure noises to GAN for controlling specific parts of the generated images, which makes global or local features changed in a disentangled way. However, it is unclear how global or local features are related to facial attributes. Thus, it is difficult to change specific attributes.\nOur method is based on GAN. To ensure good control over the target attribute, the critic in GAN should transfer the signal about the subtle difference over the target attribute to the generator. Previous methods model it as two sequential processes. Namely, they capture the subtle difference over attribute using a classification model or a ranking model, and count on the learned attribute model to generalize learned attribute preference to the unseen generated images through interpolation. However, the learned attribute model never meets our expectation, since they haven’t seen the generated images at all during its training. As for our TRIP, we consider introducing the generated image into the training process of the attribute model, i.e., the ranker. Since the supervision over the generated images is not accessible, we formulate the ranker into an adversarial ranking process using the constructed rival preference, following the adversarial training of vanilla GAN. Consequently, our ranker (the attribute model) can critic the generated image during its whole training process, and it no doubt can generalize to generated images to ensure sufficient fine-grained control over the target attribute." }, { "heading": "3 TRIP FOR FINE-GRAINED IMAGE-TO-IMAGE TRANSLATION", "text": "In this section, we propose a new model, named TRanslation via Rival Preferences (TRIP) for high-quality fine-grained image-to-image (I2I) translation, which learns a mapping that translates an input image to a set of realistic output images by smoothly controlling the specific attributes.\nThe whole structure of TRIP is shown in Fig. 2, which consists of a generator and a ranker. The generator takes as input an image along with a continuous latent variable that controls the change of the attribute, and outputs the desired image; while the ranker provides information in terms of image quality and the preference over the attribute, which guides the learning of the generator. We implement the generator with a standard encoder-decoder architecture following Wu et al. (2019). In the following, we focus on describing the detailed design of the ranker and the principle behind it." }, { "heading": "3.1 RANKER FOR RELATIVE ATTRIBUTES", "text": "Relative attributes (RAs) are assumed to be most representative and most valid to describe the information related to the relative emphasis of the attribute, owing to its simplicity and easy construction (Parikh and Grauman, 2011; Saquil et al., 2018). For a pair of images (x,y), RAs refer to their preference over the specific attribute: y x when y shows a greater strength than x on the target attribute and vice versa.\nPairwise learning to rank is a widely-adopted technique to model the relative attributes (Parikh and Grauman, 2011). 
Given a pair of images (x,y) and its relative attribute, the pairwise learning to rank\ntechnique is formulated as a binary classification (Cao et al., 2006), i.e.,\nR(x,y) = { 1 y x; −1 y ≺ x, (1)\nwhere R(x,y) is the ranker’s prediction for the pair of images (x,y).\nx\ny\nR +1/-1\nFigure 3: The ranker model.\nFurther, the attribute discrepancy between RAs, distilled by the ranker, can then be used to guide the generator to translate the input image into the desired one.\nHowever, the ranker is trained on the real image pairs, which only focuses on the modeling of preference over the attribute and ignores image quality. To achieve the agreement with the ranker, the generator possibly produces unrealistic images, which conflicts with the goal of the discriminator." }, { "heading": "3.2 RIVAL PREFERENCES ENHANCING THE RANKER", "text": "According to the above analysis, we consider incorporating the generated image pairs into the modeling of RAs, along with the real image pairs to reconcile the goal of the ranker and the discriminator. Meanwhile, the resultant ranker will not only generalize well to the generated pairs but also avoid providing untrustworthy feedback by discriminating the unrealistic images.\nMotivated by the adversarial training of GAN, we introduce an adversarial ranking process between a ranker and a generator to incorporate the generated pairs into the training of ranker. To be specific,\n• Ranker. Inspired by semi-supervised GAN (Odena, 2016), we assign a pseudo label to the generated pairs. In order to avoid a biased influence on the ranking decision over real image pairs, i.e., positive (+1) or negative (-1), the pseudo label is designed as zero. Note that the generated pair consists of a synthetic image and its input in order to connect the ranker prediction to the controlling latent variable.\nR(x,∆) = { +1 ∆ = y ∧ y x; −1 ∆ = y ∧ y ≺ x; 0 ∆ = ŷ.\n(2)\nwhere ŷ denotes the output of the generator given the input image x and v, i.e., ŷ = G(x, v). ∆ is a placeholder that can either be a real image y or be a generated image ŷ.\n• Generator. The goal of the generator is to achieve the consistency between the ranking prediction R(x, ŷ) and the corresponding latent variable v. When v > 0, the ranker is supposed to believe that the generated image ŷ has a larger strength of the specific attribute than the input x, i.e., R(x, ŷ) = +1; and vice versa.\nR(x, ŷ) = { +1 v > 0; −1 v < 0. (3)\nWe denominate the opposite goals between the ranker and the generator w.r.t. the generated pairs as rival preferences1. An intuitive example of the rival preference is given in Fig. 4 for better understanding.\nThe ranker is promoted in terms of the following aspects: (1) The function of the ranker on the real image pairs is not changed. The generated pairs are uniformly sampled regarding their latent variables. By assigning label zero, the ranking information implied within the pairs is neutral-\nized to maintain the ranking performance on the real image pairs. (2) The ranker avoids providing biased ranking prediction for unrealistic image pairs. As we constrain the generated pairs at the decision boundary, i.e, R(x, ŷ) = 0, the ranker is invariant to the features specified by the generated\n1“rival” means adversarial. We use it to distinguish it from adversarial training in the community.\npairs (Chapelle et al., 2008), suppressing the influence of the unrealistic features on the ranking decision. 
(3) The ranker can capture the exclusive difference over the specific attribute through the adversarial process. Since the ranker rejects to give effective feedback for unrealistic image pairs, only the realistic image pairs can attract the attention of the ranker. Therefore, the ranker only passes the effective information related to the target attribute to the generator.\nThen, we introduce a parallel head following the feature layer to ensure the image quality together with a rank head, shown in Fig. 2. According to the above analysis, the ranker would not evoke conflict with the goal of the image quality. Therefore, we successfully reconcile the two goals of image quality and the extraction of the attribute difference. With a powerful ranker, the generator would “win” the adversarial game by producing the realistic pairs consistent with the latent variable. Remark 1 (Assigning zero to similar real image pairs). It is natural to assign zero to pairs {(x,y)|y = x}, where = denotes that x and y have same strength in the interested attribute. They can improve the ranking prediction (Zhou et al., 2008)." }, { "heading": "3.3 LINEARIZING THE RANKING OUTPUT", "text": "Equation 3 models the relative attributes of the generated pairs as a binary classification, which fails to enable a fine-grained translation since the subtle changes implied by the latent variable are not distinguished by the ranker. For example, given v1 > v2 > 0, the ranker give same feedbacks for (x, ŷ1) and (x, ŷ2) are both +1, which loses the discrimination between the two pairs. To achieve the fine-grained translation, we linearize the ranker’s output for the generated pairs so as to align the ranker prediction with the latent variable. We thus reformulate the binary classification as the regression:\nR(x, ŷ) = v. (4)\nNote that the output of the ranker can reflect the difference in a pair of images. Given two latent variables 1 > v2 > v1 > 0, the ranking predictions for the pair generated from v2 should be larger than that from v1, i.e., 1 > R(x, ŷ2) > R(x, ŷ1) > 0. The ranker’s outputs for the generated pairs would be linearly correlated to the corresponding latent variable. Therefore, the generated output images can change smoothly over the input image according to the latent variable." }, { "heading": "3.4 TRANSLATION VIA RIVAL PREFERENCES (TRIP)", "text": "In the following, we introduce the loss functions for the two parallel heads in the ranker. The overall network structure can be seen in Fig. 2.\nLoss of rank head R: we adopt the least square loss for the ranking predictions. The loss function for the ranker and the generator is defined as:\nLRrank = Ep(x,y,r) [ (R (x,y)− r)2 ] + λEp(x)p(v) [ (R (x, G(x, v))− 0)2 ] ; (5a)\nLGrank = Ep(x)p(v) [ (R (x, G(x, v))− v)2 ] , (5b)\nwhere ŷ = G(x, v). r = {\n1 y x −1 y ≺ x denotes the relative attribute. p(x,y, r) are the joint\ndistribution of real image preferences. p(x) is the distribution of the training images. p(v) is a uniform distribution [−1, 1]. λ is the weight factor that considers the rival preferences. By optimizing LDrank (equation 5a), the ranker is trained to predict correct labels for real image pairs and assign zero for generated pairs, i.e., equation 2. 
By optimizing LGrank (equation 5b), the generator is trained to output the desire image ŷ, where the difference between ŷ and x is consistent with the latent variable v, i.e., equation 4.\nLoss of GAN head D: to be consistent with the above rank head and also ensure a stable training, a regular least square GAN’s loss is adopted:\nLDgan = Ep(x) [ (D (x)− 1)2 ] + Ep(x)p(v) [ (D (G(x, v))− 0)2 ] ; (6a)\nLGgan = Ep(x)p(v) [ (D (G(x, v))− 1)2 ] , (6b)\nwhere 1 denotes the real image label while 0 denotes the fake image label.\nJointly training the rank head and the gan head, the gradients backpropagate through the shared feature layer to the generator. Thus our TRIP can conduct the high-quality fine-grained I2I translation." }, { "heading": "3.5 EXTENDED TO THE MULTIPLE ATTRIBUTES", "text": "To generalize our TRIP to multiple (K) attributes, we use vectors v and r with K dimension to denote the latent variable and the preference label, respectively. Each dimension controls the change of one of the interested attributes. In particular, the ranker consists of one GAN head and K parallel rank head. The overall loss function is summarized as follows:\nLRrank = Ep(x,y,r) ∑ k [ (Rk (x,y)− rk)2 ] + λEp(x)p(v) ∑ k [ (Rk (x, G(x, v))− 0)2 ] ; (7a)\nLGrank = Ep(x)p(v) ∑ k [ (Rk (x, G(x, v))− vk)2 ] , (7b)\nwhere Rk is the output of the k-th rank head. vk and rk are the k-th dimension of v and r, respectively." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we compare our TRIP with various baselines on the task of fine-grained image-toimage translation. We verify that our ranker can distinguish the subtle difference in a pair of images. Thus we propose to apply our ranker for evaluating the fine-graininess of image pairs generated by various methods. We finally extend TRIP to the translation of multiple attributes.\nDatasets. We conduct experiments on the high quality version of a subset from Celeb Faces Attributes Dataset (CelebA-HQ) (Karras et al., 2018) and Labeled Faces in the Wild with attributes (LFWA) (Liu et al., 2015). CelebA-HQ consists of 30K face images of celebrities, annotated with 40 binary attributes such as hair colors, gender and age. LFWA has 13, 143 images with 73 annotated binary attributes. We resize the images of two datasets to 256 × 256. The relative attributes are obtained for any two images x and y only based on the binary label of attributes. For instance, for “smiling” attribute, we construct the comparison x > y when the label of x is “smiling” while the label of y is “not smiling”, and vice versa. Therefore, we make a fair comparison with other baselines in terms of same supervision information.\nImplementation Details. As the translation is conducted on the unpaired setting, the cycle consistency loss Lcycle (Zhu et al., 2017) are usually introduced to keep the identity of faces when translation. An orthogonal loss Lo and the gradient penalty loss Lgp are added to stabilize the training following (Wu et al., 2019). The weighting factor for Lgan, Lcycle, Lo and Lgp are λg, λc, λo and λgp, respectively. Except λg = 0.5 for CelebA-HQ and λg = 5 for LFWA, we set the same parameter for all datasets. Specifically, we set λ = 0.5, λc = 2.5, λgp = 150, λo = 10\n−6. We use the Adam optimizer [23] with β1 = 0.5 and β2 = 0.999. The learning rate is set to 1e-5 for the ranker and 5e-5 for the generator. The batch size is set to 4. See appendix for details about the network architecture and the experiment setting.\nBaselines. 
We compare TRIP with FN (Lample et al., 2017), RelGAN (Wu et al., 2019) and RCGAN (Saquil et al., 2018). We use the released codes of FN, RelGAN and RCGAN 2. We did not compare with AttGAN since RelGAN outperforms AttGAN, which is shown in (Wu et al., 2019).\nEvaluation Metrics. Follow Wu et al. (2019), we use three metrics to quantitatively evaluate the performance of fine-grained translation. Standard deviation of structural similarity (SSIM) measures the fine-grained translation. Frechet Inception Distance (FID) measures the visual quality. Accuracy of Attribute Swapping (AAS) evaluates the accuracy of the binary image translation. The swapping for the attribute is to translate an image, e.g., from “smiling” into “not smiling”. The calculation can be found in App. D.\n2https://github.com/facebookresearch/FaderNetworks, https://github.com/willylulu/RelGAN, https://github.com/saquil/RankCGAN\nRCGAN\nTRIP\nInput v = −1 v = −0.8 v = −0.6 v = −0.2 v = 0 v = 0.2 v = 0.4 v = 0.6 v = 0.8 v = 1v = −0.4\nRelGAN\nFN" }, { "heading": "4.1 FINE-GRAINED IMAGE-TO-IMAGE TRANSLATION", "text": "We conduct fine-grained I2I translation on a single attribute. On CelebA-HQ dataset, we make the I2I translation in terms of “smile”, “gender”, “mouth open” and “high cheekbones” attributes, respectively. On LFWA dataset, we make the I2I translation in terms of “smile” and “Frown” attributes, respectively. We show that our TRIP achieves the best performance on the fine-grained I2I translation task comparing with various strong baselines. Best visual results As shown in Fig. 5, (1) all GANs can translate the input image into “more smiling” when v > 0 or “less smiling” when v < 0. The degree of changes is consistent with the numerical value of v. (2) Our GAN’s generation shows the best visual quality, generating realistic output images that are different from the input images only in the specific attribute. In contrast, FN suffers from image distortion issues. RelGAN’s generation not only changed the specific attribute “smile”, but also other irrelevant attributes, “hair color”. RCGAN exhibits extremely poor generation results.\nBest fine-grained score We present the quantitative evaluation of the fine-grained translation in Table 1. Our TRIP achieves the lowest SSIM scores, consistent with the visual results. Note that a trivial case to obtain a low SSIM is when the translation is failed. Namely, the generator would output the same value no matter what the latent variable is. Therefore, we further apply AAS to evaluate the I2I translation in a binary manner. Most GANs achieve over 95% accuracy except for FN (See the App. Fig. 12). Under this condition, it guarantees that a low SSIM indeed indicates the output images change smoothly with the latent variable.\nBest image quality score Table 1 presents the quantitative evaluation of the image quality. (1) Our TRIP achieves the best image quality with the lowest FID scores. (2) FN achieves the best FID on LFWA dataset. Because the FN achieves a relatively low accuracy of the translation, < 75% in Fig. 12, many generated images would be the same as the input image. It means that the statistics of the translated images are similar to that of the input images, leading to a low FID. (3) RCGAN has the worst FID scores, consistent with the visual results in Fig. 1." }, { "heading": "4.2 PHYSICAL MEANING OF RANKER OUTPUT", "text": "From Fig. 
5 and Table 1, it shows that when conditioning on different latent variables, our TRIP can translate an input image into a series of output images that exhibit the corresponding changes over\nthe attribute. We then evaluate the function of our ranker using these fine-grained generated pairs. It verifies that our ranker’s output well-aligns to the relative change in the pair of images.\nWe further evaluate fine-grained I2I translations w.r.t. the “smile” attribute on the test dataset of CelebA-HQ (See Fig. 6). The trained generator is applied to generate a set of G(x, v) by taking as inputs an image from the test dataset and v = −1.0,−0.5, 0.0, 0.5, 1.0, respectively3. Then we collect the output of the ranker for each generated pair and plot the density in terms of different types of pairs, i.e., with different v.\nAs shown in Fig. 6, (1) for a large v, the ranker would output a large prediction. It demonstrates that the ranker indeed generalizes to synthetic imaged pairs and can discriminate the subtle change among each image pair. (2) The ranker can capture the whole ordering instead of the exact value w.r.t. the latent variable. Because the ranker that assigns 0 to the generated pairs inhibits the generator’s loss optimizing to zero, although our generator’s objective is to ensure the ranker output values are consistent with the latent variable. However, the adversarial training would help the ranker to achieve an equilibrium with the generator when convergence, so that the ranker can maintain the whole ordering regarding the latent variable." }, { "heading": "4.3 LINEAR TENDENCY ON THE LATENT VARIABLE", "text": "As our ranker can reveal the relative changes in pairs of images, we use it to evaluate the subtle differences of the fine-grained synthetic image pairs generated by different methods.\nWe generate the fine-grained pairs on the test dataset of CelebA-HQ w.r.t. the “smile” attribute. Each trained model produces a series of synthetic images by taking as input a real image and different latent variables. The range of the latent variable is from -1 to 1 with step 0.13. Then the ranker, pre-trained by our TRIP, is applied to evaluate the generated pairs and group them in terms of different conditioned latent variables for different models, respectively. In terms of each group, we calculate the mean and the standard deviation (std) for the outputs of the ranker (Fig. 7).\nFig. 7 shows that (1) the ranking output of our TRIP exhibits a linear trend with the lowest variance w.r.t. the latent variable. This demonstrates that TRIP can smoothly translate the input image to the desired image over the specific attribute along the latent variable. (2) The ranking output of RCGAN behaves like a tanh curve with a sudden change when the latent variable is around zero. It means that RCGAN cannot smoothly control the attribute strength for the input image. In addition, RCGAN has the largest variance on the ranking output due to the low quality of the generated images, which introduces noises to the ranker’s prediction on the generated pairs. (3) RelGAN manifests a three-step like curve, which indicates a failure of fine-grained generation. This is mainly because of its specific design of the interpolation loss. (4) FN presents a linear tendency like our TRIP, which denotes that it can make a fine-grained control over the attribute. 
However, the mean of the ranking output for the generated pairs is relatively low in FN, since it fails to translate some input images into the desired\n3When conditioning on negative values of the latent variable, we use the test samples with the “smiling” attribute. When conditioning on positive values, we use the test samples with the “not smiling” attribute.\noutput images. This is verified by its low translation accuracy (See the appendix Fig. 12), lower than 85%. In addition, FN also exhibits a large variance of the ranking output due to the poor image quality.\n4.4 EXTENDED TO MULTIPLE ATTRIBUTES\nWe conduct fine-grained I2I translation with two attributes “smile” and “male” on CelebA-HQ to show that our model can generalize well to the case with multiple attributes. We use the latent variable with two dimensions to control the change of “smile” and “male” attributes, respectively.\nWe show the generated outputs conditioning on different v in Fig. 8. (1) Our GAN can disentangle multiple attributes. When conditioning on v = [−1, 0]/[1, 0], the output images O-1,0/O1,0 appear “less smiling”/“more smiling” with no change in the “masculine” attribute.\nWhen conditioning on v = [0,−1]/[0, 1], the output images O0, -1/O0,1 appear “less masculine”/“more masculine” with no change in the “smiling” attribute. In addition, a fine-grained control over the strength of a single attribute is still practical. (2) Our TRIP can manipulates the subtle changes of multiple attributes simultaneously. For example, when conditioning v = [1,−1], the output image O1, -1 appear “less smiling” and “more masculine”. Our TRIP can make a fine-grained translation on “smille” and “masculine”." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a novel GAN for fine-grained image-to-image translation by modeling RAs, where the generated data is to model fake data region for the ranking model. We empirically show the efficacy of our GAN for the fine-grained translation on CelebA-HQ and LFWA dataset. Our proposed GAN can be deemed as a new form of semi-supervised GAN. The supervised pairwise ranking and the unsupervised generation target is incorporated into a single model function. So one of the promise of this work can be extended to semi-supervised GAN area." }, { "heading": "A ABLATION STUDY", "text": "In Fig. 9 and Table 2, we show an ablation study of our model. (1) Without Lrank, the generated images exhibit no change over the input image. The generator fails to learn the mapping of the translation, which is reflected by a extremely low translation accuracy. (2) Without Lgan, the image quality degrades, achieving a high FID score. (3) Setting λ = 0, i.e., without considering the adversarial ranking, the performance of facial image manipulation collapses, obtaining a low translation accuracy. (4) When optimizing with equation 3, i.e., not linearing the ranking output for the generated pairs, the fine-grained control over the attributes fails, getting a high SSIM score. (5) With our TRIP, the generated images present desired changes consistent with the latent variable and possess good quality." }, { "heading": "B CONVERGENCE OF TRIP", "text": "We plot the training curve of the ranker and the generator, respectively, as shown in Fig. 10. 
It demonstrates that the ranker and the generator are trained against each other until convergence.\nWe plot the distribution of the ranker’s prediction for real image pairs and generated image pairs with different relative attributes (RAs) (+1/0/− 1) using the ranker in Fig. 11. (1) At the beginning of the training (Fig. 11a), the ranker gives similar predictions for real image pairs with different RAs. The same observations can also be found on the generated image pairs. (2) After 100 iterations (Fig. 11b), the ranker learns to give the desired prediction for different kinds of pair, i.e., > 0 (averaged) for pairs with RA (+1), 0 for pairs with RA (0) and −1 for pairs with RA (−1). (3) After 9, 900 iterations (Fig. 11c), TRIP converges. In terms of the real image pairs, the ranker output +1 for the pairs with RA (+1), 0 for the pairs with RA (0) and −1 for the pairs with RA (−1) in the sense of average.\nThis verifies that our ranker can give precise ranking predictions for real image pairs. In terms of the generated pairs, the ranker outputs +0.5 for the pairs with RA (+1), 0 for the pairs with RA (0) and −0.5 for the pairs with RA (−1) in the sense of average. This is a convergence state due to rival preferences. We take pairs with RA (+1) as an example. The generated pairs with RA (+1) are expected to be assigned 0 when optimizing the ranker and to be assigned +1 when optimizing the generator. Therefore, the convergence state should be around 0.5. And so forth. This can explain why the ranker would output 0 for the pairs with RA (0) and −0.5 for pairs with RA (−1) in the sense of average." }, { "heading": "C EXPERIMENTAL SETTING", "text": "C.1 NETWORK STRUCTURE\nOur generator will take one image and a random sampled relative attribute as input and output a translated image. Our generator network is same as RelGAN (Wu et al., 2019), which is composed of three convolutional layers for down-sampling, six residual blocks, and three transposed convolutional layers for up-sampling (shown in Table 3). Our proposed ranker will take pair of images (x,y) as inputs and output the classification score. It is comprised of two functional components rank layer and GAN layer following a feature layer (shown in Table 4).The rank layer and the GAN layer is for calculating LDrank and L D gan, respectively. The feature layer F is composed of six convolutional layers. The rank layer is composed of a subtract layer, one convolutional layer, one flatten layer and one dense layer. The subtract layer operates on F (x) and F (y), i.e., F (y)− F (x). The GAN layer is composed of one flatten layer and one dense layer.\nC.2 TRAINING\nWe split the dataset into training/test with a ratio 90/10. We pretrain our GAN only with Lgan to enable a good reconstruction for the generator. By doing so, we ease the training by sequencing the learning of our GAN. That is, we first make a generation with good quality. Then when our GAN begins to train, the ranker can mainly focus on the relationship between the generated pairs and its corresponding conditional v, rather than handling the translation quality and the generation quality together. All the experiment results are obtained by a single run." }, { "heading": "D EVALUATION", "text": "SSIM. We first apply the generator to produce a set of fine-grained output images {x1, . . . ,x11} by conditioning an input image and a set of latent variable values from −1 to 1 with a step 0.2. 
We then compute the standard deviation of the structural similarity (SSIM) (Wang et al., 2004) between consecutive images x_{i−1} and x_i as follows:\nσ({SSIM(x_{i−1}, x_i) | i = 1, · · · , 11}). (8)\nWe compute this score for each image from the test dataset and average the scores to obtain the final value.\nAAS. The accuracy is evaluated by a facial attribute classifier that uses the ResNet-18 architecture (He et al., 2016). To obtain AAS, we first translate the test images with the trained GANs and then apply the classifier to evaluate the classification accuracy of the translated images with respect to their swapped attribute. Higher accuracy means that more images are translated as desired.\nFID. It is evaluated with 30K translated images on the CelebA-HQ dataset and 13,143 translated images on the LFWA dataset.\nE MORE EXPERIMENT RESULTS" }, { "heading": "F FINE-GRAINED I2I TRANSLATION ON NON-FACIAL DATASET", "text": "To further evaluate the effectiveness of our method, we conduct fine-grained I2I translation on shoes → edges from the UT Zappos50K dataset (Isola et al., 2017). Fig. 16 shows that our TRIP can make a fine-grained translation from shoe images to edge images.\n(Figure panels: (a) FN, (b) RelGAN, (c) RCGAN, (d) TRIP.)" } ]
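A small sketch of how the SSIM-based fine-grained score above could be computed is given below, using scikit-image's structural_similarity on grayscale images. The image list, value range, and averaging over the test set are assumptions for illustration, not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def fine_grained_ssim_std(images):
    """images: list of 2-D grayscale arrays in [0, 1], e.g. the 11 outputs
    generated from one input image with latent values -1.0, -0.8, ..., 1.0.
    Returns the standard deviation of SSIM between consecutive outputs (Eq. 8)."""
    ssims = [
        structural_similarity(images[i - 1], images[i], data_range=1.0)
        for i in range(1, len(images))
    ]
    return float(np.std(ssims))

# The dataset-level score would then average this quantity over all test images.
```

A low standard deviation (with a high translation accuracy) indicates that consecutive outputs change by a roughly constant amount, i.e., a smooth fine-grained translation.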
2020
TRIP: REFINING IMAGE-TO-IMAGE TRANSLATION
SP:878a518cb77731b8b376d5fd82542670e195f0d6
[ "This paper aims to develop a transformer-based pre-trained model for multivariate time series representation learning. Specifically, the transformer’s encoder is only used and a time-series imputation task is constructed as their unsupervised learning objective. This is a bit similar to the BERT model in NLP. But authors added a mask for each variable of the time series. After pretraining with this imputation loss, the transformer can be used for downstream tasks, such as regression and classification. As the authors mentioned on page 6, this is achieved by further fine-tuning all weights of the pre-trained transformer. " ]
In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series. Pre-trained models can be potentially used for downstream tasks such as regression and classification, forecasting and missing value imputation. We evaluate our models on several benchmark datasets for multivariate time series regression and classification and show that they exceed current state-of-the-art performance, even when the number of training samples is very limited, while at the same time offering computational efficiency. We show that unsupervised pre-training of our transformer models offers a substantial performance benefit over fully supervised learning, even without leveraging additional unlabeled data, i.e., by reusing the same data samples through the unsupervised objective.
[]
[ { "authors": [ "A. Bagnall", "J. Lines", "A. Bostrom", "J. Large", "E. Keogh" ], "title": "The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances", "venue": "Data Mining and Knowledge Discovery,", "year": 2017 }, { "authors": [ "Anthony Bagnall", "Hoang Anh Dau", "Jason Lines", "Michael Flynn", "James Large", "Aaron Bostrom", "Paul Southam", "Eamonn Keogh" ], "title": "The UEA multivariate time series classification archive, 2018", "venue": null, "year": 2018 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The Long-Document Transformer", "venue": "[cs],", "year": 2020 }, { "authors": [ "Filippo Maria Bianchi", "Lorenzo Livi", "Karl Øyvind Mikalsen", "Michael Kampffmeyer", "Robert Jenssen" ], "title": "Learning representations of multivariate time series with missing data", "venue": "Pattern Recognition,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc V. Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context", "venue": null, "year": 1901 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "GRU-ODE-Bayes: Continuous Modeling of Sporadically-Observed Time Series", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Angus Dempster", "Franccois Petitjean", "Geoffrey I. Webb" ], "title": "ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels", "venue": "Data Mining and Knowledge Discovery,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "CoRR, abs/1810.04805,", "year": 2018 }, { "authors": [ "H. Fawaz", "B. Lucas", "G. Forestier", "Charlotte Pelletier", "D. Schmidt", "Jonathan Weber", "Geoffrey I. Webb", "L. Idoumghar", "Pierre-Alain Muller", "Franccois Petitjean" ], "title": "InceptionTime: Finding AlexNet for Time Series Classification", "venue": "ArXiv, 2019a. doi: 10.1007/s10618-020-00710-y", "year": 2019 }, { "authors": [ "Hassan Fawaz", "Germain Forestier", "Jonathan Weber", "Lhassane Idoumghar", "Pierre-Alain Muller" ], "title": "Deep learning for time series classification: a review", "venue": "Data Mining and Knowledge Discovery,", "year": 2019 }, { "authors": [ "Vincent Fortuin", "M. Hüser", "Francesco Locatello", "Heiko Strathmann", "G. Rätsch" ], "title": "SOM-VAE: Interpretable Discrete Representation", "venue": "Learning on Time Series. ICLR,", "year": 2019 }, { "authors": [ "Jean-Yves Franceschi", "Aymeric Dieuleveut", "Martin Jaggi" ], "title": "Unsupervised Scalable Representation Learning for Multivariate Time Series", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Sepp Hochreiter" ], "title": "The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions", "venue": "Int. J. Uncertain. Fuzziness Knowl.-Based Syst.,", "year": 1998 }, { "authors": [ "Cheng-Zhi Anna Huang", "Ashish Vaswani", "Jakob Uszkoreit", "Ian Simon", "Curtis Hawthorne", "Noam Shazeer", "Andrew M Dai", "Matthew D Hoffman", "Monica Dinculescu", "Douglas Eck" ], "title": "Music transformer: Generating music with long-term structure", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "A. 
Jansen", "M. Plakal", "Ratheet Pandya", "D. Ellis", "Shawn Hershey", "Jiayang Liu", "R.C. Moore", "R.A. Saurous" ], "title": "Unsupervised Learning of Semantic Audio Representations", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "A. Kopf", "Vincent Fortuin", "Vignesh Ram Somnath", "M. Claassen" ], "title": "Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations", "venue": null, "year": 2019 }, { "authors": [ "Shiyang Li", "Xiaoyong Jin", "Yao Xuan", "Xiyou Zhou", "Wenhu Chen", "Yu-Xiang Wang", "Xifeng Yan" ], "title": "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Bryan Lim", "Sercan O. Arik", "Nicolas Loeff", "Tomas Pfister" ], "title": "Temporal fusion transformers for interpretable multi-horizon time series forecasting, 2020", "venue": null, "year": 2020 }, { "authors": [ "J. Lines", "Sarah Taylor", "Anthony J. Bagnall" ], "title": "Time Series Classification with HIVE-COTE", "venue": "ACM Trans. Knowl. Discov. Data,", "year": 2018 }, { "authors": [ "Benjamin Lucas", "Ahmed Shifaz", "Charlotte Pelletier", "Lachlan O’Neill", "Nayyar Zaidi", "Bart Goethals", "Francois Petitjean", "Geoffrey I. Webb" ], "title": "Proximity Forest: An effective and scalable distancebased classifier for time series", "venue": "Data Mining and Knowledge Discovery,", "year": 2019 }, { "authors": [ "Xinrui Lyu", "Matthias Hueser", "Stephanie L. Hyland", "George Zerveas", "Gunnar Raetsch" ], "title": "Improving Clinical Predictions through Unsupervised Time Series Representation Learning", "venue": "In Proceedings of the NeurIPS 2018 Workshop on Machine Learning for Health,", "year": 2018 }, { "authors": [ "J. Ma", "Zheng Shou", "Alireza Zareian", "Hassan Mansour", "A. Vetro", "S. Chang" ], "title": "Cdsa: Cross-dimensional self-attention for multivariate, geo-tagged time series imputation", "venue": null, "year": 1905 }, { "authors": [ "P. Malhotra", "T. Vishnu", "L. Vig", "Puneet Agarwal", "G. Shroff" ], "title": "TimeNet: Pre-trained deep recurrent neural network for time series classification", "venue": null, "year": 2017 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "W. Li", "Peter J. Liu" ], "title": "Exploring the Limits of Transfer Learning with a Unified Text-toText", "venue": "Transformer. ArXiv,", "year": 2019 }, { "authors": [ "Sheng Shen", "Zhewei Yao", "Amir Gholami", "Michael W. Mahoney", "Kurt Keutzer" ], "title": "PowerNorm: Rethinking Batch Normalization in Transformers", "venue": "[cs],", "year": 2020 }, { "authors": [ "Ahmed Shifaz", "Charlotte Pelletier", "F. Petitjean", "Geoffrey I. Webb" ], "title": "TS-CHIEF: a scalable and accurate forest algorithm for time series classification", "venue": "Data Mining and Knowledge Discovery,", "year": 2020 }, { "authors": [ "C. Tan", "C. Bergmeir", "François Petitjean", "Geoffrey I. Webb" ], "title": "Monash University, UEA", "venue": "UCR Time Series Regression Archive. 
ArXiv,", "year": 2020 }, { "authors": [ "Chang Wei Tan", "Christoph Bergmeir", "Francois Petitjean", "Geoffrey I Webb" ], "title": "Time series regression", "venue": "arXiv preprint arXiv:2006.12672,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is All you Need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Neo Wu", "Bradley Green", "Xue Ben", "Shawn O’Banion" ], "title": "Deep transformer models for time series forecasting: The influenza prevalence", "venue": null, "year": 2020 }, { "authors": [ "Chuxu Zhang", "Dongjin Song", "Yuncong Chen", "Xinyang Feng", "C. Lumezanu", "Wei Cheng", "Jingchao Ni", "B. Zong", "H. Chen", "Nitesh V. Chawla" ], "title": "A Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in Multivariate Time Series Data", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Dempster" ], "title": "We recorded the times required for training our fully supervised models until convergence on a GPU, as well as for the currently fastest and top performing (in terms of classification accuracy and regression error) baseline methods, ROCKET and XGBoost on a CPU. These have been shown to be orders of magnitude faster than methods such as TS-CHIEF, Proximity Forest", "venue": "Elastic Ensembles, DTW and HIVE-COTE,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Multivariate time series (MTS) are an important type of data that is ubiquitous in a wide variety of domains, including science, medicine, finance, engineering and industrial applications. Despite the recent abundance of MTS data in the much touted era of “Big Data”, the availability of labeled data in particular is far more limited: extensive data labeling is often prohibitively expensive or impractical, as it may require much time and effort, special infrastructure or domain expertise. For this reason, in all aforementioned domains there is great interest in methods which can offer high accuracy by using only a limited amount of labeled data or by leveraging the existing plethora of unlabeled data.\nThere is a large variety of modeling approaches for univariate and multivariate time series, with deep learning models recently challenging or replacing the state of the art in tasks such as forecasting, regression and classification (De Brouwer et al., 2019; Tan et al., 2020a; Fawaz et al., 2019b). However, unlike in domains such as Computer Vision or Natural Language Processing (NLP), the dominance of deep learning for time series is far from established: in fact, non-deep learning methods such as TS-CHIEF (Shifaz et al., 2020), HIVE-COTE (Lines et al., 2018), and ROCKET (Dempster et al., 2020) currently hold the record on time series regression and classification dataset benchmarks (Tan et al., 2020a; Bagnall et al., 2017), matching or even outperforming sophisticated deep architectures such as InceptionTime (Fawaz et al., 2019a) and ResNet (Fawaz et al., 2019b).\nIn this work, we investigate, for the first time, the use of a transformer encoder for unsupervised representation learning of multivariate time series, as well as for the tasks of time series regression and classification. Transformers are an important, recently developed class of deep learning models, which were first proposed for the task of natural language translation (Vaswani et al., 2017) but have since come to monopolize the state-of-the-art performance across virtually all NLP tasks (Raffel et al., 2019). A key factor for the widespread success of transformers in NLP is their aptitude for learning how to represent natural language through unsupervised pre-training (Brown et al., 2020; Raffel et al., 2019; Devlin et al., 2018). Besides NLP, transformers have also set the state of the art in several domains of sequence generation, such as polyphonic music composition (Huang et al., 2018).\nTransformer models are based on a multi-headed attention mechanism that offers several key advantages and renders them particularly suitable for time series data (see Appendix section A.4 for details). Inspired by the impressive results attained through unsupervised pre-training of transformer models in NLP, as our main contribution, in the present work we develop a generally applicable\nmethodology (framework) that can leverage unlabeled data by first training a transformer model to extract dense vector representations of multivariate time series through an input denoising (autoregressive) objective. The pre-trained model can be subsequently applied to several downstream tasks, such as regression, classification, imputation, and forecasting. 
Here, we apply our framework for the tasks of multivariate time series regression and classification on several public datasets and demonstrate that transformer models can convincingly outperform all current state-of-the-art modeling approaches, even when only having access to a very limited amount of training data samples (on the order of hundreds of samples), an unprecedented success for deep learning models. Importantly, despite common preconceptions about transformers from the domain of NLP, where top performing models have billions of parameters and require days to weeks of pre-training on many parallel GPUs or TPUs, we also demonstrate that our models, using at most hundreds of thousands of parameters, can be trained even on CPUs, while training them on GPUs allows them to be trained as fast as even the fastest and most accurate non-deep learning based approaches." }, { "heading": "2 RELATED WORK", "text": "Regression and classification of time series: Currently, non-deep learning methods such as TSCHIEF (Shifaz et al., 2020), HIVE-COTE (Lines et al., 2018), and ROCKET (Dempster et al., 2020) constitute the state of the art for time series regression and classification based on evaluations on public benchmarks (Tan et al., 2020a; Bagnall et al., 2017), followed by CNN-based deep architectures such as InceptionTime (Fawaz et al., 2019a) and ResNet (Fawaz et al., 2019b). ROCKET, which on average is the best ranking method, is a fast method that involves training a linear classifier on top of features extracted by a flat collection of numerous and various random convolutional kernels. HIVE-COTE and TS-CHIEF (itself inspired by Proximity Forest (Lucas et al., 2019)), are very sophisticated methods which incorporate expert insights on time series data and consist of large, heterogeneous ensembles of classifiers utilizing shapelet transformations, elastic similarity measures, spectral features, random interval and dictionary-based techniques; however, these methods are highly complex, involve significant computational cost, cannot benefit from GPU hardware and scale poorly to datasets with many samples and long time series; moreover, they have been developed for and only been evaluated on univariate time series.\nUnsupervised learning for multivariate time series: Recent work on unsupervised learning for multivariate time series has predominantly employed autoencoders, trained with an input reconstruction objective and implemented either as Multi-Layer Perceptrons (MLP) or RNN (most commonly, LSTM) networks. As interesting variations of the former, Kopf et al. (2019) and Fortuin et al. (2019) additionally incorporated Variational Autoencoding into this approach, but focused on clustering and the visualization of shifting sample topology with time. As an example of the latter, Malhotra et al. (2017) presented a multi-layered RNN sequence-to-sequence autoencoder, while Lyu et al. (2018) developed a multi-layered LSTM with an attention mechanism and evaluated both an input reconstruction (autoencoding) as well as a forecasting loss for unsupervised representation learning of Electronic Healthcare Record multivariate time series.\nAs a novel take on autoencoding, and with the goal of dealing with missing data, Bianchi et al. 
(2019) employ a stacked bidirectional RNN encoder and stacked RNN decoder to reconstruct the input, and at the same time use a user-provided kernel matrix as prior information to condition internal representations and encourage learning similarity-preserving representations of the input. They evaluate the method on the tasks of missing value imputation and classification of time series under increasing “missingness” of values.\nA distinct approach is followed by Zhang et al. (2019), who use a composite convolutional - LSTM network with attention and a loss which aims at reconstructing correlation matrices between the variables of the multivariate time series input. They use and evaluate their method only for the task of anomaly detection.\nFinally, Jansen et al. (2018) rely on a triplet loss and the idea of temporal proximity (the loss rewards similarity of representations between proximal segments and penalizes similarity between distal segments of the time series) for unsupervised representation learning of non-speech audio data. This idea is explored further by Franceschi et al. (2019), who combine the triplet loss with a deep causal dilated CNN, in order to make the method effective for very long time series.\nTransformer models for time series: Recently, a full encoder-decoder transformer architecture was employed for univariate time series forecasting: Li et al. (2019) showed superior performance compared to the classical statistical method ARIMA, the recent matrix factorization method TRMF, an RNN-based autoregressive model (DeepAR) and an RNN-based state space model (DeepState) on 4 public forecasting datasets, while Wu et al. (2020) used a transformer to forecast influenza prevalence and similarly showed performance benefits compared to ARIMA, an LSTM and a GRU Seq2Seq model with attention, and Lim et al. (2020) used a transformer for multi-horizon univariate forecasting, supporting interpretation of temporal dynamics. Finally, Ma et al. (2019) use an encoder-decoder architecture with a variant of self-attention for imputation of missing values in multivariate, geo-tagged time series and outperform classic as well as the state-of-the-art, RNN-based imputation methods on 3 public and 2 competition datasets for imputation.\nBy contrast, our work aspires to generalize the use of transformers from solutions to specific generative tasks (which require the full encoder-decoder architecture) to a framework which allows for unsupervised pre-training and with minor modifications can be readily used for a wide variety of downstream tasks; this is analogous to the way BERT (Devlin et al., 2018) converted a translation model into a generic framework based on unsupervised learning, an approach which has become a de facto standard and established the dominance of transformers in NLP." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 BASE MODEL", "text": "At the core of our method lies a transformer encoder, as described in the original transformer work by Vaswani et al. (2017); however, we do not use the decoder part of the architecture. A schematic diagram of the generic part of our model, common across all considered tasks, is shown in Figure 1. 
We refer the reader to the original work for a detailed description of the transformer model, and here present the proposed changes that make it compatible with multivariate time series data, instead of sequences of discrete word indices.\nIn particular, each training sample X ∈ Rw×m, which is a multivariate time series of length w and m different variables, constitutes a sequence of w feature vectors xt ∈ Rm: X ∈ Rw×m = [x1,x2, . . . ,xw]. The original feature vectors xt are first normalized (for each dimension, we subtract the mean and divide by the variance across the training set samples) and then linearly projected onto a d-dimensional vector space, where d is the dimension of the transformer model sequence element representations (typically called model dimension):\nut = Wpxt + bp (1)\nwhere Wp ∈ Rd×m, bp ∈ Rd are learnable parameters and ut ∈ Rd, t = 0, . . . , w are the model input vectors1. These will become the queries, keys and values of the self-attention layer, after adding the positional encodings and multiplying by the corresponding matrices.\nWe note that the above formulation also covers the univariate time series case, i.e., m = 1, although we only evaluate our approach on multivariate time series in the scope of this work. We additionally note that the input vectors ut need not necessarily be obtained from the (transformed) feature vectors at a time step t: because the computational complexity of the model scales as O(w2) and the number of parameters2 as O(w) with the input sequence length w, to obtain ut in case the granularity (temporal resolution) of the data is very fine, one may instead use a 1D-convolutional layer with 1 input and d output channels and kernels Ki of size (k,m), where k is the width in number of time steps and i the output channel:\nut i = u(t, i) = ∑ j ∑ h x(t + j, h)Ki(j, h), i = 1, . . . , d (2)\n1Although equation 1 shows the operation for a single time step for clarity, all input vectors are embedded concurrently by a single matrix-matrix multiplication\n2Specifically, the learnable positional encoding, batch normalization and output layers\nIn this way, one may control the temporal resolution by using a stride or dilation factor greater than 1. Moreover, although in the present work we only used equation 1, one may use equation 2 as an input to compute the keys and queries and equation 1 to compute the values of the self-attention layer. This is particularly useful in the case of univariate time series, where self-attention would otherwise match (consider relevant/compatible) all time steps which share similar values for the independent variable, as noted by Li et al. (2019).\nFinally, since the transformer is a feed-forward architecture that is insensitive to the ordering of input, in order to make it aware of the sequential nature of the time series, we add positional encodings Wpos ∈ Rw×d to the input vectors U ∈ Rw×d = [u1, . . . ,uw]: U ′ = U + Wpos. Instead of deterministic, sinusoidal encodings, which were originally proposed by Vaswani et al. (2017), we use fully learnable positional encodings, as we observed that they perform better for all datasets presented in this work. 
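To make the input embedding concrete, below is a minimal PyTorch-style sketch of equation 1 combined with the learnable positional encodings W_pos (a sketch only, not the paper's implementation; names such as `TimeSeriesInputEmbedding`, `feat_dim` and `max_len` are assumed):

```python
import torch
import torch.nn as nn

class TimeSeriesInputEmbedding(nn.Module):
    """Projects m-dimensional feature vectors onto the model dimension d (equation 1,
    applied to all time steps at once) and adds learnable positional encodings W_pos."""
    def __init__(self, feat_dim: int, d_model: int, max_len: int):
        super().__init__()
        self.project = nn.Linear(feat_dim, d_model)                 # W_p, b_p
        self.pos_enc = nn.Parameter(torch.empty(max_len, d_model))  # learnable W_pos
        nn.init.uniform_(self.pos_enc, -0.02, 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, w, m), already normalized per dimension
        u = self.project(x)                   # (batch, w, d)
        return u + self.pos_enc[: x.size(1)]  # add positional encodings for the first w positions
```

When the temporal resolution is very fine, the linear projection could be swapped for a 1D convolution with kernel width k and a stride or dilation factor greater than 1, in the spirit of equation 2.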
Based on the performance of our models, we also observe that the positional encodings generally appear not to significantly interfere with the numerical information of the time series, similar to the case of word embeddings; we hypothesize that this is because they are learned so as to occupy a different, approximately orthogonal, subspace to the one in which the projected time series samples reside. This approximate orthogonality condition is much easier to satisfy in high dimensional spaces.\nAn important consideration regarding time series data is that individual samples may display considerable variation in length. This issue is effectively dealt with in our framework: after setting a maximum sequence length w for the entire dataset, shorter samples are padded with arbitrary values, and we generate a padding mask which adds a large negative value to the attention scores for the padded positions, before computing the self-attention distribution with the softmax function. This forces the model to completely ignore padded positions, while allowing the parallel processing of samples in large minibatches.\nTransformers in NLP use layer normalization after computing self-attention and after the feedforward part of each encoder block, leading to significant performance gains over batch normalization, as originally proposed by Vaswani et al. (2017). However, here we instead use batch normalization, because it can mitigate the effect of outlier values in time series, an issue that does not arise in NLP word embeddings. Additionally, the inferior performance of batch normalization in NLP has been mainly attributed to extreme variation in sample length (i.e., sentences in most tasks) (Shen et al., 2020), while in the datasets we examine this variation is much smaller. In Table 11 of the Appendix we show that batch normalization can indeed offer a very significant performance benefit over layer normalization, while the extent can vary depending on dataset characteristics." }, { "heading": "3.2 REGRESSION AND CLASSIFICATION", "text": "The base model architecture presented in Section 3.1 and depicted in Figure 1 can be used for the purposes of regression and classification with the following modification: the final representation vectors zt ∈ Rd corresponding to all time steps are concatenated into a single vector z̄ ∈ Rd·w = [z1; . . . ; zw], which serves as the input to a linear output layer with parameters Wo ∈ Rn×(d·w), bo ∈ Rn, where n is the number of scalars to be estimated for the regression problem (typically n = 1), or the number of classes for the classification problem:\nŷ = Woz̄ + bo (3)\nIn the case of regression, the loss for a single data sample will simply be the squared error L = ‖ŷ − y‖2, where y ∈ Rn are the ground truth values. We clarify that regression in the context of this work means predicting a numeric value for a given sequence (time series sample). This numeric value is of a different nature than the numerical data appearing in the time series: for example, given a sequence of simultaneous temperature and humidity measurements of 9 rooms in a house, as well as weather and climate data such as temperature, pressure, humidity, wind speed, visibility and dewpoint, we wish to predict the total energy consumption in kWh of a house for that day. 
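As a rough sketch of the output module of equation 3 (naming and shapes are assumed, and padded positions are taken to have been masked out in the attention layers upstream):

```python
import torch
import torch.nn as nn

class OutputHead(nn.Module):
    """Concatenates the final representations z_1, ..., z_w into a single vector z_bar
    and applies one linear layer (equation 3); n_outputs is the number of regression
    targets or the number of classes."""
    def __init__(self, d_model: int, max_len: int, n_outputs: int):
        super().__init__()
        self.output = nn.Linear(d_model * max_len, n_outputs)  # W_o, b_o

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z_bar = z.reshape(z.size(0), -1)  # (batch, w, d) -> (batch, w * d)
        return self.output(z_bar)         # raw scores; softmax + cross-entropy for classification
```

For regression, the squared error against the ground-truth values is applied to these raw outputs directly.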
The parameter n corresponds to the number of scalars (or the dimensionality of a vector) to be estimated.\nIn the case of classification, the predictions ŷ will additionally be passed through a softmax function to obtain a distribution over classes, and its cross-entropy with the categorical ground truth labels will be the sample loss.\nFinally, when fine-tuning the pre-trained models, we allow training of all weights; instead, freezing all layers except for the output layer would be equivalent to using static, pre-extracted time-series representations of the time series. In Table 12 in the Appendix we show the trade-off in terms of speed and performance when using a fully trainable model versus static representations." }, { "heading": "3.3 UNSUPERVISED PRE-TRAINING", "text": "As a task for the unsupervised pre-training of our model we consider the autoregressive task of denoising the input: specifically, we set part of the input to 0 and ask the model to predict the masked values. The corresponding setup is depicted in the right part of Figure 1. A binary noise mask M ∈ Rw×m, is created independently for each training sample, and the input is masked by elementwise multiplication: X̃ = M X. On average, a proportion r of each mask column of length w (corresponding to a single variable in the multivariate time series) is set to 0 by alternating between segments of 0s and 1s. We choose the state transition probabilities such that each masked segment (sequence of 0s) has a length that follows a geometric distribution with mean lm and is succeeded by an unmasked segment (sequence of 1s) of mean length lu = 1−rr lm. We chose lm = 3 for all presented experiments. The reason why we wish to control the length of the masked sequence, instead of simply using a Bernoulli distribution with parameter r to set all mask elements independently at random, is that very short masked sequences (e.g., of 1 masked element) in the input can often be trivially predicted with good approximation by replicating the immediately preceding or succeeding values or by the average thereof. In order to obtain enough long masked sequences with relatively high likelihood, a very high masking proportion r would be required, which would render the overall task detrimentally challenging. Following the process above, at each time step on average r ·m variables will be masked. We empirically chose r = 0.15 for all presented experiments. This input masking process is different from the “cloze type” masking used by NLP models such as BERT,\nwhere a special token and thus word embedding vector replaces the original word embedding, i.e., the entire feature vector at affected time steps. We chose this masking pattern because it encourages the model to learn to attend both to preceding and succeeding segments in individual variables, as well as to existing contemporary values of the other variables in the time series, and thereby to learn to model inter-dependencies between variables. 
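A rough sketch of the per-variable geometric masking described above, together with the masked reconstruction loss it feeds into (spelled out in equations 4–5 below); the function names and the exact sampling code are assumptions:

```python
import torch

def geometric_noise_mask(w: int, m: int, r: float = 0.15, lm: float = 3.0) -> torch.Tensor:
    """Binary mask of shape (w, m): 0 = masked, 1 = kept. Each column alternates masked
    and unmasked segments with geometrically distributed lengths of means lm and
    lu = (1 - r) / r * lm, so that on average a proportion r of each column is masked."""
    p_leave_masked = 1.0 / lm            # probability of ending a masked segment
    p_leave_kept = r / (lm * (1.0 - r))  # probability of ending an unmasked segment
    mask = torch.ones(w, m)
    for col in range(m):
        masked = torch.rand(1).item() < r          # random initial state for this variable
        for t in range(w):
            if masked:
                mask[t, col] = 0.0
            p = p_leave_masked if masked else p_leave_kept
            if torch.rand(1).item() < p:           # possibly switch segment type
                masked = not masked
    return mask


def masked_mse(x_hat: torch.Tensor, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean squared error computed only on the masked positions (mask == 0); x_hat, x
    and mask are assumed to share the same shape."""
    hidden = mask == 0
    return ((x_hat - x)[hidden] ** 2).mean()

# usage for one sample: x_tilde = mask * x; loss = masked_mse(model(x_tilde), x, mask)
```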
In Table 10 in the Appendix we show that this masking scheme is more effective than other possibilities for denoising the input.\nUsing a linear layer with parameters Wo ∈ Rm×d, bo ∈ Rm on top of the final vector representations zt ∈ Rd, for each time step the model concurrently outputs its estimate x̂t of the full, uncorrupted input vectors xt; however, only the predictions on the masked values (with indices in the set M ≡ {(t, i) : mt,i = 0}, where mt,i are the elements of the mask M), are considered in the Mean Squared Error loss for each data sample:\nx̂t = Wozt + bo (4)\nLMSE = 1 |M | ∑∑ (t,i)∈M (x̂(t, i)− x(t, i))2 (5)\nThis objective differs from the one used by denoising autoencoders, where the loss considers reconstruction of the entire input, under (typically Gaussian) noise corruption. Also, we note that the approach described above differs from simple dropout on the input embeddings, both with respect to the statistical distributions of masked values, as well as the fact that here the masks also determine the loss function. In fact, we additionally use a dropout of 10% when training all of our supervised and unsupervised models." }, { "heading": "4 EXPERIMENTS & RESULTS", "text": "In the experiments reported below we use the predefined training - test set splits of the benchmark datasets and train all models long enough to ensure convergence. We do this to account for the fact that training the transformer models in a fully supervised way typically requires more epochs than fine-tuning the ones which have already been pre-trained using the unsupervised methodology of Section 3.3. Because the benchmark datasets are very heterogeneous in terms of number of samples, dimensionality and length of the time series, as well as the nature of the data itself, we observed that we can obtain better performance by a cursory tuning of hyperparameters (such as the number of encoder blocks, the representation dimension, number of attention heads or dimension of the feedforward part of the encoder blocks) separately for each dataset. To select hyperparameters, for each dataset we randomly split the training set in two parts, 80%-20%, and used the 20% as a validation set for hyperparameter tuning. After fixing the hyperparameters, the entire training set was used to train the model again, which was finally evaluated on the test set. A set of hyperparameters which has consistently good performance on all datasets is shown in Table 14 in the Appendix, alongside the hyperparameters that we have found to yield the best performance for each dataset (Tables 15, 16, 17, 18." }, { "heading": "4.1 REGRESSION", "text": "We select a diverse range of 6 datasets from the Monash University, UEA, UCR Time Series Regression Archive Tan et al. (2020a) in a way so as to ensure diversity with respect to the dimensionality and length of time series samples, as well as the number of samples (see Appendix Table 3 for dataset characteristics). Table 1 shows the Root Mean Squared Error achieved by of our models, named TST for “Time Series Transformer”, including a variant trained only through supervision, and one first pre-trained on the same training set in an unsupervised way. We compare them with the currently best performing models as reported in the archive. Our transformer models rank first on all but two of the examined datasets, for which they rank second. 
They thus achieve an average rank of 1.33, setting them clearly apart from all other models; the overall second best model, XGBoost, has an average rank of 3.5, ROCKET (which outperformed ours on one dataset) on average ranks in 5.67th place and Inception (which outperformed ours on the second dataset) also has an average rank of 5.67. On average, our models attain 30% lower RMSE than the mean RMSE among all models, and approx. 16% lower RMSE than the overall second best model (XGBoost), with absolute improvements varying among datasets from approx. 4% to 36%. We note that all other deep learning methods achieve performance close to the middle of the ranking or lower. In Table 1 we report the “average relative difference from mean” metric $r_j$ for each model $j$ over the $N$ datasets:

$$r_j = \frac{1}{N}\sum_{i=1}^{N} \frac{R(i,j) - \bar{R}_i}{\bar{R}_i}, \qquad \bar{R}_i = \frac{1}{M}\sum_{k=1}^{M} R(i,k),$$

where $R(i, j)$ is the RMSE of model $j$ on dataset $i$ and $M$ is the number of models.

Importantly, we also observe that the pre-trained transformer models outperform the fully supervised ones in 3 out of 6 datasets. This is interesting, because no additional samples are used for pre-training: the benefit appears to originate from reusing the same training samples for learning through an unsupervised objective. To further elucidate this observation, we investigate the following questions:

Q1: Given a partially labeled dataset of a certain size, how will additional labels affect performance? This pertains to one of the most important decisions that data owners face, namely, to what extent will further annotation help. To clearly demonstrate this effect, we choose the largest dataset we have considered from the regression archive (12.5k samples), in order to avoid the variance introduced by small set sizes. The left panel of Figure 2 (where each marker is an experiment) shows how performance on the entire test set varies with an increasing proportion of labeled training set data used for supervised learning. As expected, with an increasing proportion of available labels performance improves both for a fully supervised model, as well as for the same model that has been first pre-trained on the entire training set through the unsupervised objective and then fine-tuned. Interestingly, not only does the pre-trained model outperform the fully supervised one, but the benefit persists throughout the entire range of label availability, even when the models are allowed to use all labels; this is consistent with our previous observation on Table 1 regarding the advantage of reusing samples.

Q2: Given a labeled dataset, how will additional unlabeled samples affect performance? In other words, to what extent does unsupervised learning make it worth collecting more data, even if no additional annotations are available? This question differs from the above, as we now only scale the availability of data samples for unsupervised pre-training, while the number of labeled samples is fixed. The right panel of Figure 2 (where each marker is an experiment) shows that, for a given number of labels (shown as a percentage of the total available labels), the more data samples are used for unsupervised learning, the lower the error achieved (note that the horizontal axis value 0 corresponds to fully supervised training only, while all other values correspond to unsupervised pre-training followed by supervised fine-tuning). This trend is more linear in the case of supervised learning on 20% of the labels (approx. 2500). 
Likely due to a small sample (here, meaning set) effect, in the case of having only 10% of the labels (approx. 1250) for supervised learning, the error first decreases rapidly as we use more samples for unsupervised pre-training, and then momentarily increases, before it decreases again (for clarity, the same graphs are shown separately in Figure 3 in the Appendix). Consistent with our observations above, it is interesting to again note that, for a given number of labeled samples, even reusing a subset of the same samples for unsupervised pretraining improves performance: for the 1250 labels (blue diamonds of the right panel of Figure 2 or left panel of Figure 3 in the Appendix) this can be observed in the horizontal axis range [0, 0.1], and for the 2500 labels (blue diamonds of the right panel of Figure 2 or right panel of Figure 3 in the Appendix) in the horizontal axis range [0, 0.2]." }, { "heading": "4.2 CLASSIFICATION", "text": "We select a set of 11 multivariate datasets from the UEA Time Series Classification Archive (Bagnall et al., 2018) with diverse characteristics in terms of the number, dimensionality and length of time series samples, as well as the number of classes (see Appendix Table 4). As this archive is new, there have not been many reported model evaluations; we follow Franceschi et al. (2019) and use as a baseline the best performing method studied by the creators of the archive, DTWD (dimensionDependent DTW), together with the method proposed by Franceschi et al. (2019) themselves (a dilation-CNN leveraging unsupervised and supervised learning). Additionally, we use the publicly available implementations Tan et al. (2020b) of ROCKET, which is currently the top performing\nmodel for univariate time series and one of the best in our regression evaluation, and XGBoost, which is one of the most commonly used models for univariate and multivariate time series, and also the best baseline model in our regression evaluation (Section 4.1). Finally, we did not find any reported evaluations of RNN-based models on any of the UCR/UEA archives, possibly because of a common perception for long training and inference times, as well as difficulty in training (Fawaz et al., 2019b); therefore, we implemented a stacked LSTM model and also include it in the comparison. The performance of the baselines alongside our own models are shown in Table 2 in terms of accuracy, to allow comparison with reported values.\nIt can be seen that our models performed best on 7 out of the 11 datasets, achieving an average rank of 1.7, followed by ROCKET, which performed best on 3 datasets and on average ranked 2.3th. The dilation-CNN (Franceschi et al., 2019) and XGBoost, which performed best on the remaining 1 dataset, tied and on average ranked 3.7th and 3.8th respectively. Interestingly, we observe that all datasets on which ROCKET outperformed our model were very low dimensional (specifically, 3-dimensional). Although our models still achieved the second best performance for UWaveGestureLibrary, in general we believe that this indicates a relative weakness of our current models when dealing with very low dimensional time series. As discussed in Section 3.1, this may be due to the problems introduced by a low-dimensional representation space to the attention mechanism, as well as the added positional embeddings; to mitigate this issue, in future work we intend to use a 1D-convolutional layer to extract more meaningful representations of low-dimensional input features (see Section 3.1). 
Conversely, our models performed particularly well on very high-\ndimensional datasets (FaceDetection, HeartBeat, InsectWingBeat, PEMS-SF), and/or datasets with relatively more training samples. As a characteristic example, on InsectWingBeat (which is by far the largest dataset with 30k samples and contains time series of 200 dimensions and highly irregular length) our model reached an accuracy of 0.689, while all other methods performed very poorly - the second best was XGBoost with an accuracy of 0.369. However, we note that our model performed exceptionally well also on datasets with only a couple of hundred samples, which in fact constitute 8 out of the 11 examined datasets.\nFinally, we observe that the pre-trained transformer models performed better than the fully supervised ones in 8 out of 11 datasets, sometimes by a substantial margin.Again, no additional samples were available for unsupervised pre-training, so the benefit appears to originate from reusing the same samples." }, { "heading": "5 CONCLUSION", "text": "In this work we propose a novel framework for multivariate time series representation learning based on the transformer encoder architecture. The framework includes an unsupervised pre-training scheme, which we show that can offer substantial performance benefits over fully supervised learning, even without leveraging additional unlabeled data, i.e., by reusing the same data samples. By evaluating our framework on several public multivariate time series datasets from various domains and with diverse characteristics, we demonstrate that it is currently the best performing method for regression and classification, even for datasets where only a few hundred training samples are available." }, { "heading": "A APPENDIX", "text": "A.1 ADDITIONAL POINTS & FUTURE WORK\nExecution time for training: While a precise comparison in terms of training time is well out of scope for the present work, in Section A.3 of the Appendix we demonstrate that our transformerbased method is economical in terms of its use of computational resources. However, alternative self-attention schemes, such as sparse attention patterns (Li et al., 2019), recurrence (Dai et al., 2019) or compressed (global-local) attention (Beltagy et al., 2020), can help drastically reduce the O(w2) complexity of the self-attention layers with respect to the time series length w, which is the main performance bottleneck.\nImputation and forecasting: The model and training process described in Section 3.3 is exactly the setup required to perform imputation of missing values, without any modifications, and we observed that it was possible to achieve very good results following this method; as a rough indication, our models could reach Root Mean Square Errors very close to 0 when asked to perform the input denoising (autoregressive) task on the test set, after being subjected to unsupervised pre-training on the training set. We also show example results of imputation on one of the datasets presented in this work in Figure 5. However, we defer a systematic quantitative comparison with the state of the art to future work. Furthermore, we note that one may simply use different patterns of masking to achieve different objectives, while the rest of the model and setup remain the same. For example, using a mask which conceals the last part of all variables simultaneously, one may perform forecasting (see Figure 4 in Appendix), while for longer time series one may additionally perform this process within a sliding window. 
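Assuming that only the mask pattern needs to change, the forecasting variant mentioned above could be sketched as follows (the `horizon` argument is an assumed name):

```python
import torch

def forecasting_mask(w: int, m: int, horizon: int) -> torch.Tensor:
    """Mask (0 = hidden, 1 = visible) concealing the last `horizon` time steps of all
    m variables simultaneously, turning the denoising objective into forecasting."""
    mask = torch.ones(w, m)
    mask[w - horizon:, :] = 0.0
    return mask
```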
Again, we defer a systematic investigation to future work.\nExtracted representations: The representations zt extracted by the transformer models can be used directly for evaluating similarity between time series, clustering, visualization and any other use cases where time series representations are used in practice. A valuable benefit offered by transformers is that representations can be independently addressed for each time step; this means that, for example, a greater weight can be placed at the beginning, middle or end of the time series, which allows to selectively compare time series, visualize temporal evolution of samples etc.\nA.2 CRITERIA FOR DATASET SELECTION\nWe select a diverge range of datasets from the Monash University, UEA, UCR Time Series Regression and Classification Archives, in a way so as to ensure diversity with respect to the dimensionality and length of time series samples, as well as the number of samples. Additionally, we have tried to include both ”easy” and ”difficult” datasets (where the baselines perform very well or less well). In the following we provide a more detailed rationale for each of the selected multivariate datasets.\nEthanolConcentration: very low dimensional, very few samples, moderate length, large number of classes, challenging\nFaceDetection: very high dimensional, many samples, very short length, minimum number of classes\nHandwriting: very low dimensional, very few samples, moderate length, large number of classes\nHeartbeat: high dimensional, very few samples, moderate length, minimum number of classes\nJapaneseVowels: very heterogeneous sample length, moderate num. dimensions, very few samples, very short length, moderate number of classes, all baselines perform well\nInsectWingBeat: very high dimensional, many samples, very short length, moderate number of classes, very challenging\nPEMS-SF: extremely high dimensional, very few samples, moderate length, moderate number of classes\nSelfRegulationSCP1: Few dimensions, very few samples, long length, minimum number of classes, baselines perform well\nSelfRegulationSCP2: similar to SelfRegulationSCP1, but challenging\nSpokenArabicDigits: Moderate number of dimensions, many samples, very heterogeneous length, moderate number of classes, most baselines perform well\nUWaveGestureLibrary: very low dimensional, very few samples, moderate length, moderate number of classes, baselines perform well\nA.3 EXECUTION TIME\nWe recorded the times required for training our fully supervised models until convergence on a GPU, as well as for the currently fastest and top performing (in terms of classification accuracy and regression error) baseline methods, ROCKET and XGBoost on a CPU. These have been shown to be orders of magnitude faster than methods such as TS-CHIEF, Proximity Forest, Elastic Ensembles, DTW and HIVE-COTE, but also deep learning based methods Dempster et al. (2020). Although XGBoost and ROCKET are incomparably faster than the transformer on a CPU, as can be seen in Table 7 in the Appendix, exploiting commercial GPUs and the parallel processing capabilities of a transformer typically enables as fast (and sometimes faster) training times as these (currently fastest available) methods. 
In practice, despite allowing for many hundreds of epochs, using a GPU we never trained our models longer than 3 hours on any of the examined datasets.\nAs regards deep learning models, LSTMs are well known to be slow, as they require O(w) sequential operations (where w is the length of the time series) for each sample, with the complexity per layer scaling as O(N · d2), where d is the internal representation dimension (hidden state size). We refer the reader to the original transformer paper (Vaswani et al., 2017) for a detailed discussion about how tranformers compare to Convolutional Neural Networks in terms of computational efficiency.\nA.4 ADVANTAGES OF TRANSFORMERS\nTransformer models are based on a multi-headed attention mechanism that offers several key advantages and renders them particularly suitable for time series data:\n• They can concurrently take into account long contexts of input sequence elements and learn to represent each sequence element by selectively attending to those input sequence elements which the model considers most relevant. They do so without position-dependent prior bias; this is to be contrasted with RNN-based models: a) even bi-directional RNNs treat elements in the middle of the input sequence differently from elements close to the two endpoints, and b) despite careful design, even LSTM (Long Short Term Memory) and GRU (Gated Recurrent Unit) networks practically only retain information from a limited number of time steps stored inside their hidden state (vanishing gradient problem (Hochreiter, 1998; Pascanu et al., 2013)), and thus the context used for representing each sequence element is inevitably local.\n• Multiple attention heads can consider different representation subspaces, i.e., multiple aspects of relevance between input elements. For example, in the context of a signal with two frequency components, 1/T1 and 1/T2 , one attention head can attend to neighboring time points, while another one may attend to points spaced a period T1 before the currently examined time point, a third to a period T2 before, etc. This is to be contrasted with attention mechanisms in RNN models, which learn a single global aspect/mode of relevance between sequence elements.\n• After each stage of contextual representation (i.e., transformer encoder layer), attention is redistributed over the sequence elements, taking into account progressively more abstract representations of the input elements as information flows from the input towards the output. By contrast, RNN models with attention use a single distribution of attention weights to extract a representation of the input, and most typically attend over a single layer of representation (hidden states).\nA.5 HYPERPARAMETERS" } ]
2020
null
SP:2fe9ca0b44e57587b94159cb8fa201f79c13db50
[ "In this paper, the authors proposed a novel reparameterization framework of the last network layer that takes semantic hierarchy into account. Specifically, the authors assume a predefined hierarchy graph, and model the classifier of child classes as a parent classifier plus offsets $\\delta$ recursively. The authors show that such hierarchy can be parameterized a matrix multiplication $\\Delta \\mathbf{H}$ where $\\mathbf{H}$ is predefined by the graph. In addition, the authors further propose to fix the norm of $\\delta$ in a decaying manner with respect to path length. The resulting spherical objective is optimized via Riemannian gradient descent." ]
This paper considers classification problems with hierarchically organized classes. We force the classifier (hyperplane) of each class to belong to a sphere manifold, whose center is the classifier of its super-class. Then, individual sphere manifolds are connected based on their hierarchical relations. Our technique replaces the last layer of a neural network by combining a spherical fully-connected layer with a hierarchical layer. This regularization is shown to improve the performance of widely used deep neural network architectures (ResNet and DenseNet) on publicly available datasets (CIFAR100, CUB200, Stanford dogs, Stanford cars, and Tiny-ImageNet).
[]
[ { "authors": [ "P.-A. Absil", "R. Mahony", "R. Sepulchre" ], "title": "Optimization Algorithms on Matrix Manifolds", "venue": null, "year": 2007 }, { "authors": [ "Gregor Bachmann", "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Constant curvature graph convolutional networks", "venue": "arXiv preprint arXiv:1911.05076,", "year": 2019 }, { "authors": [ "Kayhan Batmanghelich", "Ardavan Saeedi", "Karthik Narasimhan", "Sam Gershman" ], "title": "Nonparametric spherical topic modeling with word embeddings", "venue": "In Proceedings of the conference. Association for Computational Linguistics. Meeting,", "year": 2016 }, { "authors": [ "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Riemannian adaptive optimization methods", "venue": "arXiv preprint arXiv:1810.00760,", "year": 2018 }, { "authors": [ "Alsallakh Bilal", "Amin Jourabloo", "Mao Ye", "Xiaoming Liu", "Liu Ren" ], "title": "Do convolutional neural networks learn class hierarchy", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2017 }, { "authors": [ "S. Bonnabel" ], "title": "Stochastic gradient descent on riemannian manifolds", "venue": "IEEE Transactions on Automatic Control,", "year": 2013 }, { "authors": [ "Nicolas Boumal" ], "title": "An introduction to optimization on smooth manifolds", "venue": "Available online,", "year": 2020 }, { "authors": [ "Michael M Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Lijuan Cai", "Thomas Hofmann" ], "title": "Hierarchical document categorization with support vector machines", "venue": "In Proceedings of the thirteenth ACM international conference on Information and knowledge management,", "year": 2004 }, { "authors": [ "Lijuan Cai", "Thomas Hofmann" ], "title": "Exploiting known taxonomies in learning overlapping concepts", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Ines Chami", "Zhitao Ying", "Christopher Ré", "Jure Leskovec" ], "title": "Hyperbolic graph convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Christopher De Sa", "Albert Gu", "Christopher Ré", "Frederic Sala" ], "title": "Representation tradeoffs for hyperbolic embeddings", "venue": "Proceedings of machine learning research,", "year": 2018 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. 
Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Lun Du", "Zhicong Lu", "Yun Wang", "Guojie Song", "Yiming Wang", "Wei Chen" ], "title": "Galaxy network embedding: A hierarchical community structure preserving approach", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Susan Dumais", "Hao Chen" ], "title": "Hierarchical classification of web content", "venue": "In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval,", "year": 2000 }, { "authors": [ "Siddarth Gopal", "Yiming Yang", "Alexandru Niculescu-Mizil" ], "title": "Regularization framework for large scale hierarchical classification", "venue": "Proceedings of European Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Albert Gu", "Frederic Sala", "Beliz Gunel", "Christopher Ré" ], "title": "Learning mixed-curvature representations in product spaces", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Aditya Khosla", "Nityananda Jayadevaprakash", "Bangpeng Yao", "Li Fei-Fei" ], "title": "Novel dataset for fine-grained image categorization", "venue": "In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2011 }, { "authors": [ "Max Kochurov", "Rasul Karimov", "Serge" ], "title": "Kozlukov. 
Geoopt: Riemannian optimization in pytorch", "venue": null, "year": 2020 }, { "authors": [ "Daphne Koller", "Mehran Sahami" ], "title": "Hierarchically classifying documents using very few words", "venue": "Technical report, Stanford InfoLab,", "year": 1997 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3d object representations for fine-grained categorization", "venue": "In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia,", "year": 2013 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Qi Liu", "Maximilian Nickel", "Douwe Kiela" ], "title": "Hyperbolic graph neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Andrew McCallum", "Ronald Rosenfeld", "Tom M Mitchell", "Andrew Y Ng" ], "title": "Improving text classification by shrinkage in a hierarchy of classes", "venue": "In ICML,", "year": 1998 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Maximillian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yehonatan Sela", "Moti Freiman", "Elia Dery", "Yifat Edrei", "Rifaat Safadi", "Orit Pappo", "Leo Joskowicz", "Rinat Abramovitch" ], "title": "fmri-based hierarchical svm model for the classification and grading of liver fibrosis", "venue": "IEEE transactions on biomedical engineering,", "year": 2011 }, { "authors": [ "Ondrej Skopek", "Octavian-Eugen Ganea", "Gary" ], "title": "Bécigneul. Mixed-curvature variational autoencoders", "venue": "arXiv preprint arXiv:1911.08411,", "year": 2019 }, { "authors": [ "Alexandru Tifrea", "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Poincaré glove: Hyperbolic word embeddings", "venue": "arXiv preprint arXiv:1810.06546,", "year": 2018 }, { "authors": [ "Ivan Vendrov", "Ryan Kiros", "Sanja Fidler", "Raquel Urtasun" ], "title": "Order-embeddings of images and language", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Ke Wang", "Senqiang Zhou", "Shiang Chen Liew" ], "title": "Building hierarchical classifiers using class proximity", "venue": "In VLDB,", "year": 1999 }, { "authors": [ "Andreas S Weigend", "Erik D Wiener", "Jan O Pedersen" ], "title": "Exploiting hierarchy in text categorization", "venue": "Information Retrieval,", "year": 1999 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. 
Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Pengtao Xie", "Yuntian Deng", "Yi Zhou", "Abhimanu Kumar", "Yaoliang Yu", "James Zou", "Eric P Xing" ], "title": "Learning latent space models with angular constraints", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A Efros" ], "title": "Generative visual manipulation", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Applying inductive biases or prior knowledge to inference models is a popular strategy to improve their generalization performance (Battaglia et al., 2018). For example, a hierarchical structure is found based on the similarity or shared characteristics between samples and thus becomes a basic criterion to categorize particular objects. The known hierarchical structures provided by the datasets (e.g., ImageNet (Deng et al., 2009) classified based on the WordNet graph; CIFAR100 (Krizhevsky, 2009) in ten different groups) can help the network identify the similarity between the given samples.\nIn classification tasks, the final layer of neural networks maps embedding vectors to a discrete target space. However, there is no mechanism forcing similar categories to be distributed close to each other in the embedding. Instead, we may observe classes to be uniformly distributed after training, as this simplifies the separation by the last fully-connected layer. This behavior is a consequence of seeing the label structure as ‘flat,’ i.e., when we omit to consider the hierarchical relationships between classes (Bilal et al., 2017).\nTo alleviate this problem, in this study, we force similar classes to be closer in the embedding by forcing their hyperplanes to follow a given hierarchy. One way to realize that is by making children nodes dependent on parent nodes and constraining their distance through a regularization term. However, the norm itself does not give a relevant information on the closeness between classifiers. Indeed, two classifiers are close if they classify two similar points in the same class. This means similar classifiers have to indicate a similar direction. Therefore, we have to focus on the angle between classifiers, which can be achieved through spherical constraints.\nContributions. In this paper, we propose a simple strategy to incorporate hierarchical information in deep neural network architectures with minimal changes to the training procedure, by modifying only the last layer. Given a hierarchical structure in the labels under the form of a tree, we explicitly force the classifiers of classes to belong to a sphere, whose center is the classifier of their super-class, recursively until we reach the root (see Figure 2). We introduce the spherical fully-connected layer and the hierarchically connected layer, whose combination implements our technique. Finally, we investigate the impact of Riemannian optimization instead of simple norm normalization.\nBy its nature, the proposed technique is quite versatile because the modifications only affect the structure of last fully-connected layer of the neural network. Thus, it can be combined with many other strategies (like spherical CNN from Xie et al. (2017), or other deep neural network architectures).\nRelated works. Hierarchical structures are well-studied, and their properties can be effectively learned using manifold embedding. The design of the optimal embedding to learn the latent hierarchy\nis a complex task, and was extensively studied in the past decade. For example, Word2Vec (Mikolov et al., 2013b;a) and Poincaré embedding (Nickel & Kiela, 2017) showed a remarkable performance in hierarchical representation learning. (Du et al., 2018) forced the representation of sub-classes to “orbit” around the representation of their super-class to find similarity based embedding. 
Recently, using elliptical manifold embedding (Batmanghelich et al., 2016), hyperbolic manifolds (Nickel & Kiela, 2017; De Sa et al., 2018; Tifrea et al., 2018), and a combination of the two (Gu et al., 2019; Bachmann et al., 2019), shown that the latent structure of many data was non-Euclidean (Zhu et al., 2016; Bronstein et al., 2017; Skopek et al., 2019). (Xie et al., 2017) showed that spheres (with angular constraints) in the hidden layers also induce diversity, thus reducing over-fitting in latent space models.\nMixing hierarchical information and structured prediction is not new, especially in text analysis (Koller & Sahami, 1997; McCallum et al., 1998; Weigend et al., 1999; Wang et al., 1999; Dumais & Chen, 2000). Partial order structure of the visual-semantic hierarchy is exploited using a simple order pair with max-margin loss function in (Vendrov et al., 2016). The results of previous studies indicate that exploiting hierarchical information during training gives better and more resilient classifiers, in particular when the number of classes is large (Cai & Hofmann, 2004). For a given hierarchy, it is possible to design structured models incorporating this information to improve the efficiency of the classifier. For instance, for support vector machines (SVMs), the techniques reported in (Cai & Hofmann, 2004; 2007; Gopal et al., 2012; Sela et al., 2011) use hierarchical regularization, forcing the classifier of a super-class to be close to the classifiers of its sub-classes. However, the intuition is very different in this case, because SVMs do not learn the embedding.\nIn this study, we consider that the hierarchy of the class labels is known. Moreover, we do not change prior layers of the deep neural network, and only work on the last layer that directly contributed to build hyperplanes for a classification purpose. Our work is thus orthogonal to those works on embedding learning, but not incompatible.\nComparison with hyperbolic/Poincaré/graph networks. Hyperbolic network is a recent technique that shows impressive results for hierarchical representation learning. Poincaré networks (Nickel & Kiela, 2017) were originally designed to learn the latent hierarchy of data using low-dimension embedding. To alleviate their drawbacks due to a transductive property which cannot be used for unseen graph inference, hyperbolic neural networks equipped set aggregation operations have been proposed (Chami et al., 2019; Liu et al., 2019). These methods have been mostly focused on learning embedding using a hyperbolic activation function for hierarchical representation. Our technique is orthogonal to these works: First, we assume that the hierarchical structure is not learnt but already known. Second, our model focuses on generating individual hyperplanes of embedding vectors given by the network architecture. While spherical geometry has a positive curvature, moreover, that of hyperbolic space has a constant negative curvature. However, our technique and hyperbolic networks are not mutually exclusive. Meanwhile focusing on spheres embedded in Rd in this study, it is straightforward to consider spheres embedded in hyperbolic spaces." }, { "heading": "2 HIERARCHICAL REGULARIZATION", "text": "2.1 DEFINITION AND NOTATIONS\nWe assume we have samples with hierarchically ordered classes. 
For instance, apple, banana, and orange are classes that may belong to the super-class “fruits.” This represents hierarchical relationships with trees, as depicted in Figure 1.\nWe identify nodes in the graph through the path taken in the tree. To represent the leaf (highlighted in blue in Figure 1), we use the notation n{1,3,2}. This means it is the second child of the super-class n{1,3}, and recursively, until we reach the root.\nMore formally, we identify nodes as np, where p is the path to the node. A path uniquely defines a node where only one possible path exists. Using the concatenation, between the path p and its child i, a new path p̃ can be defined as follows,\np̃ = 〈p, i〉 (1)\nWe denote P the set of all paths in the tree starting from the root, with cardinality |P|. Notice that |P| is also the number of nodes in the tree (i.e., number of classes and super-classes). We distinguish the set P from the set L, the set of paths associated to nodes whose label appears in the dataset. Although L may equal to P , this is not the case in our experiments. We show an example in Appendix A." }, { "heading": "2.2 SIMILARITY BETWEEN OBJECTS AND THEIR REPRESENTATION", "text": "Let X be the network input (e.g. an image), and φθ(X) be its representation, i.e., the features of X extracted by a deep neural network parameterized by θ. We start with the following observation:\nGiven a representation, super-class separators should be similar to separators for their sub-classes.\nThis assumption implies the following direct consequence.\nAll objects whose labels belong to the same super-class have a similar representation.\nThat is a natural property that we may expect from a good representation. For instance, two dogs from different breeds should share more common features than that of a dog shares with an apple. Therefore, the parameter of the classifiers that identify dog’s breed should also be similar. Their difference lies in the parameters associated to some specific features that differentiate breeds of dogs.\nAlthough this is not necessarily satisfied with arbitrary hierarchical classification, we observe this in many existing datasets. For instance, Caltech-UCSD Birds 200 and Stanford dogs are datasets that classify, respectively, birds and dogs in term of their breeds. A possible example where this assumption may not be satisfied is a dataset whose super-classes are “labels whose first letter is «·».”" }, { "heading": "2.3 HIERARCHICAL REGULARIZATION", "text": "Starting from a simple observation in the previous section, we propose a regularization technique that forces the network to have similar representation for classes along a path p, which implies having similar representation between similar objects. More formally, if we have an optimal classifier wp for the super-class p and a classifier w〈p,i〉 for the class 〈p, i〉, we expect that\n‖wp − w〈p,i〉‖ is small. (2)\nIf this is satisfied, separators for objects in the same super-class are also similar because\n‖w〈p,i〉 − w〈p,j〉‖ = ‖(w〈p,i〉 − wp)− (w〈p,j〉 − wp)‖ ≤ ‖wp − w〈p,i〉‖︸ ︷︷ ︸ small + ‖wp − w〈p,j〉‖︸ ︷︷ ︸ small . (3)\nHowever, the optimal classifier for an arbitrary representation φθ(X) may not satisfy equation 2. The naive and direct way to ensure equation 2 is through hierarchical regularization, which forces classifiers in the same path to be close to each other." 
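As a concrete illustration of this naive route, the following minimal sketch shows how such an edge-wise penalty could be written in PyTorch; the function and variable names (hier_l2_penalty, parent_of) are ours and not taken from any released implementation, and the toy tree reuses the fruit/animal example of Appendix A.

import torch

# Sum of ||w_child - w_parent||^2 over every edge of the label tree (equations 2-3).
def hier_l2_penalty(node_weights, parent_of):
    # node_weights: (num_nodes, d) tensor, one row per class or super-class separator.
    # parent_of: dict mapping a node index to its parent index (root omitted).
    penalty = node_weights.new_zeros(())
    for child, parent in parent_of.items():
        penalty = penalty + (node_weights[child] - node_weights[parent]).pow(2).sum()
    return penalty

# Toy tree: 0=fruit, 1=animal, 2=apple, 3=orange, 4=cat, 5=dog.
parent_of = {2: 0, 3: 0, 4: 1, 5: 1}
W = torch.nn.Parameter(torch.randn(6, 16))    # six nodes, 16-dimensional separators
reg = 1e-3 * hier_l2_penalty(W, parent_of)    # would be added to the classification loss
reg.backward()
print(W.grad.shape)                            # torch.Size([6, 16])

In practice, the next section replaces this explicit penalty with the δ-parametrization, which achieves the same effect through ordinary weight decay on the δ's.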
}, { "heading": "2.4 HIERARCHICAL LAYER AND HIERARCHICALLY CONNECTED LAYER", "text": "In the previous section, we described the hierarchical regularization technique given a hierarchical structure in the classes. In this section, we show how to conveniently parametrize equation 2. We first express the classifier as a sum of vectors δ defined recursively as follows:\nw〈p,i〉 = wp + δ〈p,i〉, δ{} = 0, (4)\nwhere {} is the root. It is possible to consider δ{} 6= 0, which shifts separating hyper-planes. We do not consider this case in this paper. Given equation 4, we have that ‖δ〈p,i〉‖ is small in equation 2. Finally, it suffices to penalize the norm of δ〈p,i〉 during the optimization. Notice that, by construction, the number of δ’s is equal to the number of nodes in the hierarchical tree.\nNext, consider the output of CNNs for classification,\nφθ(·)TW, (5) where θ denotes the parameters of the hidden layers, W = [w1, . . . , w|L|] denotes the last fullyconnected layer, and wi denotes the separator for the class i. For simplicity, we omit potential additional nonlinear functions, such as a softmax, on top of the prediction.\nWe have parametrized wi following the recursive formula in equation 4. To define the matrix formulation of equation 4, we first introduce the Hierarchical layer H which plays an important role. This hierarchical layer can be identified to the adjacency matrix of the hierarchical graph. Definition 1. (Hierarchical layer). Consider ordering over the sets P and L, i.e., for i = 1, . . . , |P| and j = 1, . . . , |L|,\nP = {p1, . . . , pi, . . . , p|P|} and L = {p1, . . . , pj , . . . , p|L|}. In other words, we associate to all nodes an index. Then, the hierarchical layer H is defined as\nH ∈ B|P|×|L|, Hi, j = 1 if npi npj , 0 otherwise. (6) where npi npj means npj is a parent of npi .\nWe illustrate an example of H in Appendix A. The next proposition shows that equation 5 can be written using a simple matrix-matrix multiplication, involving the hierarchical layer. Proposition 1. Consider a representation φθ(·), where φθ(·) ∈ Rd. LetW be the matrix of separators\nW = [wp1 , . . . , wp|L| ], pi ∈ L, (7) where the separators are parametrized as equation 4. Let ∆ be defined as\n∆ ∈ Rd×|P|, ∆ = [δp1 , . . . , δp|P| ], (8) where P and L are defined in Section 2.1. Consider the hierarchical layer defined in Definition 1. Then, the matrix of separators W can be expressed as\nW = ∆H. (9)\nWe can see W = ∆H as a combination of an augmented fully-connected layer, combined with the hierarchical layer that selects the right columns of ∆, hence the term hierarchically connected layer. The `2 regularization of the δ can be conducted by the parameter weight decay, which is widely used in training of neural networks. The hierarchical layer H is fixed, while ∆ is learnable. This does not affect the complexity of the back-propagation significantly, as ∆H is a simple linear form.\nThe size of the last layer slightly increases, from |L| × d to |P| × d, where d is the dimension of the representation φθ(·). For instance, in the case of Tiny-ImageNet, the number of parameters of the last layer only increases by roughly 36% ; nevertheless, the increased number of parameters of the last layer is still usually negligible in comparison with the total number of parameters for classical network architectures." }, { "heading": "3 HIERARCHICAL SPHERES", "text": "The hierarchical (`2) regularization introduced in the previous section induces separated hyper-planes along a path to be close to each other. 
However, this approach has a significant drawback.\nWe rewind equation 2, which models the similarity of two separators wp and w〈p, i〉. The similarity between separators (individual hyper-planes) should indicate that they point roughly the same direction, i.e., ∥∥∥∥ wp‖wp‖ − w〈p, i〉‖w〈p, i〉‖\n∥∥∥∥ is small. (10) However, this property is not necessarily captured by equation 2. For instance, assume that wp = −w〈p, i〉, i.e., the separators point in two opposite directions (and thus completely different). Then, equation 2 can be arbitrarily small in the function of ‖wp‖ but not in equation 10:∥∥wp − w〈p, i〉∥∥ = 2‖wp‖ ; ∥∥∥∥ wp‖wp‖ − w〈p, i〉‖w〈p, i〉‖ ∥∥∥∥ = 2. (11)\nThis can be avoided, for example, by deploying the regularization parameter (or weight decay) independently for each ‖δp‖. However, it is costly in terms of hyper-parameter estimation. In order to enforce the closeness of embedding vectors whose paths are similar, we penalize large norms of δ. We also want to bound it away from zero to avoid the problem of separators that point in different direction may have a small norm. This naturally leads to a spherical constraint. Indeed, we transform the `2 regularization over δp by fixing its norm in advance, i.e.,\n‖δp‖ = Rp > 0. (12) In other words, we define δp on a sphere of radiusRp. The fully-connected layer ∆ is then constrained on spheres, hence it is named spherical fully-connected layer.\nHence, we have w〈p,i〉 constrained on a sphere centered at wp. This constraint prevents the direction of w〈p,i〉 from being too different from that of wp, while bounding the distance away from zero. This does not add hyperparameters: instead of weight decay, we have the radius Rp of the sphere." }, { "heading": "3.1 RADIUS DECAY W.R.T. PATH LENGTH", "text": "We allow the radius of the spheres, Rp, to be defined as a function of the path. In this study, we use a simple strategy called radius decay, where Rp decreases w.r.t. the path length:\nRp = R0γ |p|, (13)\nwhere R0 is the initial radius, γ is the radius decay parameter, and |p| is the length of the path. The optimal radius decay can be easily found using cross-validation. The radius decay is applied prior to learning (as opposed to weight-decay); then, the radius remains fixed during the optimization. As opposed to weight-decay, whose weight are multiplied by some constant smaller than one after each iteration, the radius decay here depends only on the path length, and the radius remains fixed during the optimization process.\nThe simplest way to apply the radius decay is by using the following predefined diagonal matrix D,\nDi, i = R0γ|pi|, pi ∈ P, 0 otherwise, (14) where pi follows the ordering from Definition 1. Finally, the last layer of the neural network reads,\nφθ(·)︸ ︷︷ ︸ Network ∆DH︸ ︷︷ ︸ Last layer . (15)\nThe only learnable parameter in the last layer is ∆." }, { "heading": "3.2 OPTIMIZATION", "text": "There are several ways to optimize the network in the presence of the spherical fully-connected layer: by introducing the constraint in the model, “naively” by performing normalization after each step, or\nby using Riemannian optimization algorithms. For simplicity, we consider the minimization problem,\nmin θ,∆ f(θ,∆), (16)\nwhere θ are the parameters of the hidden layers, ∆ the spherical fully-connected layer from equation 8, and f the empirical expectation of the loss of the neural network. For clarity, we use noiseless gradients, but all results also apply to stochastic ones. The superscript ·t denotes the t-th iteration." 
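Before turning to the optimization strategies, the following minimal sketch shows how the last layer of equation 15 could be assembled in PyTorch on the toy hierarchy of Appendix A. The class name HierarchicalLastLayer and the path encoding are ours; the columns of ∆ are left unconstrained here, and it is the normalization of Section 3.2.1 or the Riemannian updates of Section 3.2.2 that would actually keep them on the spheres.

import torch
import torch.nn as nn

class HierarchicalLastLayer(nn.Module):
    # logits = phi(x) @ Delta @ D @ H (equation 15), with H from Definition 1
    # and the radius-decay matrix D from equation 14.
    def __init__(self, feat_dim, paths, leaf_paths, R0=1.0, gamma=0.5):
        super().__init__()
        H = torch.zeros(len(paths), len(leaf_paths))
        for j, leaf in enumerate(leaf_paths):
            for i, p in enumerate(paths):
                # H[i, j] = 1 iff node p is leaf j itself or one of its ancestors.
                H[i, j] = float(leaf[:len(p)] == p)
        radii = torch.tensor([R0 * gamma ** len(p) for p in paths])
        self.register_buffer("H", H)                  # fixed hierarchical layer
        self.register_buffer("D", torch.diag(radii))  # fixed radius decay
        self.delta = nn.Parameter(0.01 * torch.randn(feat_dim, len(paths)))  # learnable

    def forward(self, features):                        # features: (batch, feat_dim)
        return features @ self.delta @ self.D @ self.H  # (batch, num_labels)

# Toy hierarchy of Appendix A: paths are tuples from the root to each node.
paths = [("fruit",), ("animal",), ("fruit", "apple"), ("fruit", "orange"),
         ("animal", "cat"), ("animal", "dog")]
layer = HierarchicalLastLayer(feat_dim=16, paths=paths, leaf_paths=paths[2:])
print(layer(torch.randn(4, 16)).shape)                  # torch.Size([4, 4])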
}, { "heading": "3.2.1 INTEGRATION OF THE CONSTRAINT IN THE MODEL", "text": "We present the simplest way to force the column of ∆ to lie on a sphere, as this does not require a dedicated optimization algorithm. It is sufficient to normalize the column of ∆ by their norm in the model. By introducing a dummy variable ∆̃, which is the normalized version of ∆, the last layer of the neural network equation 15 reads\n∆̃ = [ . . . ,\nδp ‖δp‖ , . . .\n] , φθ(·)∆̃DH. (17)\nThen, any standard optimization algorithm can be used for the learning phase. Technically, ∆ is not constrained on a sphere, but the model will act as if ∆ follows such constraint." }, { "heading": "3.2.2 OPTIMIZATION OVER SPHERES: RIEMANNIAN (STOCHASTIC) GRADIENT DESCENT", "text": "The most direct way to optimize over a sphere is to normalize the columns of ∆ by their norm after each iteration. However, this method has no convergence guarantee, and requires a modification in the optimization algorithm. Instead, we perform Riemannian gradient descent which we explain only briefly in this manuscript. We give the derivation of Riemannian gradient for spheres in Appendix B.\nRiemannian gradient descent involves two steps: first, a projection to the tangent space, and then, a retraction to the manifold. The projection step computes the gradient of the function on the manifold (as opposed to the ambient space Rd), such that its gradient is tangent to the sphere. Then, the retraction simply maps the new iterate to the sphere. With this two-step procedure, all directions pointing outside the manifold, (i.e., orthogonal to the manifold, thus irrelevant) are discarded by the projection. These two steps are summarized below,\nst = (δtp) T∇δpf(θt,∆t) · δtp −∇δpf(θt,∆t), δt+1p =\nδtp+h tst\n‖δtp+htst‖ , (18)\nwhere st is the projection of the descent direction to the tangent space, and δt+1p is the retraction of the gradient descent step with stepsize h. In our numerical experiments, we used the Geoopt optimizer (Kochurov et al., 2020), which implements Riemannian gradient descent on spheres." }, { "heading": "4 NUMERICAL EXPERIMENTS", "text": "We experimented the proposed method using five publicly available datasets, namely CIFAR100 (Krizhevsky, 2009), Caltech-UCSD Birds 200 (CUB200) (Welinder et al., 2010), StanfordCars (Cars) (Krause et al., 2013), Stanford-dogs (Dogs) (Khosla et al., 2011), and Tiny-ImageNet (Tiny-ImNet) (Deng et al., 2009). CUB200, Cars, and Dogs datasets are used for fine-grained visual categorization (recognizing bird, dog bleeds, or car models), while CIFAR100 and Tiny-ImNet datasets are used for the classification of objects and animals. Unlike the datasets for object classification, the fine-grained visual categorization datasets show low inter-class variances. See Appendix C.2 and C.3 for more details about the dataset and their hierarchy, respectively." }, { "heading": "4.1 DEEP NEURAL NETWORK MODELS AND TRAINING SETTING", "text": "We used the deep neural networks (ResNet (He et al., 2016) and DenseNet (Huang et al., 2017)). The input size of the datasets CUB200, Cars, Dogs, and Tiny-ImNet is 224× 224, and 32× 32 pixels for CIFAR100. Since the input-size of CIFAR100 does not fit to the original ResNet and DenseNet, we used a smaller kernel size (3 instead of 7) at the first convolutional layer and a smaller stride (1 instead of 2) at the first block.\nRemark: we do not use pretrained networks. All networks are trained from scratch, i.e., we did not use pre-trained models. 
This is because most publicly available pre-trained models used ImageNet for training while Dogs and Tiny-ImNet are parts of ImageNet.\nWe used the stochastic gradient descent (SGD) over 300 epochs, with a mini-batch of 64 and a momentum parameter of 0.9 for training. The learning rate schedule is the same for all experiments, starting at 0.1, then decaying by a factor of 10 after 150, then 255 epochs. All tests are conducted using NVIDIA Tesla V100 GPU with the same random seed. Settings in more detail are provided in the supplementary material. We emphasize that we used the same parameters and learning rate schedule for all scenarios. Those parameters and schedule were optimized for SGD on plain networks, but are probably sub-optimal for our proposed methods." }, { "heading": "4.2 RESULTS", "text": "Tables 1 and 2 show a comparison of the results obtained with several baseline methods and our methods. The first method, “Plain”, is a plain network for subclass classification without hierarchical information. The second one, “Multitask” is simply the plain network with multitask (subclass and super-class classification) setting using the hierarchical information. The third one, “Hierarchy”, uses our parametrizationW = ∆H with the hierarchical layer H, but the columns of ∆ are not constrained on spheres. Then, “+Manifold” means that ∆ is restricted on a sphere using the normalization technique from Section 3.2.1. Finally, “+Riemann” means we used Riemannian optimization from Section 3.2.2. We show the experimental results on fine-grained visual classification (Table 1) and general object classification (Table 2).\nNote that the multitask strategy in our experiment (and contrary to our regularization technique) does require an additional hyper-parameter that combines the two losses, because we train classifiers for super-classes and sub-classes simultaneously." }, { "heading": "4.2.1 FINE-GRAINED CATEGORIZATION", "text": "As shown in Table 1, our proposed parameterization significantly improves the test accuracy over the baseline networks (ResNet-18/50, DenseNet-121/160). Even the simple hierarchical setting which uses the hierarchical layer only (without spheres) shows superior performance compared to the baseline networks. Integrating the manifolds with Riemannian SGD further improves the generalization performance.\nSurprisingly, the plain network with deeper layers shows degraded performance. This can be attributed to overfitting which does not occur with our regularization technique, where larger networks show better performance, indicating the high efficiency of our approach." }, { "heading": "4.2.2 OBJECT CLASSIFICATION", "text": "We show test accuracy (%) of our proposed methods with different network models using CIFAR-100 and Tiny-ImNet, in Table 2. From the table, it can be seen that the proposed method has better accuracy than the baseline methods. Compared to the fine-grained classification datasets, the general object classification datasets have less similar classes on the same super-class. In these datasets, our method achieved relatively small gains.\nA higher inter-class variance may explain the lower improvement compared to fine-grained categorization. Nevertheless, for Tiny-ImNet, e.g., ResNet-18 (11.28M parameters) with our parametrization achieves better classification performance than plain ResNet-50 (23.91M parameters). The same applies to DenseNet-121 and DenseNet-161. 
These results indicate that our regularization technique, which does not introduce new parameters in the embedding layer, can achieve a classification performance similar to that of more complex models." }, { "heading": "4.3 RIEMANNIAN VS. PROJECTED SGD", "text": "Overall, Riemannian SGD showed slightly superior performance compared to projected SGD for fine-grained datasets, although, in most cases, the performance was similar. For instance, with the Dogs dataset on Resnet-50, Riemannian SGD shows a performance 4% higher than the projected SGD. For object classification, Riemannian SGD performs a bit more poorly. We suspect that, owing to the different radius decay parameters (0.5 in Table 1 and 0.9 in Table 2), the learning rate of Riemannian SGD should have been changed to a larger value." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "We presented a simple regularization method for neural networks using a given hierarchical structure of the classes. The method involves of the reformulation of the fully connected layer of the neural network using the hierarchical layer. We further improved the technique using spherical constraints, transforming the last layer into a spherical fully-connected layer. Finally, we compared the optimization of the neural network using several strategies. The reformulation using the hierarchical layer ∆H and the spherical constraint had a considerable impact on the generalization accuracy of the network. The Riemannian optimization had a lower overall impact, showing sometimes significant improvement and sometimes similar to its projected counterpart.\nIn this paper, we used the proposed regularization technique only on classical architectures. In the future, it would be interesting to use it on other architectures, e.g. Inception and SqueezeNet, for embedding, e.g. Poincaré, and other applications, e.g. Natural Language Processing (NLP). Moreover, in this paper, we used a given hierarchy mostly based on taxonomy designed by experts. This hierarchical structure, which is convenient for humans, may not be most convenient for classification algorithms. A self-supervised algorithm that learns classification and the hierarchy may be convenient because we do not need to access a hierarchy and lead to better results (because the structure will be more adapted to the task)." }, { "heading": "A EXAMPLE OF HIERARCHICAL STRUCTURE", "text": "Consider a dataset composed by the following labels: cats, dogs, apple, orange. These labels can be organized trough a hierarchical structure, with super-classes animal and fruit. In such case, the set P is composed by\nP = { {fruit}, {animal}, {fruit, apple}, {fruit, orange}, {animal, cat}, {animal, dog} } ,\nwhile the set L is composed by L = { {fruit, apple}, {fruit, orange}, {animal, cat}, {animal, dog} } .\nThen, its hierarchical layer reads (labels were added to ease the reading)\nH = {fruit, apple} {fruit, orange} {animal, cat} {animal, dog} {fruit} 1 1 0 0 {animal} 0 0 1 1 {fruit, apple} 1 0 0 0 {fruit, orange} 0 1 0 0 {animal, cat} 0 0 1 0 {animal, dog} 0 0 0 1" }, { "heading": "B OPTIMIZATION OVER SPHERES: RIEMANIAN (STOCHASTIC) GRADIENT DESCENT", "text": "We quickly recall some elements of optimization on manifolds, see e.g. Boumal (2020); Absil et al. (2007). For simplicity, we consider the optimization problem\nmin x∈Sd−1 f(x) (19)\nwhere Sd−1 is the sphere manifold with radius one centered at zero and embedded in Rd. 
The generic Riemannian gradient descent with stepsize h reads\nsk = −gradf(xk) (20) xk+1 = Rxk (hsk) (21)\nwhere gradf is the gradient of f on the sphere, which is a vector that belongs to the tangent space TxkSd−1 (plane tangent to the sphere that contains xk), and Rxk is a second-order retraction, i.e., a mapping from the tangent space TxkSd−1 to the sphere Sd−1 that satisfies some smoothness properties. The vector sk (that belongs to the tangent sphere) represents the local descent direction. We illustrate those quantities in Figure 3. Stochastic Riemannian gradient descent directly follows from (Bonnabel, 2013), replacing the gradient by its stochastic version.\nIn the special case of the sphere, we have an explicit formula for the tangent space and its projection, for the Riemannian gradient, and for the retraction:\nTxSd−1 = {y : yTx = 0} ; Px(y) = y − (xT y)x; (22)\ngradf(x) = Px(∇f(x)) ; Rx(y) = x+ y\n‖x+ y‖ . (23)\nThe retraction is not necessarily unique, but this one satisfies all requirement to ensure good convergence properties. The gradient descent algorithm on a sphere thus reads\nsk = ( (xTk∇f(xk) ) xk −∇f(xk) (24)\nxk+1 = xk + hksk ‖xk + hksk‖\n(25)\nIn our case, we have a matrix ∆, whose each column δp belongs to a sphere. It suffices to apply the Riemannian gradient descent separately on each δp. For practical reasons, we used the toolbox Geoopt (Kochurov et al., 2020; Bécigneul & Ganea, 2018) for numerical optimization." }, { "heading": "C NUMERICAL EXPERIMENTS: SUPPLEMENTARY MATERIALS", "text": "" }, { "heading": "C.1 DEEP NEURAL NETWORK MODELS AND TRAINING DETAILS", "text": "We used ResNet which consists of the basic blocks or the bottleneck blocks with output channels [64, 128, 256, 512] in Conv. layers. A dimensionality of an input vector to the FC layer is 512. We used DenseNet which includes hyperparameters such as [“growth rate”, “block configuration”, and “initial feature dimension”] for ’DenseNet-121’ [32, (6, 12, 24, 16), 64] and ‘DenseNet-161’ [48, (6, 12, 36, 24), 96], respectively. A dimensionality of an input vector for DenseNets to the FC layer is 64 and 96.\nParameters in our proposed method using ResNet and DenseNet are optimized using the SGD with several settings: we fixed 1) the weight initialization with Random-Seed number ‘0’ in pytorch, 2) learning rate schedule [0.1, 0.01, 0.001], 3) with momentum 0.9, 4) regularization: weight decay with 0.0001. A bias term in the FC layer is not used. The images (CUB200, Cars, Dogs, and Tiny-ImNet) in training and test sets are resized to 256× 256 size. Then, the image is cropped with 224× 224 size at random location in training and at center location in test. Horizontal flipping is applied in training. The learning rate decay by 0.1 at [150, 225] epochs from an initial value of 0.1. The experiments are conduced using GPU “NVIDIA TESLA V100\". We used one GPU for ResNet-18, and two GPUs for ResNet-50, DenseNet-121, and DenseNet-161." }, { "heading": "C.2 DATASET", "text": "We summarize the important information of the previous datasets in Table 3. The next section describe how we build the hierarchical tree for each dataset." }, { "heading": "C.3 HIERARCHY FOR DATASETS", "text": "In this section, we describe how we build the hierarchy tree for each dataset. We provide also the files containing the hierarchy used in the experiments in the folder Hierarchy_files.\nBefore explaining how we generate the hierarchy, we quickly describe the content of the files. 
Their name follow the pattern DATASETNAME_child_parent_pairs.txt. The first line in the file corresponds to the number of entries. Then, the file is divided into two columns, representing pairs of (child, parent). This means if the pair (n1, n2) exists in the file, the node n2 is the direct parent of the node n1. All labels have been converted into indexes.\nC.3.1 CIFAR100\nThe hierarchy of Cifar100 is given by the authors.\nC.3.2 CUB200\nWe classified the breed of birds into different groups, in function of the label name. For instance, the breeds Black_footed_Albatross, Laysan_Albatross and Sooty_Albatross are classified in the same super-class Albatross." }, { "heading": "C.3.3 STANFORD CARS", "text": "We manually classified the dataset into nine different super-classes: SUV, Sedan, Coupe, Hatchback, Convertible, Wagon, Pickup, Van and Mini-Van. In most cases, the super-class name appears in the name of the label." }, { "heading": "C.3.4 STANFORD DOGS", "text": "The hierarchy is recovered trough the breed presents at the end of the name of each dog specie. For instance, English Setter, Irish Setter, and Gordon Setter are classified under the class Setter." }, { "heading": "C.3.5 (TINY) IMAGENET", "text": "The labels of (tiny-)Imagenet are also Wordnet classes. We used the Wordnet hierarchy to build the ones of (Tiny) Imagenet. There are also two post-processing steps:\n1. Wordnet hierarchy is not a tree, which means one node can have more than one ancestor. The choice was systematic: we arbitrarily chose as unique ancestor the first one in the sorted list.\n2. In the case where a node has one and only one child, the node and its child are merged." }, { "heading": "C.4 GENERALIZATION PERFORMANCE ALONG DIFFERENT RADIUS DECAY VALUES", "text": "In this section, we show in Table 4 how the radius decay affects the test accuracy. In all experiments, we used the Resnet18 architecture with Riemannian gradient descent to optimize the spherical fully-connected layer.\nGlobally, we see that radius decay may influence the accuracy of the network. However, in most cases, the performance is not very sensitive to this parameter. The exception is for tiny-imagenet, where the hierarchy tree has many levels, and thus small values degrade a lot the accuracy." }, { "heading": "C.5 LEARNING RADIUS DECAY", "text": "Here, we replace the diagonal matrix D in equation 14 with a learnable parameter matrix which is trained using backpropagation without an additional constraint or a loss function for simplicity. As shown in Table 5, this learnable radius is not effective the in terms of an classification performance compared to that the predefined radius decay." }, { "heading": "C.6 LEARNING WITH RANDOM HIERARCHY", "text": "As shown in Table 6, the methods with a randomly generated hierarchy showed a degraded performance compared to that with a reasonable hierarchical information." }, { "heading": "C.7 SUPER-CLASS CLASSIFICATION EFFICIENCY", "text": "As shown in Table 7, our proposed methods (Hierarchy, +Manifold, and +Riemann) outperformed a multitask (multilabel) classification method in terms of test accuracy performance. Note that, in the multitask classification, a loss function for classification using superclasses is used additionally.\nC.8 VISUALIZATION OF EMBEDDING VECTORS\nIn this section, we visualized an embedding vector which is an input of the last classification layer. As suggested by the reviewer, first, we show a distribution of two-dimensional vector (R2) learned by the networks. 
To obtain these vectors, we added new layers (i.e. a mapping function Rm 7→ R2, m = 512) prior to the last FC layer of ResNet-18. As two-dimensional vector is not enough to represent a discriminative feature for the fine-grained dataset which have a large number of classes different from MNIST dataset with ten classes with gray level images, we observe multidimensional vectors used in the ResNet. Second, we use t-SNE, to visualize the high dimensional embedding vector (R512) of the original ResNet, which is one of popular methods for exploring high-dimensional vector, introduced by van der Maaten and Hinton. Even though this method is known to have limitation which is highly dependent on hyperparameters such perplexity values, it is still useful to observe distribution of those high dimensional vectors by using fixed hyperparameters. For a deterministic way of visualization on 2D plane regardless of hyperparameters, finally, we visualized the embedding vectors using the traditional dimension reduction technique, namely Principal Component Analysis (PCA). Note that t-SNE and PCA have complementary characteristics (e.g., stochastic vs. deterministic, non-linear vs. linear, capturing local vs. global structures).\n1) Learned two-dimensional representation. We show a distribution of two-dimensional embedding vector which is used for a classification. As shown in Figure 4, embedding vectors of our proposed methods are distributed more closely (clustered) with regard to their superclasses (Figure 4c,4d, and 4e). We used Cars dataset since the number of superclasses (nine super-classes) is small enough to show their distribution clearly. We show the results where the classification performances of all compared methods are similar. We captured the distribution of embedding vectors from an early epoch due to their a slow convergence rate. Two-dimensional vector seems too small to classify images of 196 classes which is highly non-linearly distributed.\n2) Observation of locality preserving structure via t-SNE. We visualize high-dimension embedding vector extracted from ResNet-18 which is mapped into two-dimensional space by preserving local pairwise relationship of those vectors, using t-Distributed Stochastic Neighboring Entities (tSNE). As in Figure 5, embedding vectors of our proposed methods are clearly clustered (Figure 5c,5d, and 5e) with regard to their superclasses compared to that of the baseline methods (Figure 5a and 5b).\n3) Observation of global structure via PCA. Using PCA, we capture a global structure of high dimensional embedding vectors by projection onto two-dimensional space. As shown in Figure 6, embedding vectors extracted from ResNet-18 of our proposed methods have less-Gaussian shape distribution (Figure 6d and 6e) with regard to their superclasses than that of the baseline methods (Figure 6a and 5b). We observe that individual distribution of each classes is similar to that of Figure 4." } ]
2020
CONNECTING SPHERE MANIFOLDS HIERARCHICALLY
SP:cb6afa05735201fecf8106b77c2d0a883d5cd996
[ "This paper investigates the role of pre-training as an initialization for meta-learning for few-shot classification. In particular, they look at the extent to which the pre-trained representations are disentangled with respect to the class labels. They hypothesize that this disentanglement property of those representations is responsible for their utility as the starting point for meta-learning. Motivated by this, they design a regularizer to be used during the pre-training phase to encourage this disentanglement to be even more prominent with the hope that this pre-trained solution is now closer to the optimal one, thus requiring less additional episodic training which is time-consuming. They show experimentally that their modified pre-training phase sometimes leads to better results as an initialization for Prototypical Networks compared to the standard pre-trained solution, and sometimes converges faster." ]
Few-shot learning aims to classify examples from unknown classes given only a few labeled examples per class. There are two key routes for few-shot learning. One is to (pre)train a classifier with examples from known classes, and then transfer the pre-trained classifier to unknown classes using the new examples. The other, called meta few-shot learning, is to couple pre-training with episodic training, which contains episodes of few-shot learning tasks simulated from the known classes. Pre-training is known to play a crucial role for the transfer route, but the role of pre-training for the episodic route is less clear. In this work, we study the role of pre-training for the episodic route. We find that pre-training mainly serves to disentangle the representations of the known classes, which makes the resulting learning tasks easier for episodic training. The finding allows us to shift the huge simulation burden of episodic training to a simpler pre-training stage. We justify the benefit of such a shift by designing a new disentanglement-based pre-training model, which helps episodic training achieve competitive performance more efficiently.
[]
[ { "authors": [ "Luca Bertinetto", "Joao F. Henriques", "Philip Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yinbo Chen", "Xiaolong Wang", "Zhuang Liu", "Huijuan Xu", "Trevor Darrell" ], "title": "A New Meta-Baseline for Few-Shot Learning", "venue": "arXiv e-prints, art", "year": 2020 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L. Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Guneet Singh Dhillon", "Pratik Chaudhari", "Avinash Ravichandran", "Stefano Soatto" ], "title": "A baseline for few-shot image classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nicholas Frosst", "Nicolas Papernot", "Geoffrey E. Hinton" ], "title": "Analyzing and improving representations with the soft nearest neighbor loss", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "cifar100. URL http://www.cs.toronto. edu/ ̃kriz/cifar.html", "venue": null, "year": 2019 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Hang Qi", "Matthew Brown", "David G. Lowe" ], "title": "Low-shot learning with imprinted weights", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Siyuan Qiao", "Chenxi Liu", "Wei Shen", "Alan L. Yuille" ], "title": "Few-shot image recognition by predicting parameters from activations", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Chelsea Finn", "Sham M Kakade", "Sergey Levine" ], "title": "Meta-learning with implicit gradients", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Andrei A. 
Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ruslan Salakhutdinov", "Geoff Hinton" ], "title": "Learning a nonlinear embedding by preserving class neighbourhood structure", "venue": "Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics,", "year": 2007 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip H.S. Torr", "Timothy M. Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Utku Evci", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol", "Hugo Larochelle" ], "title": "Metadataset: A dataset of datasets for learning to learn from few examples", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "koray kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "Advances in Neural Information Processing Systems", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep learning methods have outperformed most of the traditional methods in supervised learning, especially in image classification. However, deep learning methods generally require lots of labeled data to achieve decent performance. Some applications, however, do not have the luxury to obtain lots of labeled data. For instance, for bird classification, an ornithologist typically can only obtain a few pictures per bird species to update the classifier. Such needs of building classifiers from limited labeled data inspire some different research problems, including the few-shot learning problem (Finn et al., 2017; Snell et al., 2017; Rajeswaran et al., 2019; Oreshkin et al., 2018; Vinyals et al., 2016; Lee et al., 2019). In particular, few-shot learning starts with a training dataset that consists of data points for “seen” classes, and is required to classify “unseen” ones in the testing phase accurately based on limited labeled data points from unseen classes.\nCurrently, there are two main frameworks, meta-learning (Finn et al., 2017; Snell et al., 2017; Chen et al., 2019) and transfer learning (Dhillon et al., 2020), that deal with the few-shot learning problem. For transfer learning, the main idea is to train a traditional classifier on the meta-train dataset. In the testing phase, these methods finetune the model on the limited datapoints for the labeled novel classes. For meta-learning frameworks, their main concept is episodic training (Vinyals et al., 2016). For the testing phase of few-shot learning, the learning method is given N novel classes, each containing K labeled data for fine-tuning and Q query data for evaluation. Unlike transfer learning algorithms, episodic training tries to simulate the testing literature in the training phase by sampling episodes in training dataset.\nIn these two years, some transfer-learning methods (Dhillon et al., 2020) with sophisticated design in the finetuning part have a competitive performance to the meta-learning approaches. Moreover, researchers (Lee et al., 2019; Sun et al., 2019; Chen et al., 2019; Oreshkin et al., 2018) have pointed out that combining both the global classifier (pre-training part) in the transfer learning framework and the episodic training concept for the meta-learning framework could lead to better performance. Yet, currently most of the attentions are on the episodic training part (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Oreshkin et al., 2018; Sun et al., 2019; Lee et al., 2019) and the role of pre-training is still vague.\nMeta-learning and pre-training has both improved a lot in the past few years. However, most of the works focus on the accuracy instead of the efficiency. For meta-learning, to make the progress more efficient, an intuitive way is to reduce the number of episodes. Currently, there are only limited researches (Sun et al., 2019) working on reducing the number of episodes. One of the methods (Chen et al., 2019; Lee et al., 2019) is to apply a better weight initialization method, the one from pre-training, instead of the random initialization. Another method (Sun et al., 2019) is to mimic how people learn. For example, when we are learning dynamic programming, given a knapsack problem with simple constraint and the one with strong constraint, we will learn much more when we solve the problem with strong constraint. Sun et al. 
(2019) followed the latter idea and crafted the hard episode to decrease amount of necessary episodes.\nIn this work, we study the role of pre-training in meta few-shot learning. We study the pre-training from the disentanglement of the representations. Disentanglement is the property that whether the datapoints within different classes has been mixed together. Frosst et al. (2019) pointed out that instead of the last layer of the model all representations after other layers were entangled. The last layer does the classifier and the rest captures some globally shared information. By analyzing the disentanglement property of episodic training, though the pre-training gives a better representation that benefits the episodic training, the representation becomes more disentangled after episodic training. That is to say, episodic training has spent some effort on making the representation more disentangled. Benefited from the understanding, we design a sophisticated pre-training method that is more disentangled and helps episodic training achieve competitive performance more efficiently. With our pre-training loss, the classical meta-learning algorithm, ProtoNet (Snell et al., 2017), achieves competitive performance to other methods. Our study not only benefits the episodic training but also points out another direction to sharpen and speed-up episodic training.\nTo sum up, there are three main contributions in this work:\n1. A brief study of the role of pre-training in episodic training.\n2. A simple regularization loss that sharpens the classical meta-learning algorithms.\n3. A new aspect for reducing the necessary episodic training episodes." }, { "heading": "2 RELATED WORK", "text": "Few-shot learning tries to mimic the human ability to generalize to novel classes with limited datapoints. In the following, we briefly introduce the recent progress of the transfer-learning framework and two categories of the meta-learning framework. Afterward, we give a brief introduction of the not well studied episode efficiency problem." }, { "heading": "2.1 TRANSFER-LEARNING FRAMEWORK", "text": "In the training phase, the transfer-learning framework trains a classifier on the general classification task across all base classes instead of utilizing episodic training. And for the testing phase, transferlearning methods finetune the model with the limited labeled data. There are several kinds of tricks. Qi et al. (2018) proposed a method to append the mean of the embedding with a given class as a final layer of the classifier. Qiao et al. (2018) used the parameter of the last activation output to predict the classifier for novel classes dynamically. Gidaris & Komodakis (2018) proposed a similar concept with Qiao et al. (2018). They also embedded the weight of base classes during the novel class prediction. Moreover, they introduced an attention mechanism instead of directly averaging among the parameters of each shot. Besides embedding base classes weight to the final classifier, Dhillon et al. (2020) utilized label propagation by the uncertainty on a single prediction to prevent overfitting in the finetune stage, which is quite similar to the classical classification tasks." }, { "heading": "2.2 META-LEARNING FRAMEWORK", "text": "For meta-learning like framework, the main concepts are learning to learn and episodic training (Vinyals et al., 2016). Learning to learn refers to learn from a lot of tasks to benefit the new task learning. 
To prevent confusion, the original train and test phase are regarded as “meta-train” and “meta-test”. The term “train” and “test” would be referred to the one in each small task. Episodic\ntraining is the process of mimicking the task structure in meta-test during training. If the meta-test phase consists of K support examples and Q query examples from N classes, then we will sample lots of tasks that also have K support examples and Q query examples from N classes in the metatrain phase. Meta-learning algorithms have developed rapidly in recent years. We briefly categorize them into two categories, optimization-based methods, metric-based methods.\nOptimization-based Method Optimization-based methods try to get an embedding that could easily fit subtasks by adding some extra layers. Finn et al. (2017) proposed MAML (Model-Agnostic Meta-Learning), which got a network that was the closest to all the best model in the low-way lowshot tasks. However, MAML might be quite inefficient due to the computation of Hessian matrix. To leverage the issue, iMAML (Rajeswaran et al., 2019) and foMAML (Finn et al., 2017) provides different approximation to avoid the heavy computation. However, MAML still suffers from the high dimensional overfitting issue. LEO model (Rusu et al., 2019) solves the overfitting issue by learning a low dimensional latent space. Instead of aiming to get an embedding that could benefit the latter fully connected layer, MetaOptNet (Lee et al., 2019) aims to get an embedding that could benefit the latter differentiable support vector machine.\nMetric-based Method Instead of learning an embedding that could benefit the latter additional layer, metric-based methods aim to get an embedding that could easily classify the classes by simple metrics. Matching Networks (Vinyals et al., 2016) conducts a cosine similarity metric with a Full Context Encoding module. Prototypical Networks (Snell et al., 2017) replaces the cosine similarity metric with the squared Euclidean distance and computes the mean of the embedding in the supportset as the prototype. Relation Network (Sung et al., 2018) embeds a relation module in the learning metric. Instead of using a consistent metric in the task, TADAM (Oreshkin et al., 2018) designs a task-dependent metric that could dynamically fit new class combinations." }, { "heading": "2.3 MIXED FRAMEWORK", "text": "Some recent works have found that using a global pre-trained classifier as the initialization weight could lead to a better meta-learning result. Sun et al. (2019) used the pre-trained classifier weight as initialization weight and launched a simple gradient-based with restricting the learning process as shift and scale. Meta-Baseline (Chen et al., 2020) also follows the initialization literature and applies cosine similarity metric for the following learning process. Chen et al. (2019) changed the original pre-trained network structure into a weight imprinting taste and a simple gradient-based method for the episodic training part. Triantafillou et al. (2020) also utilized the pre-trained initialization and derived a combination between MAML Finn et al. (2017) and ProtoNet (Snell et al., 2017)." }, { "heading": "2.4 EPISODE REDUCTION METHODS", "text": "Recent researchers have found that a pre-trained classifier leads to better meta-learning results. On the other hand, we could reduce the amount of episodes by using a pre-trained classifier. 
Besides utilizing the pre-training weight to reduce the number of episodes, Meta Transfer Learning (Sun et al., 2019) proposes the concept of hard episode. For each normal episode, MTL adds the class with the worst performance in a pool. After collecting for a while, MTL creates hard episodes by sampling from the pool.\nInstead of crafting hard episodes, our approach tries to utilize more in the pre-training phase. We propose a simple regularization that could reduce the difference between the embeddings of the pretrained classifier and the episodic training one. It has significantly reduced the number of episodes and achieves a similar (even better) performance for the original algorithms. Moreover, for shallow and deep backbones, it increases the final accuracy." }, { "heading": "3 METHODOLOGY", "text": "No matter in the route of transferring the classifier or the route of episodic training, pre-training serves a crucial role. And in meta few-shot learning, pre-training provides an initialization weight for further episodic training. In recent episodic-based methods, the pre-training model is split into two parts, backbone and classifier. The last linear layer serves as the classifier and maps the embedding\nto logits. Others work as the backbone and transform the raw photo into embedding. After pretraining, the classifier part is directly dropped, since the target classes may be unseen or the order may have changed. Though the split is quite naive, the afterward episodic learning converges faster and better based on the initialization. Thus, previous works conclude that pre-training provides a better representation. However, what makes it better is not clear. What is the role of pre-training in meta few-shot learning? More specifically, what is the character of backbone in pre-training and episodic training?\nIn ths section, we choose prototypical network (Snell et al., 2017) as the representative. Following the analysis of general learning literature by Frosst et al. (2019), we utilize the similar loss to measure the disentanglement and entanglement property of the backbone in pre-training and episodic training. Benefited from our observation, we give an attempt to transfer the computation burden of episodic training by adding an sophisticated loss in pre-training." }, { "heading": "3.1 NOTATION", "text": "In a N -way K-shot few-shot classification task, we are given a small support set of NK labeled data S = {(x1, y1), ..., (xNK , yNK)} where xi ∈ RD and yi ∈ {1, ..., N}. Sn denotes the set of examples labeled with class n." }, { "heading": "3.2 PROTOTYPICAL NETWORK", "text": "Prototypical Network (Snell et al., 2017) is one of the classical metric-based meta few-shot learning algorithms. First, it computes prototypes ci, which are M -dimensional representations ci ∈ RM by averaging the output of the embedding function fθ : RD → RM from the same class set Si.\nci = 1 |Si| ∑\n(x,y)∈Si\nfθ(x) (1)\nThen, the prototypical network calculates a probability distribution for a given data x by softmax over the Euclidean distance d between each prototype and the embedding fθ(x).\npθ(y = n|x) = exp(−d(fθ(x), cn))∑\nn′∈{1,...,N} exp(−d(fθ(x), cn′))\n(2)" }, { "heading": "3.3 SOFT-NEAREST NEIGHBOR LOSS", "text": "”Soft-Nearest Neighbour loss [...] is proposed by Frosst et al. (2019)” Frosst et al. credits Salakhutdinov and Hinton (2007) for the soft nearest neighbour loss, which itself draws inspiration from Goldberger et al. 
(2005)’s Neighbourhood Component Analysis.\nSoft-Nearest Neighbor Loss (SNN-Loss, Eq.3) is first proposed by Salakhutdinov & Hinton (2007). Then, Frosst et al. (2019) applies the loss to analyze disentanglement of layer. The loss measures the disentanglement property of a representation. Higher loss means more entangled representation. In the original study, general supervised learning, which is the pre-training part in meta few-shot learning, has an entanglement trend in all the layers instead of the last layer. On the other hand, in meta few-shot learning, especially metric-based methods, after the pre-training phase, the last layer is replaced with a metric. This leads to an interesting question. What should the afterward last layer be, entangled or disentangled? By the experiment in Sec.4.4, we find that with sufficient amount of pre-training the representation is less entangled after episodic learning. In other words, the metric prefers a more disentangled representation.\nSoft-Nearest-Neighbor-Loss := −1 b ∑ i∈{1..b}\n∑ j∈{1..b},j 6=i,yi=yj exp(− ∥∥xi − xj∥∥2)∑\nn∈{1..b},n6=i exp(−‖xi − xn‖2)\n(3)\nAlgorithm 1 Regularized Pre-trained Prototypical Network (RP-Proto) Require: Training set D = {(x11,y11), ...., (xNK ,yNK)}, where each ynk = n.\n1: M : metric for judging distance between two vector. 2: fθ: stands for the feature extractor which may output e dimension vector. 3: g: maps the output of fθ to k dimension. 4: b: batch sizes for pretraining. 5: p: pretrained iteration number. 6: α: weight vector for the regularization loss. 7: L: cross entropy loss 8: Initialize Wi with e dimension for i ∈ {1, ...,K} 9: for i ∈ range(p) do\n10: (X,Y )← sample(D, b); 11: lc ← L(g(fθ(X)), Y ) 12: lreg ← 1b ∑ (x,y)∈(X,Y) M(fθ(x),Wy) 13: ltolal ← lc + α× lr 14: fθ, g ← backprop(ltotal, (fθ, g)) 15: end for 16: fθ ← ProtoNet(fθ, D) 17: return fθ" }, { "heading": "3.4 REGULARIZED PRE-TRAINED PROTOTYPICAL NETWORK (RP-PROTO)", "text": "In the previous section, we conclude that the last layer in the backbone would be more disentangled after episodic training. As a result, we wonder whether providing a more disentangled representation of the penultimate layer in the pre-training part could speed up the later episodic training. A naive way to provide a more disentangled representation may be transferring the loss in the episodic training part to pre-training. However, the naive method slows down the pre-training part due to the episode construction. Also, it may require an additional sampler to craft the episode task. To prevent from additional sampler, we could directly compute the episodic training loss among each batch during pre-training. However, the naive method suffers from three critical issues. First, if the batch size is not in the scale of the number of pre-training classes, some class may not exist in the batch. Second, the amount of each class may be non-uniform. Third, the computation of the gradient of a single instance may mix up with other instance in the batch, which makes parallelization harder.\nTo deal with the above issues, we introduce a surrogate loss, which aims for an embedding with more disentanglement. We capture the disentanglement representation from another aspect, instances within the same class should gather together in the embedding space. 
The surrogate loss lreg is based on the distance between fθ(x) and Wy , where Wy is an M -dimensional learnable parameter.\nlreg(x, y) = d(fθ(x),Wy) (4)\nFor the pre-training phase on the ordinary classification task, fθ corresponds to the network right before mapping to a final K-dimension embedding. Then, we add lreg to the ordinary classification loss with a multiplying weight of α.\nltotal = lclassification(x,y) + α× lreg(x,y) (α = 10C) (5)\nWy is the surrogate mean of each class. Instead of directly summing up all the embeddings, Wy could be calculated by backpropagation. Moreover, when Wy equals to zero, it could be considered as an L2 regularization on the feature space, which scales the embedding. When Wy is learnable, it makes each instance in the same class closer which satisfies the our goal of disentangled representation. We have described the detail in Algo.1.\nRegarding the flaws of the naive approach, in this auxiliary loss schema, a data point doesn’t need to consider the distance of data points in other classes, As a result, it could avoid the large computation effort of the cross-classes datapoints distance and the non-uniform amounts of datapoints issue." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASET DETAILS", "text": "MiniImagenet Proposed by Vinyals et al. (2016) is a widely used benchmark for few-shot learning. It is a subset of ILSVRC-2015 (Deng et al., 2009) with 100 classes, 64 for meta-train, 16 for meta-validation and 20 for meta-test. Each class contains 600 images of size 84x84.\nCIFAR-FS & FC100 Proposed by Bertinetto et al. (2019) and Oreshkin et al. (2018), both are splits between the original classes of CIFAR100 (Krizhevsky et al.). They also follow the miniImagenet structure with 64 base classes, 16 validation classes, and 20 novel classes. The pictures in CIFAR100 are 32 x 32 low resolution. The main difference is that CIFAR-FS randomly splits among the classes, but FC100 splits the classes carefully with less semantic overlap between the base classes and the novel classes. In general, FC100 is more difficult than CIFAR-FS." }, { "heading": "4.2 EXPERIMENT SETUP", "text": "First, we design an experiment to study the role pre-training in meta-fewshot learning. We train several classifiers with sufficient epochs and launch the episodic training with the prototypical network methods. The disentanglement of the penultimate layer in pre-training and the last layer in episodic training are compared. The experiment is evaluated in 5-way 5-shot settings.\nSecond, to evaluate the power of the auxiliary loss and validate our conjecture. We compare the episodic training performance based on the backbones with and without our auxiliary loss in pretraining phase. The experiment is evaluated in both 5-way 5-shot and 5-way 1-shot settings. In more detail, we search the auxiliary loss weighted coefficient α = 10C from C = −1 to −3.5 for each backbone on each dataset." }, { "heading": "4.3 IMPLEMENTATION DETAILS", "text": "Model Architecture For the backbone of ResNet10 and ResNet18, we follow the preprocessing from Chen et al. (2019). We use image sizes with 224x224 for both pre-trained prototypical network and regularization pre-trained prototypical network. For ResNet12 backbone, we follow the implementation from Sun et al. (2019) and use image sizes with 80x80. For the Conv4 backbone, we follow the implementation in prototypical network (Snell et al., 2017) with image size 84x84. 
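Beyond the backbone choice, the only change to standard pre-training is the auxiliary loss; a minimal sketch of equations 4-5 (lines 11-13 of Algorithm 1) is given below, assuming a PyTorch-style setup. The class and variable names are ours, the metric M is instantiated here as the squared Euclidean distance, and initializing the anchors Wy at zero is our own choice rather than a detail taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegularizedPretrainLoss(nn.Module):
    # l_total = cross_entropy(g(f(x)), y) + alpha * d(f(x), W_y), with one learnable
    # surrogate mean W_y per base class (Section 3.4).
    def __init__(self, embed_dim, num_base_classes, alpha=10 ** -1.5):
        super().__init__()
        self.anchors = nn.Parameter(torch.zeros(num_base_classes, embed_dim))
        self.alpha = alpha

    def forward(self, embeddings, logits, targets):
        l_cls = F.cross_entropy(logits, targets)                            # line 11
        l_reg = (embeddings - self.anchors[targets]).pow(2).sum(1).mean()   # line 12
        return l_cls + self.alpha * l_reg                                   # line 13

# Toy usage with stand-ins for the backbone f and the linear classifier g.
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 512))
g = nn.Linear(512, 64)                                   # 64 base classes
criterion = RegularizedPretrainLoss(embed_dim=512, num_base_classes=64)
x, y = torch.randn(8, 3, 84, 84), torch.randint(0, 64, (8,))
emb = f(x)
loss = criterion(emb, g(emb), y)
loss.backward()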
The detailed parameters are given in the appendix.\nPre-training For pre-training the backbone, we follow the settings in Chen et al. (2020). We use a stochastic gradient descent optimizer with a learning rate of 0.001 for all the backbones, and the batch size is 128.\nEpisodic training For the episodic training part, we also follow the settings in Chen et al. (2020). We use a stochastic gradient descent optimizer with a learning rate of 0.001 and turn off the data augmentation part of the preprocessing pipeline to reach a more stable result.\nEvaluation For the performance evaluation of episodic training, we turn off all data augmentation and report the mean performance and the 95% confidence interval over 600 randomly sampled episodes." }, { "heading": "4.4 LAYER DISENTANGLEMENT WITH METRIC", "text": "In this part, we design experiments to figure out the role of pre-training. We utilize the soft-nearest-neighbor loss (Eq.3) to measure the disentanglement of the last layer during episodic training. Fig.1 shows the soft-nearest-neighbor loss during episodic training. Whether for the shallow backbone Conv4 or the deep backbones ResNet10 and ResNet18, the loss decreases as the number of episodes increases. A lower soft-nearest-neighbor loss means the representation becomes more disentangled. That is to say, in episodic learning, the metric prefers a less entangled representation. This makes sense at a high level: in episodic training, especially for metric-based methods, the metric is usually geometric, e.g. the distance to the center of each class. If the representation is highly entangled, it is hard for the metric to determine the class of each instance.\n4.5 PERFORMANCE EVALUATION\nOn the miniImagenet dataset (Table.1), our method shows performance competitive with other methods. We find that adding the auxiliary loss can make a shallow network reach performance similar to a large one. Moreover, when the backbone is shallow, we outperform all other methods with the same backbone. For the experiments on FC100 and CIFAR-FS, the results are similar to miniImagenet. For detailed results, please check the appendix.\nTo examine our major conjecture, that disentangled representations benefit the subsequent episodic training, we perform a one-by-one comparison between generally pre-trained Prototypical Networks and the regularized pre-trained ones by recording the meta-test performance after every 100 episodes in Fig.2. Except for ResNet18, all the regularized pre-trained backbones lead to a faster convergence speed by providing better initial accuracy. This supports our conjecture that disentangled representations can boost the speed of episodic training." }, { "heading": "4.6 REGULARIZATION AND DISENTANGLEMENT", "text": "The goal of our regularization loss is to control the disentanglement of the embedding space. In the previous experiments, our method indeed benefits the speed of episodic training. However, how the regularization affects the disentanglement and the performance is still not clear. There are two potential explanations. First, the regularized pre-training leads the model to start from a better initialization point with higher disentanglement and higher accuracy. Second, the regularization term helps the model to start in an initialization region with better gradient properties, from which it reaches higher disentanglement and better weights.\nFig.3 shows the experiment on 5-way 5-shot miniImagenet. The SNN-loss with regularization is always lower than the original one.
However, this observation does not support the first explanation, since the initial SNN-loss is quite similar to that of the model without regularization. After several episodes, however, the loss decreases sharply, which makes the second explanation more convincing.\nFigure 3: SNN-loss ablation comparison with regularization. (a) SNN-Loss (b) Accuracy\nFigure 4: Accuracy and SNN-loss with different weighting parameters in episodic training\nIn Fig.4a, a larger weighting parameter leads to higher disentanglement. In Fig.4b, there is a large gap in the initial accuracy across different weighting parameters. However, a better initial accuracy does not imply a better final accuracy. For instance, C = −1.5 has a worse initial accuracy but converges to the best final accuracy in the end. With a suitable weighting parameter, the representation is disentangled properly and the final accuracy improves considerably. This is similar to the general idea of regularization: a suitable amount of regularization can improve performance, while extreme regularization may harm it." }, { "heading": "5 CONCLUSION", "text": "The disentanglement analysis provides a new perspective on the role of pre-training in meta few-shot learning. Furthermore, the relationship between the backbone and the classifier is discussed. Although, conceptually, the metric should take over the role of the pre-training classifier in the episodic training phase, the backbone in practice has to share part of the classifier's role, and it therefore shows a trend toward disentanglement during episodic training. Benefiting from this observation, we designed an auxiliary loss to transfer part of the computational burden of episodic training to pre-training. With the help of this loss, episodic training can achieve competitive performance with fewer episodes." }, { "heading": "A APPENDIX", "text": "A.1 PREPROCESS PIPELINE\nA.2 PERFORMANCE EVALUATION ON FC100 AND CIFAR-FS" } ]
2020
null
SP:2cfe676c21709d69aa3bab1480440fda0a365c3f
[ "The paper proposes a method, named as RG-flow, which combines the ideas of Renormalization group (RG) and flow-based models. The RG is applied to separate signal statistics of different scales in the input distribution and flow-based idea represents each scale information in its latent variables with sparse prior distribution. Inspired by receptive field from CNNs, the authors visualize the latent space representation, which reveals the progressive semantics at different levels as the instinctive expectation." ]
Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key idea of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, called RG-Flow, which can separate information at different scales of images with disentangled representations at each scale. We demonstrate our method mainly on the CelebA dataset and show that the disentangled representations at different scales enable semantic manipulation and style mixing of the images. To visualize the latent representations, we introduce receptive fields for flow-based models and find that the receptive fields learned by RG-Flow are similar to those in convolutional neural networks. In addition, we replace the widely adopted Gaussian prior distribution by a sparse prior distribution to further enhance the disentanglement of representations. From a theoretical perspective, the proposed method has O(log L) complexity for image inpainting compared to previous generative models with O(L2) complexity.
[]
[ { "authors": [ "Samuel K Ainsworth", "Nicholas J Foti", "Adrian KC Lee", "Emily B Fox" ], "title": "oi-vae: Output interpretable vaes for nonlinear group factor analysis", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tetsuya Akutagawa", "Koji Hashimoto", "Takayuki Sumimoto" ], "title": "Deep learning and ads/qcd", "venue": "Physical Review D,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "Yuanzhi Li", "Yingyu Liang", "Tengyu Ma", "Andrej Risteski" ], "title": "Linear algebraic structure of word senses, with applications to polysemy", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Mathieu Aubry", "Daniel Maturana", "Alexei A. Efros", "Bryan C. Russell", "Josef Sivic" ], "title": "Seeing 3d chairs: Exemplar part-based 2d-3d alignment using a large dataset of CAD models", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning, ICML 2019,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Cédric Bény", "Tobias J Osborne" ], "title": "The renormalization group via statistical inference", "venue": "New Journal of Physics,", "year": 2015 }, { "authors": [ "Urs Bergmann", "Nikolay Jetchev", "Roland Vollgraf" ], "title": "Learning texture manifolds with the periodic spatial GAN", "venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Johann Brehmer", "Kyle Cranmer" ], "title": "Flows for simultaneous manifold learning and density estimation", "venue": null, "year": 2003 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B. Grosse", "David Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Jens Behrmann", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Brian Cheung", "Jesse A. Livezey", "Arjun K. Bansal", "Bruno A. Olshausen" ], "title": "Discovering hidden factors of variation in deep networks", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Iris Cong", "Soonwon Choi", "Mikhail D. Lukin" ], "title": "Quantum convolutional neural networks", "venue": "Nature Physics,", "year": 2019 }, { "authors": [ "Emily L. 
Denton", "Soumith Chintala", "Arthur Szlam", "Rob Fergus" ], "title": "Deep generative image models using a laplacian pyramid of adversarial networks", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "James J DiCarlo", "David D Cox" ], "title": "Untangling invariant object recognition", "venue": "Trends in cognitive sciences,", "year": 2007 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: non-linear independent components estimation", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Alex Lamb", "Martı́n Arjovsky", "Olivier Mastropietro", "Aaron C. Courville" ], "title": "Adversarially learned inference", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "G. Evenbly", "G. Vidal" ], "title": "Class of highly entangled many-body states that can be efficiently simulated", "venue": "Phys. Rev. Lett.,", "year": 2014 }, { "authors": [ "Michael E. Fisher" ], "title": "Renormalization group theory: Its basis and formulation in statistical physics", "venue": "Rev. Mod. Phys.,", "year": 1998 }, { "authors": [ "Andrew Gambardella", "Atilim Günes Baydin", "Philip H.S. Torr" ], "title": "Transflow learning: Repurposing flow models without retraining", "venue": null, "year": 1911 }, { "authors": [ "Wen-Cong Gan", "Fu-Wen Shu" ], "title": "Holography as deep learning", "venue": "International Journal of Modern Physics D,", "year": 2017 }, { "authors": [ "L.A. Gatys", "A.S. Ecker", "M. Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Leon A. Gatys", "Alexander S. Ecker", "Matthias Bethge" ], "title": "Texture synthesis using convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Koji Hashimoto" ], "title": "AdS/CFT correspondence as a deep boltzmann machine", "venue": "Phys. Rev. D,", "year": 2019 }, { "authors": [ "Koji Hashimoto", "Sotaro Sugishita", "Akinori Tanaka", "Akio Tomiya" ], "title": "Deep learning and holographic qcd", "venue": "Phys. Rev. D,", "year": 2018 }, { "authors": [ "Koji Hashimoto", "Hong-Ye Hu", "Yi-Zhuang You" ], "title": "Neural ode and holographic qcd", "venue": null, "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Irina Higgins", "David Amos", "David Pfau", "Sébastien Racanière", "Loı̈c Matthey", "Danilo J. 
Rezende", "Alexander Lerchner" ], "title": "Towards a definition of disentangled", "venue": "representations. CoRR,", "year": 2018 }, { "authors": [ "Emiel Hoogeboom", "Rianne van den Berg", "Max Welling" ], "title": "Emerging convolutions for generative normalizing flows", "venue": "In Proceedings of the 36th International Conference on Machine Learning, ICML 2019,", "year": 2019 }, { "authors": [ "Hong-Ye Hu", "Shuo-Hui Li", "Lei Wang", "Yi-Zhuang You" ], "title": "Machine learning holographic mapping by neural network renormalization group", "venue": "Phys. Rev. Research,", "year": 2020 }, { "authors": [ "Aapo Hyvärinen", "Erkki Oja" ], "title": "Independent component analysis: algorithms and applications", "venue": "Neural networks,", "year": 2000 }, { "authors": [ "Nikolay Jetchev", "Urs Bergmann", "Roland Vollgraf" ], "title": "Texture synthesis with spatial generative adversarial networks", "venue": null, "year": 2016 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In Computer Vision - ECCV 2016 - 14th European Conference, Proceedings, Part II,", "year": 2016 }, { "authors": [ "Leo P. Kadanoff" ], "title": "Scaling laws for ising models near Tc", "venue": "Physics Physique Fizika,", "year": 1966 }, { "authors": [ "Mahdi Karami", "Dale Schuurmans", "Jascha Sohl-Dickstein", "Laurent Dinh", "Daniel Duckworth" ], "title": "Invertible convolutional flow", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In Proceedings of the 35th International Conference on Machine Learning, ICML 2018,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Thomas N. Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive learning of structured world models", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Maciej Koch-Janusz", "Zohar Ringel" ], "title": "Mutual information, neural networks and the renormalization group", "venue": "Nature Physics,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 (canadian institute for advanced research)", "venue": "URL http://www.cs.toronto.edu/ ̃kriz/cifar.html. Y. LeCun. A theoretical framework for back-propagation", "year": 1988 }, { "authors": [ "Yann LeCun", "Bernhard E. 
Boser", "John S. Denker", "Donnie Henderson", "Richard E. Howard", "Wayne E. Hubbard", "Lawrence D. Jackel" ], "title": "Handwritten digit recognition with a back-propagation network", "venue": "In Advances in Neural Information Processing Systems", "year": 1989 }, { "authors": [ "Ching Hua Lee", "Xiao-Liang Qi" ], "title": "Exact holographic mapping in free fermion systems", "venue": "Phys. Rev. B,", "year": 2016 }, { "authors": [ "Patrick M. Lenggenhager", "Doruk Efe Gökmen", "Zohar Ringel", "Sebastian D. Huber", "Maciej KochJanusz" ], "title": "Optimal renormalization group transformation from information theory", "venue": "Phys. Rev. X,", "year": 2020 }, { "authors": [ "Shuo-Hui Li", "Lei Wang" ], "title": "Neural network renormalization group", "venue": "Phys. Rev. Lett.,", "year": 2018 }, { "authors": [ "Henry W. Lin", "Max Tegmark", "David Rolnick" ], "title": "Why does deep and cheap learning work so well", "venue": "Journal of Statistical Physics,", "year": 2017 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In Proceedings of the 36th International Conference on Machine Learning, ICML 2019,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Pankaj Mehta", "David J. Schwab" ], "title": "An exact mapping between the variational renormalization group and deep learning", "venue": null, "year": 2014 }, { "authors": [ "Bruno A Olshausen", "David J Field" ], "title": "Emergence of simple-cell receptive field properties by learning a sparse code for natural images", "venue": null, "year": 1996 }, { "authors": [ "Bruno A Olshausen", "David J Field" ], "title": "Sparse coding with an overcomplete basis set: A strategy employed by v1", "venue": "Vision research,", "year": 1997 }, { "authors": [ "Dan Oprisa", "Peter Toth" ], "title": "Criticality & deep learning ii: Momentum renormalisation group, 2017", "venue": null, "year": 2017 }, { "authors": [ "Fernando Pastawski", "Beni Yoshida", "Daniel Harlow", "John Preskill" ], "title": "Holographic quantum errorcorrecting codes: toy models for the bulk/boundary correspondence", "venue": "Journal of High Energy Physics,", "year": 2015 }, { "authors": [ "Xiao-Liang Qi" ], "title": "Exact holographic mapping and emergent space-time geometry", "venue": "arXiv: High Energy Physics - Theory,", "year": 2013 }, { "authors": [ "Aditya Ramesh", "Youngduck Choi", "Yann LeCun" ], "title": "A spectral regularizer for unsupervised disentanglement", "venue": null, "year": 2018 }, { "authors": [ "Scott E. Reed", "Aäron van den Oord", "Nal Kalchbrenner", "Sergio Gomez Colmenarejo", "Ziyu Wang", "Yutian Chen", "Dan Belov", "Nando de Freitas" ], "title": "Parallel multiscale autoregressive density estimation", "venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "George Papamakarios", "Sébastien Racanière", "Michael S. Albergo", "Gurtej Kanwar", "Phiala E. Shanahan", "Kyle Cranmer" ], "title": "Normalizing flows on tori and spheres", "venue": null, "year": 2002 }, { "authors": [ "Tim Salimans", "Diederik P. 
Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Robin Tibor Schirrmeister", "Yuxuan Zhou", "Tonio Ball", "Dan Zhang" ], "title": "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features", "venue": null, "year": 2006 }, { "authors": [ "H. Eugene Stanley" ], "title": "Scaling, universality, and renormalization: Three pillars of modern critical phenomena", "venue": "Rev. Mod. Phys., 71:S358–S366,", "year": 1999 }, { "authors": [ "Brian Swingle" ], "title": "Entanglement renormalization and holography", "venue": "Phys. Rev. D,", "year": 2012 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jonathon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Joshua B. Tenenbaum", "William T. Freeman" ], "title": "Separating style and content with bilinear models", "venue": "Neural Comput.,", "year": 2000 }, { "authors": [ "Dmitry Ulyanov", "Vadim Lebedev", "Andrea Vedaldi", "Victor S. Lempitsky" ], "title": "Texture networks: Feed-forward synthesis of textures and stylized images", "venue": "In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016,", "year": 2016 }, { "authors": [ "Arash Vahdat", "Jan Kautz" ], "title": "NVAE: A deep hierarchical variational autoencoder", "venue": "CoRR, abs/2007.03898,", "year": 2020 }, { "authors": [ "G. Vidal" ], "title": "Class of quantum many-body states that can be efficiently simulated", "venue": "Phys. Rev. Lett., 101:110501,", "year": 2008 }, { "authors": [ "Kenneth G. Wilson" ], "title": "Renormalization group and critical phenomena. i. renormalization group and the kadanoff scaling picture", "venue": "Phys. Rev. B,", "year": 1971 }, { "authors": [ "Yi-Zhuang You", "Xiao-Liang Qi", "Cenke Xu" ], "title": "Entanglement holographic mapping of many-body localized system by spectrum bifurcation renormalization group", "venue": "Phys. Rev. B,", "year": 2016 }, { "authors": [ "Yi-Zhuang You", "Zhao Yang", "Xiao-Liang Qi" ], "title": "Machine learning spatial geometry from entanglement features", "venue": "Phys. Rev. B,", "year": 2018 }, { "authors": [ "Juexiao Zhang", "Yubei Chen", "Brian Cheung", "Bruno A. Olshausen" ], "title": "Word embedding visualization via dictionary learning", "venue": null, "year": 1910 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Computer Vision (ICCV),", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of the most important unsupervised learning tasks is to learn the data distribution and build generative models. Over the past few years, various types of generative models have been proposed. Flow-based generative models are a particular family of generative models with tractable distributions (Dinh et al., 2017; Kingma & Dhariwal, 2018; Chen et al., 2018b; 2019; Behrmann et al., 2019; Hoogeboom et al., 2019; Brehmer & Cranmer, 2020; Rezende et al., 2020; Karami et al., 2019). Yet the latent variables are on equal footing and mixed globally. Here, we propose a new flow-based model, RG-Flow, which is inspired by the idea of renormalization group in statistical physics. RGFlow imposes locality and hierarchical structure in bijective transformations. It allows us to access information at different scales in original images by latent variables at different locations, which offers better explainability. Combined with sparse priors (Olshausen & Field, 1996; 1997; Hyvärinen & Oja, 2000), we show that RG-Flow achieves hierarchical disentangled representations.\nRenormalization group (RG) is a powerful tool to analyze statistical mechanics models and quantum field theories in physics (Kadanoff, 1966; Wilson, 1971). It progressively extracts more coarse-scale statistical features of the physical system and decimates irrelevant fine-grained statistics at each scale. Typically, the local transformations used in RG are designed by human physicists and they are not bijective. On the other hand, the flow-based models use cascaded invertible global transformations to progressively turn a complicated data distribution into Gaussian distribution. Here, we would like to combine the key ideas from RG and flow-based models. The proposed RG-flow enables the machine to learn the optimal RG transformation from data, by constructing local invertible transformations and build a hierarchical generative model for the data distribution. Latent representations are introduced at different scales, which capture the statistical features at the corresponding scales. Together, the latent representations of all scales can be jointly inverted to generate the data. This method was recently proposed in the physics community as NeuralRG (Li & Wang, 2018; Hu et al., 2020).\nOur main contributions are two-fold: First, RG-Flow can separate the signal statistics of different scales in the input distribution naturally, and represent information at each scale in its latent vari-\nables z. Those hierarchical latent variables live on a hyperbolic tree. Taking CelebA dataset (Liu et al., 2015) as an example, the network will not only find high-level representations, such as the gender factor and the emotion factor for human faces, but also mid-level and low-level representations. To visualize representations of different scales, we adopt the concept of receptive field from convolutional neural networks (CNN) (LeCun, 1988; LeCun et al., 1989) and visualize the hidden structures in RG-flow. In addition, since the statistics are separated into a hierarchical fashion, we show that the representations can be mixed at different scales. This achieves an effect similar to style mixing. Second, we introduce the sparse prior distribution for latent variables. We find the sparse prior distribution is helpful to further disentangle representations and make them more explainable. The widely adopted Gaussian prior is rotationally symmetric. 
As a result, each of the latent variables in a flow model usually does not have a clear semantic meaning. By using a sparse prior, we demonstrate the clear semantic meaning in the latent space." }, { "heading": "2 RELATED WORK", "text": "Some flow-based generative models also possess multi-scale latent space (Dinh et al., 2017; Kingma & Dhariwal, 2018), and recently hierarchies of features have been utilized in Schirrmeister et al. (2020), where the top-level feature is shown to perform strongly in out-of-distribution (OOD) detection task. Yet, previous models do not impose hard locality constraint in the multi-scale structure. In Appendix C, the differences between globally connected multi-scale flows and RG-Flow are discussed, and we see that semantic, meaningful receptive fields do not show up in the globally connected cases. Recently, other more expressive bijective maps have been developed (Hoogeboom et al., 2019; Karami et al., 2019; Durkan et al., 2019), and those methods can be incorporated into the proposed structure to further improve the expressiveness of RG-Flow.\nSome other classes of generative models rely on a separate inference model to obtain the latent representation. Examples include variational autoencoders (Kingma & Welling, 2014), adversarial autoencoders (Makhzani et al., 2015), InfoGAN (Chen et al., 2016), and BiGAN (Donahue et al., 2017; Dumoulin et al., 2017). Those techniques typically do not use hierarchical latent variables, and the inference of latent variables is approximate. Notably, recent advances suggest that having hierarchical latent variables may be beneficial (Vahdat & Kautz, 2020). In addition, the coarseto-fine fashion of the generation process has also been discussed in other generative models, such as Laplacian pyramid of adversarial networks (Denton et al., 2015), and multi-scale autoregressive models (Reed et al., 2017).\nDisentangled representations (Tenenbaum & Freeman, 2000; DiCarlo & Cox, 2007; Bengio et al., 2013) is another important aspect in understanding how a model generates images (Higgins et al., 2018). Especially, disentangled high-level representations have been discussed and improved from information theoretical principles (Cheung et al., 2015; Chen et al., 2016; 2018a; Higgins et al., 2017; Kipf et al., 2020; Kim & Mnih, 2018; Locatello et al., 2019; Ramesh et al., 2018). Apart from the high-level representations, the multi-scale structure also lies in the distribution of natural images. If a model can separate information of different scales, then its multi-scale representations can be used to perform other tasks, such as style transfer (Gatys et al., 2016; Zhu et al., 2017), face mixing (Karras et al., 2019; Gambardella et al., 2019; Karras et al., 2020), and texture synthesis (Bergmann et al., 2017; Jetchev et al., 2016; Gatys et al., 2015; Johnson et al., 2016; Ulyanov et al., 2016).\nTypically, in flow-based generative models, Gaussian distribution is used as the prior for the latent space. Due to the rotational symmetry of Gaussian prior, an arbitrary rotation of the latent space would lead to the same likelihood. Sparse priors (Olshausen & Field, 1996; 1997; Hyvärinen & Oja, 2000) was proposed as an important tool for unsupervised learning and it leads to better explainability in various domains (Ainsworth et al., 2018; Arora et al., 2018; Zhang et al., 2019). To break the symmetry of Gaussian prior and further improve the explainability, we introduce a sparse prior to flow-based models. 
Please refer to Figure 12 for a quick illustration of the difference between the Gaussian prior and the sparse prior, where the sparse prior leads to better disentanglement.\nRenormalization group (RG) has a broad impact ranging from particle physics to statistical physics. Apart from the analytical studies in field theories (Wilson, 1971; Fisher, 1998; Stanley, 1999), RG has also been useful in numerically simulating quantum states. The multi-scale entanglement renormalization ansatz (MERA) (Vidal, 2008; Evenbly & Vidal, 2014) implements the hierarchical structure of RG in tensor networks to represent quantum states. The exact holographic mapping (EHM) (Qi, 2013; Lee & Qi, 2016; You et al., 2016) further extends MERA to a bijective (unitary) flow between latent product states and visible entangled states. Recently, Li & Wang (2018) and Hu et al. (2020) incorporated the MERA structure and deep neural networks to design a flow-based generative model that allows the machine to learn the EHM from statistical physics and quantum field theory actions. In quantum machine learning, the recent development of quantum convolutional neural networks (Cong et al., 2019) also utilizes the MERA structure. The similarity between RG and deep learning has been discussed in several works (Bény, 2013; Mehta & Schwab, 2014; Bény & Osborne, 2015; Oprisa & Toth, 2017; Lin et al., 2017; Gan & Shu, 2017). Information-theoretic objectives that guide machine-learned RG transformations have been proposed in recent works (Koch-Janusz & Ringel, 2018; Hu et al., 2020; Lenggenhager et al., 2020). The meaning of the emergent latent space has been related to quantum gravity (Swingle, 2012; Pastawski et al., 2015), which leads to the exciting development of machine learning holography (You et al., 2018; Hashimoto et al., 2018; Hashimoto, 2019; Akutagawa et al., 2020; Hashimoto et al., 2020)." }, { "heading": "3 METHODS", "text": "Flow-based generative models. Flow-based generative models are a family of generative models with tractable distributions, which allow efficient sampling and exact evaluation of the probability density (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018; Chen et al., 2019). The key idea is to build a bijective map G(z) = x between visible variables x and latent variables z. The visible variables x are the data that we want to generate, which may follow a complicated probability distribution, while the latent variables z usually follow a simple distribution that can be easily sampled, for example an i.i.d. Gaussian distribution. In this way, the data can be efficiently generated by first sampling z and mapping it to x through x = G(z). In addition, we can evaluate the probability associated with each data sample x,\n\log p_X(x) = \log p_Z(z) - \log\left|\det\frac{\partial G(z)}{\partial z}\right|. \quad (1)\n\nThe bijective map G(z) = x is usually composed as a series of bijectors, G(z) = G_1 \circ G_2 \circ \cdots \circ G_n(z), such that each bijector layer G_i has a tractable Jacobian determinant and can be inverted efficiently. The two key ingredients in flow-based models are the design of the bijective map G and the choice of the prior distribution p_Z(z).
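To make Eq. 1 and the bijector composition concrete, here is a minimal, self-contained sketch of how a stack of invertible maps accumulates the log-likelihood. It is an illustration only, not the RG-Flow implementation; the element-wise affine bijector is a toy choice so that the Jacobian log-determinant is trivial.

```python
import torch

class AffineBijector:
    """Toy invertible map x = exp(s) * z + t, with an easy log|det Jacobian|."""

    def __init__(self, dim):
        self.s = torch.zeros(dim, requires_grad=True)
        self.t = torch.zeros(dim, requires_grad=True)

    def forward(self, z):                 # z -> x, returns (x, log|det dx/dz|)
        return torch.exp(self.s) * z + self.t, self.s.sum()

    def inverse(self, x):                 # x -> z, returns (z, log|det dz/dx|)
        return (x - self.t) * torch.exp(-self.s), -self.s.sum()

def log_prob(x, bijectors, prior_log_prob):
    """Eq. 1: log p_X(x) = log p_Z(z) + log|det dR(x)/dx|, accumulated layer by layer."""
    log_det = 0.0
    z = x
    for b in reversed(bijectors):         # invert G = G_1 o G_2 o ... o G_n
        z, ld = b.inverse(z)
        log_det = log_det + ld
    return prior_log_prob(z) + log_det

# Standard-normal prior on the latent variables.
prior = lambda z: (-0.5 * z ** 2 - 0.5 * torch.log(torch.tensor(2 * torch.pi))).sum(dim=-1)
flow = [AffineBijector(4) for _ in range(3)]
x = torch.randn(8, 4)
print(log_prob(x, flow, prior))           # per-sample log-likelihoods
```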
Structure of RG-Flow networks. Much of the prior research has focused on designing more powerful bijective blocks for the generator G to improve its expressive power and to achieve better approximations of complicated probability distributions. Here, we focus on designing the architecture that arranges the bijective blocks in a hierarchical structure to separate features of different scales in the data and to disentangle latent representations.\nOur design is motivated by the idea of RG in physics, which progressively separates the coarse-grained data statistics from the fine-grained statistics by local transformations at different scales. Let x be the visible variables, or the input image (level-0), denoted as x^{(0)} ≡ x. A step of the RG transformation extracts the coarse-grained information x^{(1)} to send to the next layer (level-1), and splits out the rest of the fine-grained information as auxiliary variables z^{(0)}. The procedure can be described by the following recursive equation (at level-h for example),\nx^{(h+1)}, z^{(h)} = R_h(x^{(h)}), \quad (2)\n\nwhich is illustrated in Fig. 1(a), where dim(x^{(h+1)}) + dim(z^{(h)}) = dim(x^{(h)}), and the RG transformation R_h can be made invertible. At each level, the transformation R_h is a local bijective map, which is constructed by stacking trainable bijective blocks. We will specify its details later. The split-out information z^{(h)} can be viewed as latent variables arranged at different scales. Then the inverse RG transformation G_h ≡ R_h^{-1} simply generates the fine-grained image,\nx^{(h)} = R_h^{-1}(x^{(h+1)}, z^{(h)}) = G_h(x^{(h+1)}, z^{(h)}). \quad (3)\n\nThe highest-level image x^{(h_L)} = G_{h_L}(z^{(h_L)}) can be considered as generated directly from latent variables z^{(h_L)} without referring to any higher-level coarse-grained image, where h_L = \log_2 L - \log_2 m for an original image of size L × L with local transformations acting on kernels of size m × m. Therefore, given the latent variables z = {z^{(h)}} at all levels h, the original image can be restored by the following nested maps, as illustrated in Fig. 1(b),\nx ≡ x^{(0)} = G_0(G_1(G_2(\cdots, z^{(2)}), z^{(1)}), z^{(0)}) ≡ G(z), \quad (4)\nwhere z = {z^{(0)}, ..., z^{(h_L)}}. RG-Flow is a flow-based generative model that uses the above composite bijective map G as the generator.\n\nTo model the RG transformation, we arrange the bijective blocks in a hierarchical network architecture. Fig. 2(a) shows the side view of the network, where each green or yellow block is a local bijective map. Following the notation of MERA networks, the green blocks are the disentanglers, which reparametrize local variables to reduce their correlations, and the yellow blocks are the decimators, which separate the decimated features out as latent variables. The blue dots at the bottom are the visible variables x from the data, and the red crosses are the latent variables z. We omit the color channels of the image in the illustration, since we keep the number of color channels unchanged through the transformation.\n\nFig. 2(b) shows the top-down view of a step of the RG transformation. The green/yellow blocks (disentanglers/decimators) are interwoven on top of each other. The covering area of a disentangler or decimator is defined by the kernel size m × m of the bijector. For example, in Fig. 2(b), the kernel size is 4 × 4. After the decimator, three fourths of the degrees of freedom are decimated into latent variables (red crosses in Fig. 2(a)), so the edge length of the image is halved.\n\n[Figure 2: (a) side view and (b) top-down view of the network; (c) generation causal cone and (d) inference causal cone, with visible variables x and latent variables z.]\n\nAs a mathematical description, for the single-step RG transformation R_h, in each block (p, q) labeled by p, q = 0, 1, \ldots, \frac{L}{2^h m}-1, the mapping from x^{(h)} to (x^{(h+1)}, z^{(h)}) is given by\n\left\{ y^{(h)}_{2^h(mp+\frac{m}{2}+a,\, mq+\frac{m}{2}+b)} \right\}_{(a,b)\in\Lambda^1_m} = R^{\mathrm{dis}}_h\left( \left\{ x^{(h)}_{2^h(mp+\frac{m}{2}+a,\, mq+\frac{m}{2}+b)} \right\}_{(a,b)\in\Lambda^1_m} \right), \qquad \left\{ x^{(h+1)}_{2^h(mp+a,\, mq+b)} \right\}_{(a,b)\in\Lambda^2_m},\ \left\{ z^{(h)}_{2^h(mp+a,\, mq+b)} \right\}_{(a,b)\in\Lambda^1_m\setminus\Lambda^2_m} = R^{\mathrm{dec}}_h\left( \left\{ y^{(h)}_{2^h(mp+a,\, mq+b)} \right\}_{(a,b)\in\Lambda^1_m} \right), \quad (5)\n\nwhere \Lambda^k_m = \{(ka, kb) \mid a, b = 0, 1, \ldots, \frac{m}{k}-1\} denotes the set of pixels in an m × m square with stride k, and y is the intermediate result after the disentangler but before the decimator. The notation x^{(h)}_{(i,j)} stands for the variable (a vector over all channels) at pixel (i, j) and RG level h (similarly for y and z). The disentanglers R^{dis}_h and decimators R^{dec}_h can be any bijective neural networks. In practice, we use the coupling layer proposed in the Real NVP networks (Dinh et al., 2017) to build them, with a detailed description in Appendix A. By specifying the RG transformation R_h = R^{dec}_h ∘ R^{dis}_h above, the generator G_h ≡ R_h^{-1} is automatically specified as the inverse transformation.\n\nTraining objective. After decomposing the statistics into multiple scales, we need to make the latent features decoupled. So we assume that the latent variables z are independent random variables, described by a factorized prior distribution\np_Z(z) = \prod_l p(z_l), \quad (6)\n\nwhere l labels every element of z, including the RG level, the pixel position and the channel. This prior gives the network the incentive to minimize the mutual information between latent variables. This minimal bulk mutual information (minBMI) principle was previously proposed to be the information-theoretic principle that defines the RG transformation (Li & Wang (2018); Hu et al. (2020)).\n\nStarting from a set of independent latent variables z, the generator G should build up correlations locally at different scales, such that the multi-scale correlation structure can emerge in the resulting image x to model the correlated probability distribution of the data. To achieve this goal, we should maximize the log-likelihood of x drawn from the data set. The loss function to minimize reads\n\mathcal{L} = -\mathbb{E}_{x\sim p_{\mathrm{data}}(x)} \log p_X(x) = -\mathbb{E}_{x\sim p_{\mathrm{data}}(x)} \left( \log p_Z(R(x)) + \log\left|\det\frac{\partial R(x)}{\partial x}\right| \right), \quad (7)\n\nwhere R(x) ≡ G^{-1}(x) = z denotes the RG transformation, which contains the trainable parameters. By optimizing the parameters, the network learns the optimal RG transformation from the data.\n\nReceptive fields of latent variables. Due to the nature of local transformations in our hierarchical network, we can define the generation causal cone for a latent variable to be the affected area when that latent variable is changed. This is illustrated as the red cone in Fig. 2(c).\n\nTo visualize the latent-space representation, we define the receptive field of a latent variable z_l as\n\mathrm{RF}_l = \mathbb{E}_{z\sim p_Z(z)} \left| \frac{\partial G(z)}{\partial z_l} \right|_c, \quad (8)\n\nwhere |\cdot|_c denotes the 1-norm over the color channels. The receptive field reflects the response of the generated image to an infinitesimal change of the latent variable z_l, averaged over p_Z(z). Therefore, the receptive field of a latent variable is always contained in its generation causal cone. Higher-level latent variables have larger receptive fields than those of the lower-level ones.
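As an aside, the receptive field of Eq. 8 can be estimated numerically with automatic differentiation. The sketch below is our own illustration, not the paper's code; the generator interface (a flat latent vector mapped to a (C, H, W) image) and the Monte-Carlo sample count are assumptions.

```python
import torch
from torch.autograd.functional import jvp

def receptive_field(generator, latent_index, latent_dim, n_samples=64):
    """Monte-Carlo estimate of Eq. 8: RF_l = E_z |dG(z)/dz_l|_c (1-norm over color channels)."""
    e_l = torch.zeros(latent_dim)
    e_l[latent_index] = 1.0                      # probe direction along z_l
    rf = None
    for _ in range(n_samples):
        z = torch.randn(latent_dim)
        # Forward-mode directional derivative dG(z)/dz_l, same shape as the image (C, H, W).
        _, dG_dzl = jvp(generator, (z,), (e_l,))
        contrib = dG_dzl.abs().sum(dim=0)        # 1-norm over the color channels -> (H, W)
        rf = contrib if rf is None else rf + contrib
    return rf / n_samples                        # brighter pixels respond more strongly to z_l

# Example with a toy generator mapping 16 latents to a 3x8x8 image:
W = torch.randn(16, 3 * 64)
toy_gen = lambda z: torch.tanh(z @ W).reshape(3, 8, 8)
print(receptive_field(toy_gen, latent_index=0, latent_dim=16, n_samples=8).shape)
```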
Especially, if the receptive fields of two latent variables do not overlap, which is often the case for lower-level latent variables, they automatically become disentangled in the representation.\n\nImage inpainting and error correction. Another advantage of the network locality can be demonstrated in the inpainting task. Similar to the generation causal cone, we can define the inference causal cone, shown as the blue cone in Fig. 2(d). If we perturb a pixel at the bottom of the blue cone, all the latent variables within the blue cone will be affected, whereas the latent variables outside the cone cannot be affected. An important property of the hyperbolic tree-like network is that the higher levels contain exponentially fewer latent variables. Even though the inference causal cone expands as we go to higher levels, the number of latent variables is diluted exponentially as well, resulting in a constant number of latent variables covered by the inference causal cone at each level. Therefore, if a small local region of an image is corrupted, only O(\log L) latent variables need to be modified, where L is the edge length of the entire image. In contrast, for globally connected networks, all O(L^2) latent variables have to be varied.\n\nSparse prior distribution. We have chosen to hard-code the RG information principle by using a factorized prior distribution, i.e. p_Z(z) = \prod_l p(z_l). The common practice is to choose p(z_l) to be the standard Gaussian distribution, which is spherically symmetric: if we apply any rotation to z, the distribution remains the same. Therefore, we cannot prevent different features from being mixed under an arbitrary rotation.\n\nTo overcome this issue, we use an anisotropic sparse prior distribution for p_Z(z). In our implementation, we choose the Laplacian distribution p(z_l) = \frac{1}{2b}\exp(-|z_l|/b), which is sparser than the Gaussian distribution and breaks the spherical symmetry of the latent space. In Appendix E, we show a two-dimensional pinwheel example to illustrate this intuition. This heuristic method encourages the model to find more semantically meaningful representations by breaking the spherical symmetry." }, { "heading": "4 EXPERIMENTS", "text": "Synthetic multi-scale datasets. To illustrate RG-Flow's ability to disentangle representations at different scales and spatially separated representations, we propose two synthetic datasets with multi-scale features, named MSDS1 and MSDS2. Their samples are shown in Appendix B. In each image, there are 16 ovals with different colors and orientations. In MSDS1, all ovals in an image have almost the same color, while their orientations are randomly distributed; so the color is a global feature in MSDS1, and the orientation is a local feature. In MSDS2, on the contrary, the orientation is a global feature, and the color is a local one.\n\nWe implement RG-Flow as shown in Fig. 2. After training, we find that RG-Flow can easily capture the characteristics of those datasets. Namely, the ovals in each image from MSDS1 have almost the same color, and those from MSDS2 the same orientation. In particular, in Fig. 3, we plot the effect of varying latent variables at different levels, together with their receptive fields. For MSDS1, if we vary a high-level latent variable, the color of the whole image will change, which shows that the network has captured the global feature of the dataset. And if we vary a low-level latent variable, the orientation of only the corresponding oval will change.
As the ovals are spatially separated, the low-level representation of different ovals is disentangled. Similarly, for MSDS2, if we vary a high-level latent variable, the orientations of all ovals will change. And if we vary a low-level latent variable, the color of only the corresponding one oval will change.\nFor comparison, we also trained Real NVP on our synthetic datasets. We find that Real NVP fails to learn the global and local characteristics of those datasets. Details can be found in Appendix B.\nHuman face dataset. Next, we apply RG-Flow to more complicated multi-scale datasets. Most of our experiments use the human face dataset CelebA (Liu et al., 2015), and we crop and scale the images to 32 ⇥ 32 pixels. Details of the network and the training procedure can be found in Appendix A. Experiments on other datasets, such as CIFAR-10 (Krizhevsky et al.), and quantitative evaluations can also be found in Appendix G.\nAfter training, the network learns to progressively generate finer-grained images, as shown in Fig. 4(a). The colors in the coarse-grained images are not necessarily the same as those at the same positions in the fine-grained images, because there is no constraint to prevent the RG transformation from mixing color channels.\nReceptive fields. To visualize the latent space representation, we calculate the receptive field for each latent variable, and list some of them in Fig. 4(b). We can see the receptive size is small for low-level variables and large for high-level ones, as indicated from the generation causal cone. In the lowest level (h = 0), the receptive fields are merely small dots. In the second lowest level (h = 1), small structures emerge, such as an eyebrow, an eye, a part of hair, etc. In the middle level (h = 2), we can see eyebrows, eyes, forehead bang structure emerge. In the highest level (h = 3), each receptive field grows to the whole image. We will investigate those explainable latent representations in the next section. For comparison, we show receptive fields of Real NVP in Appendix C. Even though Real NVP has multi-scale structure, since it is not locally constrained, semantic representations at different scales do not emerge.\nLearned features on different scales. In this section, we show that some of these emergent structures correspond to explainable latent features. Flow-based generative model is the maximal encoding procedure, because the core of flow-based generative models is the bijective maps, and they preserves the dimensionality before and after the encoding. Usually, the images in the dataset live on a low dimensional manifold, and we do not need to use all the dimensions to encode such data. In Fig. 4(c) we show the statistics of the strength of receptive fields. We can see most of the latent variables have receptive fields with relatively small strength, meaning that if we change the value of those latent variables, the generated images will not be affected much. We focus on those latent variables with receptive field strength greater than one, which have visible effects on the generated images. We use h to label the RG level of latent variables, for example, the lowest-level latent variables have h = 0, whereas the highest-level latent variables have h = 4. In addition, we will focus on h = 1 (low level), h = 2 (mid level), h = 3 (high level) latent variables. 
There are a few latent variables with h = 0 that have visible effects, but their receptive fields are only small dots with no emergent structures.\n\nFor the high-level latent representations, we found in total 30 latent variables that have visible effects, and six of them are identified with disentangled and explainable meanings. Those factors are gender, emotion, light angle, azimuth, hair color, and skin color. In Fig. 5(a), we plot the effect of varying those six high-level variables, together with their receptive fields. For the mid-level latent representations, we plot the four leading variables together with their receptive fields in Fig. 5(b); they control the eye, eyebrow, upper-right bang, and collar, respectively. For the low-level representations, some leading variables control an eyebrow and an eye, as shown in Fig. 5(c). We see that they achieve better-disentangled representations when their receptive fields do not overlap.\n\nImage mixing in the scaling direction. Given two images x_A and x_B, conventional image mixing takes a linear combination of z_A = G^{-1}(x_A) and z_B = G^{-1}(x_B) via z = \lambda z_A + (1-\lambda) z_B with \lambda \in [0, 1], and generates the mixed image from x = G(z). In our model, the latent variables z are indexed by the pixel position (i, j) and the RG level h. The direct access to the latent variable z^{(h)}_{(i,j)} at each position enables us to mix the latent variables in a different manner, which may be dubbed a “hyperbolic mixing”. We consider mixing the large-scale (high-level) features of x_A and the small-scale (low-level) features of x_B by combining their corresponding latent variables via\nz^{(h)} = \begin{cases} z^{(h)}_A, & \text{for } h \geq \Theta, \\ z^{(h)}_B, & \text{for } h < \Theta, \end{cases} \quad (9)\n\nwhere \Theta serves as a dividing line between the scales. As shown in Fig. 6(a), as we change \Theta from 0 to 3, more low-level information from the blonde-hair image is mixed with the high-level information of the black-hair image. Especially when h = 3, we see that the mixed face has eyes, nose, eyebrows, and mouth similar to the blonde-hair image, while the high-level information, such as face orientation and hair color, is taken from the black-hair image. In addition, this mixing is not symmetric under the interchange of z_A and z_B; see Fig. 6(b) for comparison. This hyperbolic mixing achieves an effect similar to StyleGAN (Karras et al., 2019; 2020), in that we can take mid-level information from one image and mix it with the high-level information of another image. In Fig. 6(c), we show more examples of mixed faces.\n\nImage inpainting and error correction. The existence of the inference causal cone ensures that at most O(\log L) latent variables will be affected if we have a small local corrupted region to be inpainted. In Fig. 7, we show that RG-Flow can faithfully recover the corrupted region (marked in red) using only the latent variables located inside the inference causal cone, which are around one third of all latent variables. For comparison, if we randomly pick the same number of latent variables to modify in Real NVP, it fails to inpaint, as shown in Fig. 7 (Constrained Real NVP). To achieve a recovery of similar quality with Real NVP, as shown in Fig. 7 (Real NVP), all latent variables need to be modified, which are of order O(L^2).
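Referring back to the mixing rule of Eq. 9, the sketch below shows how the latent variables of two images could be combined level by level. The dictionary-of-levels representation and the encode/decode helpers are our own assumptions, not the paper's API.

```python
def hyperbolic_mix(z_a, z_b, theta):
    """Eq. 9: take levels h >= theta from image A and levels h < theta from image B.

    z_a, z_b: dicts mapping RG level h -> latent tensor at that level.
    """
    return {h: (z_a[h] if h >= theta else z_b[h]) for h in z_a}

# Assumed helpers: rg_flow.encode(x) -> dict of per-level latents, rg_flow.decode(z) -> image.
#   z_a, z_b = rg_flow.encode(x_a), rg_flow.encode(x_b)
#   mixed = rg_flow.decode(hyperbolic_mix(z_a, z_b, theta=3))
```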
See Appendix F for more details about the inpainting task and its quantitative evaluations.\nUnder review as a conference paper at ICLR 2021\nGround truth Corrupted Image RG-Flow Real NVP\nGround truth" }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "In this paper, we combined the ideas of renormalization group and sparse prior distribution to design RG-Flow, a probabilistic flow-based generative model. This versatile architecture can be incorporated with any bijective map to achieve an expressive flow-based generative model. We have shown that RG-Flow can separate information at different scales and encode them in latent variables living on a hyperbolic tree. To visualize the latent representations in RG-Flow, we defined the receptive fields for flow-based models in analogy to that in CNN. Taking CelebA dataset as our main example, we have shown that RG-Flow will not only find high-level representations, but also mid-level and low-level ones. The receptive fields serve as a visual guidance for us to find explainable representations. In contrast, the semantic representations of mid-level and low-level structures do not emerge in globally connected multi-scale flow models, such as Real NVP. We have also shown that the latent representations can be mixed at different scales, which achieves an effect similar to style mixing.\nIn our model, if the receptive fields of two latent representations do not overlap, they are naturally disentangled. For high-level representations, we propose to utilize a sparse prior to encourage disentanglement. We find that if the dataset only contains a few high-level factors, such as the 3D Chair dataset (Aubry et al., 2014) shown in Appendix G, it is hard to find explainable high-level disentangled representations, because of the redundant nature of the encoding in flow-based models. Incorporating information theoretic criteria to disentangle high-level representations in the redundant encoding procedure will be an interesting future direction." }, { "heading": "2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December", "text": "7-13, 2015, pp. 3730–3738. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.425. URL https://doi.org/10.1109/ICCV.2015.425.\nFrancesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, volume 97 of Proceedings of Machine Learning Research, pp. 4114–4124. PMLR, 2019.\nIlya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019. OpenReview.net, 2019.\nAlireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow. Adversarial autoencoders. CoRR, abs/1511.05644, 2015.\nPankaj Mehta and David J. Schwab. An exact mapping between the variational renormalization group and deep learning, 2014.\nBruno A Olshausen and David J Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607, 1996.\nBruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research, 37(23):3311–3325, 1997.\nDan Oprisa and Peter Toth. Criticality & deep learning ii: Momentum renormalisation group, 2017.\nFernando Pastawski, Beni Yoshida, Daniel Harlow, and John Preskill. 
Holographic quantum errorcorrecting codes: toy models for the bulk/boundary correspondence. Journal of High Energy Physics, 2015(6):149, Jun 2015. ISSN 1029-8479. doi: 10.1007/JHEP06(2015)149.\nAdam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, pp. 8024–8035, 2019.\nXiao-Liang Qi. Exact holographic mapping and emergent space-time geometry. arXiv: High Energy Physics - Theory, 2013.\nAditya Ramesh, Youngduck Choi, and Yann LeCun. A spectral regularizer for unsupervised disentanglement. CoRR, abs/1812.01161, 2018.\nScott E. Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gomez Colmenarejo, Ziyu Wang, Yutian Chen, Dan Belov, and Nando de Freitas. Parallel multiscale autoregressive density estimation. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, volume 70 of Proceedings of Machine Learning Research, pp. 2912–2921. PMLR, 2017.\nDanilo Jimenez Rezende, George Papamakarios, Sébastien Racanière, Michael S. Albergo, Gurtej Kanwar, Phiala E. Shanahan, and Kyle Cranmer. Normalizing flows on tori and spheres. CoRR, abs/2002.02428, 2020.\nTim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pp. 901, 2016.\nRobin Tibor Schirrmeister, Yuxuan Zhou, Tonio Ball, and Dan Zhang. Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features. CoRR, abs/2006.10848, 2020.\nH. Eugene Stanley. Scaling, universality, and renormalization: Three pillars of modern critical phenomena. Rev. Mod. Phys., 71:S358–S366, Mar 1999. doi: 10.1103/RevModPhys.71.S358.\nBrian Swingle. Entanglement renormalization and holography. Phys. Rev. D, 86:065007, Sep 2012. doi: 10.1103/PhysRevD.86.065007.\nChristian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 2818–2826. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.308. URL https: //doi.org/10.1109/CVPR.2016.308.\nJoshua B. Tenenbaum and William T. Freeman. Separating style and content with bilinear models. Neural Comput., 12(6):1247–1283, 2000.\nDmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, volume 48 of JMLR Workshop and Conference Proceedings, pp. 1349–1357. JMLR.org, 2016.\nArash Vahdat and Jan Kautz. NVAE: A deep hierarchical variational autoencoder. CoRR, abs/2007.03898, 2020.\nG. Vidal. Class of quantum many-body states that can be efficiently simulated. Phys. Rev. Lett., 101:110501, Sep 2008. doi: 10.1103/PhysRevLett.101.110501.\nKenneth G. Wilson. Renormalization group and critical phenomena. i. 
renormalization group and the kadanoff scaling picture. Phys. Rev. B, 4:3174–3183, Nov 1971.\nYi-Zhuang You, Xiao-Liang Qi, and Cenke Xu. Entanglement holographic mapping of many-body localized system by spectrum bifurcation renormalization group. Phys. Rev. B, 93:104205, Mar 2016. doi: 10.1103/PhysRevB.93.104205.\nYi-Zhuang You, Zhao Yang, and Xiao-Liang Qi. Machine learning spatial geometry from entanglement features. Phys. Rev. B, 97:045153, Jan 2018. doi: 10.1103/PhysRevB.97.045153.\nJuexiao Zhang, Yubei Chen, Brian Cheung, and Bruno A. Olshausen. Word embedding visualization via dictionary learning. CoRR, abs/1910.03833, 2019.\nJun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017." } ]
2,020
RG-FLOW: A HIERARCHICAL AND EXPLAINABLE FLOW MODEL BASED ON RENORMALIZATION GROUP AND SPARSE PRIOR
SP:2f7f3a043edf8bbe4164dc748c7fbfc7c7338a02
[ "The authors propose a discriminator-based approach to inverse reinforcement learning (IRL). The discriminator function is trained to attain large values (\"energy\") on trajectories from the current policy and small values on trajectories from an expert policy. The current policy is then improved by using the negative discriminator as a reward signal. The specific discriminator suggested is an autoencoder loss. The authors continue to provide a proof that assuming their discriminator/generator attain a Nash equilibrium, the occupancy measure of the trained policy matches that of the expert policy. They follow up with demonstrating better performance of their approach compared to certain baselines when tested on a number of tasks on Physics simulators." ]
Traditional reinforcement learning methods usually deal with tasks that have explicit reward signals. In the vast majority of realistic settings, however, the environment does not feed back a reward signal immediately, which is a bottleneck for applying modern reinforcement learning approaches to more realistic scenarios. Recently, inverse reinforcement learning (IRL) has made great progress in exploiting expert demonstrations to recover a reward signal for reinforcement learning, and generative adversarial imitation learning is one promising approach. In this paper, we propose a new architecture for training generative adversarial imitation learning, called energy-based generative adversarial imitation learning (EB-GAIL). It views the discriminator as an energy function that assigns low energies to the regions near the expert demonstrations and high energies to other regions. The generator can then be seen as a reinforcement learning procedure that samples trajectories with minimal energy (cost), while the discriminator is trained to assign high energies to these generated trajectories. Concretely, EB-GAIL uses an auto-encoder architecture in place of the discriminator, with the energy being the reconstruction error. Theoretical analysis shows that EB-GAIL matches the occupancy measure of the expert policy during training. Experiments show that EB-GAIL outperforms other state-of-the-art methods while its training process is more stable.
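To make the energy-based view above concrete, the following is a minimal PyTorch sketch (an illustration, not the authors' released code) of the pieces the abstract describes: an auto-encoder discriminator whose reconstruction error is the energy D(x), the margin loss used to train it, and the negative energy used as the reward for the policy. The module name, layer sizes, and the Euclidean reconstruction norm are assumptions; the margin value 5 follows the training setting reported in the paper's appendix.

```python
# Hedged sketch of the EB-GAIL energy function, margin loss, and reward (not the authors' code).
import torch
import torch.nn as nn

class EnergyDiscriminator(nn.Module):
    def __init__(self, sa_dim: int, hidden: int = 100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(sa_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden // 2))
        self.dec = nn.Sequential(nn.Linear(hidden // 2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, sa_dim))

    def energy(self, sa: torch.Tensor) -> torch.Tensor:
        # D(x) = ||Dec(Enc(x)) - x||: one non-negative scalar per state-action pair
        recon = self.dec(self.enc(sa))
        return ((recon - sa) ** 2).sum(dim=-1).sqrt()

def discriminator_loss(D, sa_expert, sa_policy, margin=5.0):
    # L_D = D(chi_E) + [margin - D(chi_i)]^+ , averaged over the minibatch
    return D.energy(sa_expert).mean() + torch.relu(margin - D.energy(sa_policy)).mean()

def reward(D, sa_policy):
    # r(chi_i) = -D(chi_i): the signal handed to the RL (e.g. TRPO) policy update
    with torch.no_grad():
        return -D.energy(sa_policy)
```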
[]
[ { "authors": [ "Pieter Abbeel", "Varun Ganapathi", "Andrew Y. Ng" ], "title": "A learning vehicular dynamics, with application to modeling helicopters", "venue": "In Advances in Neural Information Processing Systems,", "year": 2006 }, { "authors": [ "Pieter Abbeel", "Adam Coates", "Morgan Quigley", "Andrew Y. Ng" ], "title": "An application of reinforcement learning to aerobatic helicopter flight", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Pieter Abbeel", "Adam Coates", "Timothy Hunter", "Andrew Y. Ng" ], "title": "Autonomous autorotation of an rc helicopter", "venue": "In International Symposium on Robotics,", "year": 2008 }, { "authors": [ "Pieter Abbeel", "Dmitri Dolov", "Andrew Y. Ng", "Sebastian Thrun" ], "title": "Apprenticeship learning for motion planning with application to parking lot navigation", "venue": "In IEEE/RSj International Conference on Intelligent Robots and Systems,", "year": 2008 }, { "authors": [ "Pieter Abbeel", "Adam Coates", "Andrew Y. Ng" ], "title": "Autonomous helicopter aerobatics through apprenticeship learning", "venue": "The International Journal of Robotics Research,", "year": 2010 }, { "authors": [ "J. Andrew Bagnell" ], "title": "An invitation to imitation", "venue": "Technical report,", "year": 2015 }, { "authors": [ "Enda Barrett", "Stephen Linder" ], "title": "Autonomous hva control, a reinforcement learning approach", "venue": "Machine Learning and Knowledge Discovery in Databases,", "year": 2015 }, { "authors": [ "Jaedeug Choi", "Kee-Eung Kim" ], "title": "Bayesian nonparametric feature construction for inverse reinforcement learning", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "Adam Coates", "Pieter Abbeel", "Andrew Y. 
Ng" ], "title": "Learning for control from multiple demonstrations", "venue": "In International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "Rémi Coulom" ], "title": "Reinforcement learning using neural networks, with applications to motor control", "venue": "PhD thesis, Institut National Polytechnique de Grenoble-INPG,", "year": 2002 }, { "authors": [ "Gerald DeJong", "Mark W Spong" ], "title": "Swinging up the acrobot: An example of intelligent control", "venue": "In Proceedings of 1994 American Control Conference-ACC’94,", "year": 1994 }, { "authors": [ "PEK Donaldson" ], "title": "Error decorrelation: a technique for matching a class of functions", "venue": "In Proceedings of the Third International Conference on Medical Electronics,", "year": 1960 }, { "authors": [ "Kenji Doya" ], "title": "Reinforcement learning in continuous time and space", "venue": "Neural computation,", "year": 2000 }, { "authors": [ "Tom Erez", "Yuval Tassa", "Emanuel Todorov" ], "title": "Infinite horizon model predictive control for nonlinear periodic tasks", "venue": "Manuscript under review,", "year": 2011 }, { "authors": [ "Chelsea Finn", "Sergey Levine", "Pieter Abbeel" ], "title": "Guided cost learning: Deep inverse optimal control via policy optimization", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "K Furuta", "T Okutani", "H Sone" ], "title": "Computer control of a double inverted pendulum", "venue": "Computers & Electrical Engineering,", "year": 1978 }, { "authors": [ "Nicolas Heess", "Gregory Wayne", "David Silver", "Timothy Lillicrap", "Tom Erez", "Yuval Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ming Jin", "Costas Spanos" ], "title": "Inverse reinforcement learning via deep gaussian process", "venue": "In arXiv:1512.08065v1,", "year": 2015 }, { "authors": [ "H Kimura", "S Kobayashi" ], "title": "Stochastic real-valued reinforcement learning to solve a nonlinear control problem", "venue": "In IEEE SMC’99 Conference Proceedings", "year": 1999 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "M Ranzato", "F Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting structured data,", "year": 2006 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Guided policy search", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Sergey Levine", "Zoran Popvić" ], "title": "Feature construction for inverse reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Donald Michie", "Roger A Chambers" ], "title": "Boxes: An experiment in adaptive control", "venue": "Machine intelligence,", "year": 1968 }, { "authors": [ "Andrew Moore" ], "title": "Efficient memory-based learning for robot control", "venue": "Technical report,", "year": 1990 }, { "authors": [ "Richard M Murray", "John Edmond Hauser" ], "title": "A case study in approximate linearization: The acrobat example", "venue": "Electronics Research Laboratory, College of Engineering, University of California,", "year": 1991 }, { "authors": [ "Seshashayee S Murthy", "Marc H Raibert" ], "title": "3d balance in 
legged locomotion: modeling and simulation for the one-legged case", "venue": "ACM SIGGRAPH Computer Graphics,", "year": 1984 }, { "authors": [ "Andrew Y. Ng", "Stuart Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Andrew Y. Ng", "Adam Coates", "Mark Diel", "Varun Ganapathi", "Jamie Schulte", "Ben Tse", "Eric Berger", "Eric Liang" ], "title": "Inverted autonomous helicopter flight via reinforcement learning", "venue": "In International Symposium on Experimental Robotics,", "year": 2004 }, { "authors": [ "Quoc Phong Nguyen", "Kian Hsiang Low", "Patrick Jaillet" ], "title": "Inverse reinforcement learning with locally consistent reward functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Edward M Purcell" ], "title": "Life at low reynolds number", "venue": "American journal of physics,", "year": 1977 }, { "authors": [ "Marc H Raibert", "Jessica K Hodgins" ], "title": "Animation of dynamic legged locomotion", "venue": "In Proceedings of the 18th annual conference on Computer graphics and interactive techniques,", "year": 1991 }, { "authors": [ "Deepak Ramachandran", "Eyal Amir" ], "title": "Bayesian inverse reinforcement learning", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2007 }, { "authors": [ "C.E. Rasmussen", "C.K.I. Williams" ], "title": "Gaussian Processes for Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "Nathan D. Ratliff", "J. Andrew Bagnell", "Martin A. Zinkevich" ], "title": "Maximum margin planning", "venue": "In International Conference on Machine Learning,", "year": 2006 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "Andrew Stephenson" ], "title": "On induced stability. The London, Edinburgh, and Dublin", "venue": "Philosophical Magazine and Journal of Science,", "year": 1908 }, { "authors": [ "Masashi Sugiyama" ], "title": "Statistical Reinforcement Learning: Modern Machine Learning Approaches", "venue": "CRC Press,", "year": 2015 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 1998 }, { "authors": [ "Umar Syed", "Michael Bowling", "Robert E. 
Schapire" ], "title": "Apprenticeship learning using linear programming", "venue": "In International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "Yuval Tassa", "Tom Erez", "Emanuel Todorov" ], "title": "Synthesis and stabilization of complex behaviors through online trajectory optimization", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Pawel Wawrzynski" ], "title": "Learning to control a 6-degree-of-freedom walking robot", "venue": "In EUROCON 2007-The International Conference on\" Computer as a Tool\",", "year": 2007 }, { "authors": [ "Bernard Widrow" ], "title": "Pattern recognition and adaptive control", "venue": "IEEE Transactions on Applications and Industry,", "year": 1964 }, { "authors": [ "Junbo Zhao", "Machael Mathieu", "Yann LeCun" ], "title": "Energy-based generative adversarial network", "venue": "arXiv preprint arXiv:", "year": 2016 }, { "authors": [ "Brian D. Ziebart", "Andrew Maas", "J. Andrew Bagnell", "Anind K. Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2008 }, { "authors": [ "Brian D. Ziebart", "J. Andrew Bagnell", "Anind K. Dey" ], "title": "Modeling interaction via the principle of maximum causal entropy", "venue": "In International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Zhao" ], "title": "2016) Let a, b ≥ 0, φ(x) = ax+ b[m− x]. The minimum of φ on [0,+∞) exists and is reached in m if a < b, and it is reached in 0 otherwise", "venue": null, "year": 2016 }, { "authors": [ "Zhao" ], "title": "2016) If p and q are probability densities, and the function 1A(x) = 1 if x ∈ A otherwise", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Motivated by applying reinforcement learning algorithms into more realistic tasks, we find that most realistic environments cannot feed an explicit reward signal back to the agent immediately. It becomes a bottleneck for traditional reinforcement learning methods to be applied into more realistic scenarios. So how to infer the latent reward function from expert demonstrations is of great significance. Recently, a lot of great work have been proposed to solve this problem. They are also successfully applied in scientific inquiries, such as Stanford autonomous helicopter Abbeel et al. (2006) Abbeel et al. (2007) Ng et al. (2004) Coates et al. (2008) Abbeel et al. (2008a) Abbeel et al. (2010), as well as practical challenges such as navigation Ratliff et al. (2006) Abbeel et al. (2008b) Ziebart et al. (2008) Ziebart et al. (2010) and intelligent building controls Barrett & Linder (2015).\nThe goal of imitation learning is to mimic the expert behavior from expert demonstrations without access to a reinforcement signal from the environment. The algorithms in this field can be divided into two board categories: behavioral cloning and inverse reinforcement learning. Behavioral cloning formulate this problem as a supervised learning problem which aims at mapping state action pairs from expert trajectories to policy. These methods suffer from the problem of compounding errors (covariate shift) which only learn the actions of the expert but not reason about what the expert is trying to achieve. By the contrast, inverse reinforcement learning recovers the reward function from expert demonstrations and then optimize the policy under such an unknown reward function.\nIn this paper, we propose energy-based generative adversarial imitation learning which views the discriminator as an energy function without explicit probabilistic interpretation. The energy function computed by the discriminator can be viewed as a trainable cost function for the generator, while the discriminator is trained to assign low energy values to the regions of expert demonstrations, and\nhigher energy values outside these regions. We use an auto-encoder to represent the discriminator, and the reconstruction error is thought to be the energy. There are many other choices to learn the energy function, but an auto-encoder is quite efficient.\nOur main contributions are summarized as follows.\n• An EB-GAIL framework with the discriminator using an auto-encoder architecture in which the energy is the reconstruction error. • Theoretical analysis shows that the policy rolls out the trajectories that are indistinguishable\nfrom the distribution of the expert demonstrations by matching the occupancy measure with the expert policy. • Experiments show that EB-GAIL outperforms several SoTA imitation learning algorithms\nwhile the training process for EB-GAIL can be more stable." }, { "heading": "2 BACKGROUND", "text": "In this section, we’ll briefly introduce the basic concepts in direct reinforcement learning Sutton & Barto (1998) Sugiyama (2015), inverse reinforcement learning Ng & Russell (2000), imitation learning Bagnell (2015), and energy based models LeCun et al. (2006)." }, { "heading": "2.1 DIRECT REINFORCEMENT LEARNING", "text": "Reinforcement Learning Sutton & Barto (1998) Sugiyama (2015), which is usually used for sequential decision making problems, can help us to learn from the interactions with the environment. 
The process for direct reinforcement learning is that at each time step, the agent receives a state st and chooses an action at from the action space A, following a stochastic policy π(at|st). After that, the environment transit to a new state st+1 and gives a scalar reward rt back to the agent. This process continues until the agent reaches a terminal state.\nIn this case, the training model can be thought as a Markov decision process (MDP), which is a tuple (S,A, T, γ,D,R). In this tuple, S is the state space; A is the action space; T = Psa is a probability matrix for state transitions, owing to the environment dynamics; γ ∈ (0, 1] is a discount factor; D is the initial-state transition distribution; and R : S → A is the reward function, which is assumed to be bounded in absolute value by 1. In addition, we also define that a policy π is a mapping from states to probability distributions over actions, which is also called a stochastic policy.\nFor a certain task, the goal of direct reinforcement learning is to maximize the total future reward. For simplification, we define the value function to be a prediction of the total future reward, which can be shown as a discounted future reward: V = ∑∞ t=0 γ\ntRt. Besides, we also define the action value function as Q(s, a) = E[Rt|st = s, at = a], which is the expected return for selecting action a in state s. According to the Bellman Optimality, an optimal action value function Q∗(s, a) is the maximum action value achievable by any policy for state s and action a." }, { "heading": "2.2 INVERSE REINFORCEMENT LEARNING", "text": "The goal of inverse reinforcement learning is to infer the reward signal with respect to the expert demonstrations which are assumed to be the observations of optimal behaviors Ng & Russell (2000).\nIn the past decade, a lot of great work have been proposed towards enlarging the ability of reward function representation. Take some traditional methods for example, in 2010, FIRL Levine & Popvić (2010) was proposed to learn a set of composites features based on logical conjunctions with nonlinear functions for the reward signal. Later, non-parametric methods based on Gaussian Process(GP) Rasmussen & Williams (2006) are proposed to enlarge the function space of latent reward to allow for non-linearity in Levine & Popvić (2010). For undertaking the learning of an abstract structure with smaller data sets, Jin et al. Jin & Spanos (2015) combined deep belief networks and gaussian process to optimize the existing algorithm GP-IRL.\nTo solve the substantial noise in the sensor measurements, some work have been applied here such as bayesian programming, probabilistic graphical model and so on. For example, in 2007, Ramachandran et al. proposed a Bayesian nonparametric approach Ramachandran & Amir (2007)\nto construct the reward function features in IRL, which is so-called Bayesian IRL. Later, Choi et al. Choi & Kim (2014) extend this algorithm by defining a prior on the composite features, which are defined to be the logical conjunctions of the atomic features.\nBy assuming that each trajectory can be generated by multiple locally consistent reward functions, Nguyen et al. used an expectation-maximization (EM) algorithm to learn the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert’s demonstrated trajectories Nguyen et al. (2015). 
As a result, the most likely partition of a trajectory into segments that are generated from different locally consistent reward functions selected by EM can be derived. Experiments show that the algorithm outperforms the SoTA EM clustering with maximum likelihood IRL." }, { "heading": "2.3 IMITATION LEARNING", "text": "Imitation learning is a study of algorithms that can mimic the experts’ demonstrations or a teachers’ behavior. Unlike inverse reinforcement learning, the goal of imitation learning is to obtain a policy from teachers’ behaviors rather than to recover the reward function for some certain tasks.\nThe algorithms in imitation learning can be classified into two categories: behavioral cloning, and inverse reinforcement learning. One fatal weakness for behavioral cloning is that these methods can only learn the actions from the teacher rather than learn the motivation from teachers’ behaviors. To solve this problem, inverse reinforcement learning was proposed to recover the reward function for decision making problems. By assuming that the teachers’ behavior is optimal, these methods tend to recover the reward function in a Markov decision process. So when combined with direct reinforcement learning methods, inverse reinforcement learning can realize the process for mimicking the teachers’ behaviors." }, { "heading": "2.4 ENERGY BASED MODEL", "text": "The essence of the energy based model LeCun et al. (2006) is to build a function that maps each point of an input space to a single scalar, which is called “energy”. The learning phase is a data driven process that shapes the energy surface in such a way that the desired configurations get assigned low energies, while the incorrect ones are given high energies. Supervised learning falls into this framework: for each x in the training set, the energy of the pair (x, y) takes low values when y is the correct label and higher values for incorrect y’s. Similarly, when modeling x alone within an unsupervised learning setting, lower energy is attributed to the data manifold. The term contrastive sample is often used to refer to a data point causing an energy pull up, such as the incorrect y’s in supervised learning and points from low data density regions in unsupervised learning.\nDenoted that the energy function is ε, the connection between probability and energy can be built through Gibbs distribution:\nP (y|x) = exp(−βε(y, x))∫ y∈Y exp(−βε(y, x)) , (1)\nthe denominator here is the partition function which represents the total energy in the data space and β is an arbitrary positive constant. The formulation of this connection might seem arbitrary, but other formulation can be obtained by re-defining the energy function." }, { "heading": "3 ENERGY BASED GENERATIVE ADVERSARIAL IMITATION LEARNING", "text": "(EB-GAIL)" }, { "heading": "3.1 METHODOLOGY", "text": "The output of the discriminator goes through an objective functional in order to shape the energy function, assigning low energy to the regions near the expert demonstrations and higher energy to the other regions. In this work, we use an auto-encoder to represent the discriminator and the reconstruction error of the auto-encoder is assumed to be the energy. 
Meanwhile, we use a margin loss function to train EB-GAIL while one loss function is to train the discriminator and the other loss function is assumed to be the reward function for the reinforcement learning procedure.\nGiven a positive margin margin, a state-action pair χE sampled from expert demonstrations, and a state-action pair χi rolled out by a trained policy, the discriminator loss function LD is formally defined by:\nLD = D(χE) + [margin−D(χi)]+ (2) where [·]+ = max(0, ·). Meanwhile, the reward function for the reinforcement learning procedure is:\nr(χi) = −D(χi) (3)\nMaximizing the total reward for the reinforcement learning is similar to minimizing the second term of LD. In practice, we observe that the loss function can effectively avoid gradient vanishing and mode collapse problems." }, { "heading": "3.2 USING AUTO-ENCODER", "text": "In this work, we propose that the discriminator is structured as an auto-encoder, which assigns low energy to the regions near the expert demonstrations and high energy to the other regions.\nThe discriminator is defined as:\nD(x) = ‖Dec(Enc(x))− x‖ (4)\nAlgorithm 1 Energy based Generative Adversarial Imitation Learning (EB-GAIL) Require: Initial parameters of policy, discriminator θ0, w0.\nExpert trajectories τE ∼ πE . Choose a value for margin.\nEnsure: 1: for i = 0 to N do 2: Sample trajectories: τi ∼ πθi with current policy πθi . 3: Sample state-action pairs χi ∼ τi and χE ∼ πE with same batch size. 4: Update wi to wi+1 by decreasing with the gradient:\nÊχi [∇wiDwi(s, a)] + ÊχE [∇wi [margin−Dwi(s, a)]+]\n5: Take a policy step from θi to θi+1, using the TRPO update rule with the reward function −Dwi(s, a), and the objective function for TRPO is:\nÊτi [(−Dwi(s, a))]− λH(πθi).\n6: end for\nFigure 1 depicts the architecture for EB-GAIL. The reinforcement learning component is trained to roll out trajectories τi, which are the sequences of state-action pairs. The discriminator D takes either expert or generated state-action pairs, and estimates the energy value accordingly. Here, we assume\nthe discriminator D produces non-negative values. There are also many other choices for defining the energy function LeCun et al. (2006), but the auto-encoder is an efficient one.\nAlgorithm 1 depicts the procedure for training EB-GAIL. The first step is to sample the stateaction pairs from expert demonstrations χE ∼ πE . The second step is to sample the state-action pairs from the trajectories rolled out by current policy χi ∼ τi with the same batch size. Then we update the discriminator by decreasing with the gradient: Êχi [∇wiDwi(s, a)]+ÊχE [∇wi [margin− Dwi(s, a)]\n+]. The fourth step is to update the policy assuming that the reward function is−Dwi(s, a). We runs these steps with N iterations until the policy converges." }, { "heading": "3.3 THEORETICAL ANALYSIS", "text": "In this section, we introduce a theoretical analysis for EB-GAIL. We show that if EB-GAIL reaches a Nash equilibrium, then the policy perfectly match the occupancy measure with the expert policy. This section is done in a non-parametric setting. We also assume that D and G have infinite capacity.\nFirstly, we provide a definition for occupancy measure for inverse reinforcement learning.\nDefinition 1. (occupancy measure) The agent rolls out the trajectories which can be divided into state-action pairs (s, a) with policy π. 
This leads to the definition of occupancy measure ρs,aπ (s, a) and ρsπ(s) as the density of occurrence of states or state-action pairs:\nρs,aπ = ∞∑ t=0 γtP (st = s, at = s|π)\n= π(a|s) ∞∑ t=0 γtP (st = s|π) = π(a|s)ρsπ(s), (5)\nwhere we assume γ = 1 for simplicity in the following paragraph.\nA basic result is that the set of valid occupancy measures D = {ρπ : π ∈ Π} can be written as a feasible set of affine constraints: if p0(s) is the distribution of starting states and P (s′|s, a) is the dynamics model, then D = {ρ : ρ ≥ 0 and ∑ a ρ(s, a) = p0(s) + γ ∑ s′,a P (s|s′, a)ρ(s′, a) ∀s ∈ S}. Proposition 1. (Theorem 2 in Syed et al. (2008)) If ρ ∈ D, then ρ is the occupancy measure for πρ(a|s) = ρ(s,a)∑\na′ ρ(s,a ′) , and πρ is the only policy whose occupancy measure is ρ.\nHere we are justified in writing πρ to denote the unique policy for an occupancy measure ρ. Now, let us define an IRL primitive procedure, which finds a reward function such that the expert performs better than all other policies, with the reward regularized by φ:\nIRLφ(πE) = arg min r∈RS×A φ(r) + (max π∈Π H(π)− Eπ[r(s, a)]) + EπE [r(s, a)] (6)\nWe are interested in the policy given by running reinforcement learning procedure on the reward function which is the output of IRL.\nAnd now we are ready to acquire the policy learned by RL under the reward function recovered by IRL:\nProposition 2. (Proposition 3.2 in Ho & Ermon (2016))\nRL ◦ IRLφ(πE) = arg min π∈Π −H(π) + φ∗(ρπ − ρπE ) (7)\nRemark 1. Proposition 2 tells that φ-regularized inverse reinforcement learning seeks a policy whose occupancy measure is close to the expert’s, which is measured by the convex function φ∗.\nWe present that the discriminator is to minimize dφ(ρπ, ρπE )−H(π), where dφ(ρπ, ρπE ) = φ∗(ρπ− ρπE ) by modifying the IRL regularizer φ, so that dφ(ρπ, ρπE ) smoothly penalizes violations in difference between the occupancy measures. And the generator is to maximize the total reward for a policy, which is E[ ∑T t=0 γ tr(s, a)] = E[ ∑T t=0D(s, a)] (we assume γ = 1 for simplicity).\nProposition 3. Our choice (EB-GAIL) of φ is:\nφ(ρπ − ρπE ) = minEπE [D(s, a)] + Eπ[(margin−D(s, a))+] (8)\nThe proof of this proposition is in Appendix A.1.1. Remark 2. Proposition 3 tells that in our paper, the φ is determined by the loss functional, which is the margin based loss function. And we can further prove that this formulation of φ effectively help the algorithm to match the occupancy measure with expert policy during the training process.\nConsidered the loss functionals in section 3.1 and proposition 3, we define:\nV (G,D) = ∫ χi,χE φ(ρπ − ρπE )ρπρπEdχidχE\n= ∫ χi,χE\n{EπE [D(χE)] + Eπ[(margin−D(χi))+]}︸ ︷︷ ︸ LD ρπρπEdχidχE (9)\nU(G,D) = ∫ χi\nD(χi)︸ ︷︷ ︸ −r(χi) ρπdχi (10)\nObviously, we train the discriminator D to minimize the quantity V and the generator G to minimize the quantity U . A Nash equilibrium of EB-GAIL is a pair (G∗, D∗) that satisfies:\nV (G∗, D∗) ≤ V (G∗, D) ∀D U(G∗, D∗) ≤ U(G,D∗) ∀G (11)\nTheorem 1. If the system (EB-GAIL) reaches a Nash equilibrium, then ρπ = ρπE , and V (G∗, D∗) = m, where m represents margin.\nThe proof of this theorem is in Appendix A.1.2. Remark 3. The proof follows the idea that won’t violate the equation 11 (get conclusion that m ≤ V (G∗, D∗) ≤ m). And therefore, the occupancy measure can be matched (ρπ = ρπE ) when the system reaches a Nash equilibrium. 
Theorem 1 tells that using EB-GAIL loss functionals can help the trained policy to match the occupancy measure with the expert policy and V (G,D) will converge to margin(m) which indicate that the discriminator will converge while the system reaches a Nash equilibrium.\nTheorem 2. A Nash equilibrium of this system exists and is characterized by (1)ρπ = ρπE , and (2) there exists a constant γ ∈ [0,m] such that D∗(χ) = γ almost everywhere, where m represents margin.\nThe proof of this theorem is in Appendix A.1.3. Remark 4. Theorem 2 tells that if the Nash equilibrium of EB-GAIL exists, then it is characterized by that the trained policy could match the occupancy measure with expert policy (the trajectories rolled by the trained policy will be indistinguishable with the expert demonstrations) and D∗(χ) = γ almost everywhere which indicate that the discriminator could converge." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 TASKS", "text": "The tasks in the presented benchmark can be divided into two categories: basic tasks and locomotion tasks. We briefly describe them in this section. We choose to implement all tasks using physics simulators rather than symbolic equations, since the former approach is less error-prone and permits easy modification of each task. Tasks with simple dynamics are implemented using Box2D, an open-source, freely available 2D physics simulator. Tasks with more complicated dynamics, such as locomotion, are implemented using MuJoCo, a 3D physics simulator with better modeling of contacts." }, { "heading": "4.1.1 BASIC TASKS", "text": "We implement five basic tasks that have been widely analyzed in reinforcement learning and imitation learning literature: Cart-Pole Balancing Stephenson (1908) Donaldson (1960) Widrow (1964) Michie & Chambers (1968); Cart-Pole Swing Up Kimura & Kobayashi (1999) Doya (2000); Mountain Car Moore (1990); Acrobot Swing Up DeJong & Spong (1994) Murray & Hauser (1991) Doya (2000); and Double Inverted Pendulum Balancing Furuta et al. (1978). These relatively low dimensional tasks provide quick evaluations and comparisons of imitation learning algorithms." }, { "heading": "4.1.2 LOCOMOTION TASKS", "text": "In this category, we implement seven locomotion tasks of varying dynamics and difficulty: Swimmer Purcell (1977) Coulom (2002) Levine & Koltun (2013) Schulman et al. (2015a), Hopper Murthy & Raibert (1984) Erez et al. (2011) Levine & Koltun (2013) Schulman et al. (2015a), Walker Raibert & Hodgins (1991) Erez et al. (2011) Levine & Koltun (2013) Schulman et al. (2015a), Half-Cheetah Wawrzynski (2007) Heess et al. (2015), Ant Schulman et al. (2015b), Simple Humanoid Tassa et al. (2012) Schulman et al. (2015b), and Full Humanoid Tassa et al. (2012). The goal for all these tasks is to move forward as quickly as possible. These tasks are more challenging than the basic tasks due to high degrees of freedom. In addition, a great amount of exploration is needed to learn to move forward without getting stuck at local optima. Since we penalize for excessive controls as well as falling over, during the initial stage of learning, when the robot is not yet able to move forward for a sufficient distance without falling, apparent local optima exist including staying at the origin or diving forward slowly." }, { "heading": "4.2 BASELINES", "text": "In this section, we will introduce the the baseline methods used in detail.\nBehavioral Cloning (BC): learning a mapping from state space to action space, with supervised learning methods. 
Specifically, the algorithm cannot get more information from the expert any more except the expert demonstrations. So the problem of compounding errors will occur and the performance might be poor. But in fact, the performance is quite promising.\nGuided Cost Learning (GCL): the algorithm of Finn et al. (2016), which is actually a sampling based maximum entropy inverse reinforcement learning method with neural networks as the cost function. The reward function search process and policy update process are in the inner loop of the algorithm. So it will need more computation resources.\nGenerative Adversarial Imitation Learning (GAIL): the algorithm of Ho & Ermon (2016) which use a GAN architecture for policy improvement and reward function fitting. GAIL use the discriminator to compute the reward for some state-action pair, and then using the reward function to update the policy." }, { "heading": "4.3 TRAINING SETTING", "text": "We used all the algorithms to train policies of the same neural network architecture for all tasks: two hidden layers of 100 units each, with tanh nonlinearities in between. All networks were always initialized randomly at the start of each trial. For each task, we gave BC, GCL, GAIL and our EB-GAIL exactly the same amount of environment interaction for training. We ran all algorithms 10 times over different random seeds in all environments. More information is depicted in Appendix A.2." }, { "heading": "4.4 RESULTS AND DISCUSSION", "text": "Figure 2 depicts the results which are the scaled rewards for different imitation learning methods. We set the expert trajectory reward is 1.0 and random policy’s reward is 0.0. In basic tasks, BC achieves comparable results compared with GCL and GAIL. And our proposed method EB-GAIL achieves expert performance on these tasks. Obviously, our EB-GAIL outperforms other SoTA methods on this evaluation metric. In locomotion tasks, GCL and GAIL achieve higher rewards than BC except for HalfCheetah and Ant. In these tasks, EB-GAIL obtains better performance than BC, GCL and\nGAIL even with limited expert trajectories (we only use 1 ∼ 10 trajectories in basic tasks and 4 ∼ 25 trajectories in locomotion tasks except for humanoid tasks). And our proposed method EB-GAIL still achieves expert performance and outperform all the compared algorithms. These results definitely show our EB-GAIL outperforms other SoTA methods on basic tasks and MuJoCo environments. Meanwhile, the deviation for our proposed EB-GAIL is also roughly lower than other SoTA methods. It shows the stability of training our EB-GAIL.\nExperiment results tells that our EB-GAIL achieves expert performance on basic tasks and locomotion tasks, and it basically outperforms other SoTA methods. Meanwhile the training process for EB-GAIL is more stable than other SoTA methods." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present an energy based method for generative adversarial imitation learning, which is so-called EB-GAIL. It views the discriminator as an energy function that attributes low energies to the regions near the expert demonstrations and high energies to other regions. To learn the energy of the state-action space, we use the mean square error of an auto-encoder to represent the energy function (there are still many other choices for us to represent the energy function, but the auto-encoder is quite efficient). Theoretical analysis depicts the system could match the occupancy measure with expert policy when the system reaches a Nash equilibrium. 
Meanwhile it also tells that if the Nash equilibrium of EB-GAIL exists, then it is characterized by that the trained policy could match the occupancy measure with expert policy and the discriminator could converge to a number between 0 and the margin. Experiment results show that our EB-GAIL achieves SoTA performance, for learning a policy that can imitate and even outperform the human experts. Meanwhile, the training procedure for EB-GAIL is more stable than other SoTA methods. As we demonstrated, our method is also quite sample efficient by learning from limited expert demonstrations. We hope that our work can further be applied into more realistic scenarios." }, { "heading": "A APPENDIX", "text": "A.1 OMITTED PROOFS\nA.1.1 PROOF OF PROPOSITION 3\nOur choice (EB-GAIL) of φ is:\nφ(ρπ − ρπE ) = minEπE [D(s, a)] + Eπ[(margin−D(s, a))+] (12)\nProof. This proof will take the JS divergence as an example. And we will prove that the cost regularizer\nφGA(c) =\n{ EπE [g(c(s, a))] if c < 0\n+∞ otherwise (13)\nwhere\ng(x) = { −x− log(1− ex) if x < 0 +∞ otherwise (14)\nsatisfies: φ∗GA(ρπ − ρπE ) = maxEπ[log(D(s, a))] + EπE [log(1−D(s, a))]. (15)\nUsing the logistic loss log(1 + exp(−x)), we see that applying Proposition A.1 in Ho & Ermon (2016), we get:\nφ∗GA(ρπ − ρπE ) = −Rφ(ρπ, ρπE ) = ∑ s,a max γ∈R ρπ(s, a)log( 1 1 + exp(−γ) ) + ρπE (s, a)log(\n1\n1 + exp(γ) )\n= ∑ s,a max γ∈R ρπ(s, a)log( 1 1 + exp(−γ) ) + ρπE (s, a)log(1−\n1\n1 + exp(−γ) )\n= ∑ s,a max γ∈R ρπ(s, a)log(δ(γ)) + ρπE (s, a)log(1− δ(γ)),\n(16)\nwhere δ(x) = 1/(1 + exp(−x)) is the sigmoid function. Because the range of δ is (0, 1), we can write:\nφ∗GA(ρπ − ρπE ) = ∑ s,a max ρπ(s, a)logd+ ρπE (s, a)log(1− d)\n= max ∑ s,a ρπ(s, a)log(D(s, a)) + ρπE (s, a)log(1−D(s, a)) (17)\nwhich is the desired expression.\nObviously, in our EB-GAIL, we choose the expression for φ is:\nminEπE [D(s, a)] + Eπ[(margin−D(s, a))+] (18)\nThis completes the proof.\nA.1.2 PROOF OF THEOREM 1\nLemma 1. Zhao et al. (2016) Let a, b ≥ 0, φ(x) = ax+ b[m− x]+. The minimum of φ on [0,+∞) exists and is reached in m if a < b, and it is reached in 0 otherwise.\nProof. The function φ is defined on [0,+∞), its derivative is defined on [0,+∞)\\{m} and φ′(x) = a− b if x ∈ [0,m) and φ′(x) = a if x ∈ (m,+∞). So when a < b, the function is decreasing on [0,m) and increasing on (m,+∞). Since it is continuous, it has a minimum in m.\nOn the other hand, if a ≥ b the function φ is increasing on [0,+∞), so 0 is a minimum. This completes the proof.\nLemma 2. Zhao et al. (2016) If p and q are probability densities, and the function 1A(x) = 1 if x ∈ A otherwise 1A(x) = 0, then ∫ x 1p(x)<q(x)dx = 0 if and only if ∫ x 1p(x) 6=q(x)dx = 0.\nProof. Let’s assume that ∫ x\n1p(x)<q(x)dx = 0. Then∫ x 1p(x)>q(x)(p(x)− q(x))dx\n= ∫ x (1− 1p(x)≤q(x))(p(x)− q(x))dx\n= ∫ x p(x)dx− ∫ x q(x)dx+ ∫ x 1p(x)≤q(x)(p(x)− q(x))dx\n=1− 1 + ∫ x (1p(x)<q(x) + 1p(x)=q(x))(p(x)− q(x))dx\n= ∫ x\n1p(x)<q(x)(p(x)− q(x))dx+∫ x 1p(x)=q(x)(p(x)− q(x))dx\n=0\n(19)\nSo ∫ x\n1p(x)>q(x)(p(x) − q(x))dx = 0 and since the term in the integral is always non-negative, 1p(x)>q(x)(p(x) − q(x)) = 0 for almost all x. And p(x) − q(x) = 0 implies 1p(x)>q(x) = 0, so 1p(x)>q(x) = 0 almost everywhere.\nThis completes the proof.\nTheorem 1: If the system (EB-GAIL) reaches a Nash equilibrium, then ρπ = ρπE , and V (G ∗, D∗) = m, where m represents margin.\nProof. First we observe that\nV (G∗, D∗) = ∫ χE ρπE (χE)D(χE)dχE + ∫ χi ρπ(χi)[m−D(χi)]+dχi\n= ∫ χE (ρπE (χE)D(χE) + ρπ∗(χE)[m−D(χE)]+)dχE . 
(20)\nLemma 1 shows: (1) D∗(χ) ≤ m almost everywhere. To verify it, let us assume that there exists a set of measure non-zero such that D∗(χ) > m. Let D̃(χ) = min(D∗(χ),m). Then V (G∗, D̃) < V (G∗, D∗) which violates equation 11. (2) The function φ reaches its minimum in m if a < b and in 0 otherwise. So V (G∗, D) reaches its minimum when we replace D∗(x) by these values. We obtain\nV (G∗, D∗) = m ∫ χE 1ρπE (x)<ρπ∗ (χE)ρπE (χE)dχE+\nm ∫ χE 1ρπE (χE)≤ρπ∗ (χE)ρπ ∗(χE)dχE\n= m ∫ χE (1ρπE (χE)<ρπ∗ (χE)ρπE (χE)+\n(1− 1ρπE (χE)<ρπ∗ (χE))ρπ∗(χE))dχE\n= m ∫ χE ρπ∗(χE)dχE +m ∫ χE 1ρπE (χE)<ρπ∗ (χE)(ρπE (χE)− ρπ∗)dχE\n= m+m ∫ χE 1ρπE (χE)<ρπ∗ (χE)(ρπE (χE)− ρπ∗(χE))dχE .\n(21)\nThe second term in equation 21 is non-positive, so V (G∗, D∗) ≤ m.\nBy putting the ideal generator that generates pdata into the right side of equation 11, we get∫ χE ρπ∗(χE)D ∗(χE)dχE ≤ ∫ χE ρπE (χE)D ∗(χE)dχE . (22)\nThus by equation 20,∫ χE ρπ∗(χE)D ∗(χE)dχE + ∫ χE ρπ∗(χE)[m−D∗(χE)]+dχE ≤ V (G∗, D∗) (23)\nand since D∗(χ) ≤ m, we get m ≤ V (G∗, D∗). Thus, m ≤ V (G∗, D∗) ≤ m, so V (G∗, D∗) = m. According to equation 21, we see that can only happen if ∫ χ 1ρπE (χ)<ρπ(χ)dχ = 0. According to Lemma2, it is true if and only if ρπ = ρπE .\nThis completes the proof.\nA.1.3 PROOF OF THEOREM 2\nA Nash equilibrium of this system exists and is characterized by (1)ρπ = ρπE , and (2) there exists a constant γ ∈ [0,m] such that D∗(x) = γ (almost everywhere).\nProof. The sufficient conditions are obvious. The necessary condition on π∗ comes from theorem 1, and the necessary condition on D∗(χ) ≤ m is from the proof of theorem 1. Let us now assume that D∗(χ) is not constant almost everywhere and find a contradiction. If it is not, then there exists a constant C and a set S of non-zero measure such that ∀χ ∈ S, D∗(χ) ≤ C and ∀χ /∈ S , D∗(χ) > C. In addition we can choose S such that there exists a subset S ′ ⊂ S of non-zero measure such that ρπE (χ) > 0 on S ′. We can build a generator policy ρ0 such that ρπ0(χ) ≤ ρπE (χ) over S and ρπ0(χ) < ρπE (χ) over S ′. We compute\nU(G∗, D∗)− U(G0, D∗) = ∫ χ (ρπE − ρπ0)D∗(χ)dχ\n= ∫ χ (ρπE − ρπ0)(D∗ − C)dχ\n= ∫ S\n(ρπE − ρπ0)(D∗(χ)− C)dχ+∫ RN\\S (ρπE − ρπ0)(D∗(χ)− C)dχ > 0\n(24)\nwhich violates equation 11.\nThis completes the proof.\nA.2 EXPERIMENTS\nA.2.1 LOCOMOTION TASKS\nFigure 3 depicts locomotion tasks’ environments.\nA.2.2 TRAINING SETTING\nFor BC, we split a given dataset of state-action pairs into 70% training data and 30% validation data. The policy is trained with supervised learning, with minibatches of 64 examples, until validation error stops decreasing.\nFor GCL, we use neural network to represent the reward function, and the learning rate for policy update is 0.0001, the learning rate for reward update is 0.001, with minibatches of 64 examples.\nFor GAIL, the discriminator networks used two hidden layers of 100 units each, with tanh nonlinearities as its architecture. And the learning rate for policy update is 0.0001, and the learning rate for discriminator update is 0.001, with minibatches of 64 examples.\nFor our EB-GAIL, the discriminator (auto-encoder) networks used four hidden layers of 100 units each, with tanh nonlinearities as its architecture. And the learning rate for policy update is 0.0001, the learning rate for discriminator (auto-encoder) update is 0.001, and margin is 5, with minibatches of 64 examples." } ]
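Building on the EnergyDiscriminator sketched after the abstract, the loop below is a hedged illustration of how Algorithm 1 (EB-GAIL) alternates a discriminator update with a policy update that uses the negative energy as the reward. The samplers `rollout_pairs` and `sample_expert_pairs`, the `policy_step` callable (TRPO in the paper), and the use of Adam for the discriminator are placeholders/assumptions; batch size 64, margin 5, and the 0.001 discriminator learning rate mirror the training setting above.

```python
# Hedged sketch of the EB-GAIL training loop (Algorithm 1); samplers and the policy
# optimizer are placeholders, not the authors' implementation.
import torch

def train_eb_gail(policy, policy_step, D, sample_expert_pairs, rollout_pairs,
                  iters=1000, batch=64, margin=5.0, d_lr=1e-3):
    opt_D = torch.optim.Adam(D.parameters(), lr=d_lr)
    for _ in range(iters):
        sa_pi = rollout_pairs(policy, batch)     # chi_i ~ tau_i, current policy roll-outs
        sa_ex = sample_expert_pairs(batch)       # chi_E ~ pi_E, expert demonstrations
        # discriminator step: decrease D(chi_E) + [margin - D(chi_i)]^+
        loss_D = D.energy(sa_ex).mean() + torch.relu(margin - D.energy(sa_pi)).mean()
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
        # policy step: maximize the return under the surrogate reward r = -D(s, a)
        with torch.no_grad():
            rewards = -D.energy(sa_pi)
        policy_step(policy, sa_pi, rewards)
    return policy
```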
2,020
null
SP:6b06c93bb2394dae7e4d6e76a8c134b6808a46e9
[ "This paper considers solving rank-constrained convex optimization. This is a fairly general problem that contains several special cases such as matrix completion and robust PCA. This paper presents a local search approach along with an interesting theoretical analysis of their approach. Furthermore, this paper provided extensive simulations to validate their approach. Overall, the paper provided solid justification for their approach." ]
We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving min_{rank(A) ≤ r∗} R(A) given a convex function R : R^{m×n} → R and a parameter r∗. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to A and (b) enforcing the rank constraint on A. We refine and improve the theoretical analysis of Shalev-Shwartz et al. (2011), and show that if the rank-restricted condition number of R is κ, a solution A with rank O(r∗ · min{κ log((R(0) − R(A∗))/ε), κ²}) and R(A) ≤ R(A∗) + ε can be recovered, where A∗ is the optimal solution. This significantly generalizes associated results on sparse convex optimization, as well as rank-constrained convex optimization for smooth functions. We then introduce new practical variants of these algorithms that have superior runtime and recover better solutions in practice. We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis.
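As a concrete illustration of the two repeated steps in the abstract, the sketch below (an assumption-laden illustration, not the paper's implementation) performs one round of the greedy/local-search scheme for the quadratic matrix-completion objective R(A) = ½ Σ_{(i,j)∈Ω}(M − A)²_ij discussed later in the paper: take the top singular pair of ∇R(A), optionally truncate A back to rank r−1 (the local-search variant), append the new rank-1 direction, and refit the columns of V by least squares, i.e. the cheaper inner problem (3). The function name `step`, the 0/1 `mask` encoding of Ω, and the use of dense NumPy SVD and least squares are illustrative choices.

```python
# Hedged sketch of one greedy / local-search round for rank-constrained matrix
# completion, R(A) = 0.5 * sum_{(i,j) in Omega} (M - A)_ij^2; illustrative only.
import numpy as np

def step(M, mask, U, V, r, local_search=False):
    A = U @ V.T
    G = -(mask * (M - A))                        # gradient of R at A (zero outside Omega)
    u_all, _, vt = np.linalg.svd(G)              # H1(grad): top singular pair of the gradient
    u, v = u_all[:, :1], vt[:1, :].T
    if local_search and U.shape[1] >= r:         # truncate UV^T to rank r-1 before inserting
        Ua, Sa, Vat = np.linalg.svd(A, full_matrices=False)
        U, V = Ua[:, :r-1] * Sa[:r-1], Vat[:r-1, :].T
    U, V = np.hstack([U, u]), np.hstack([V, v])  # append the new rank-1 direction
    # inner optimization (3): refit each column of V by least squares on its observed rows
    for j in range(M.shape[1]):
        rows = mask[:, j].astype(bool)
        if rows.any():
            sol, *_ = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)
            V[j] = sol
    return U, V
```

Iterating this routine with local_search=False grows the rank by one per round as in the greedy scheme, while local_search=True keeps the factorization at rank at most r as in the local search algorithm.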
[ { "affiliations": [], "name": "Kyriakos Axiotis" }, { "affiliations": [], "name": "Maxim Sviridenko" } ]
[ { "authors": [ "Kyriakos Axiotis", "Maxim Sviridenko" ], "title": "Sparse convex optimization via adaptively regularized hard thresholding", "venue": "arXiv preprint arXiv:2006.14571,", "year": 2020 }, { "authors": [ "Thierry Bouwmans", "Sajid Javed", "Hongyang Zhang", "Zhouchen Lin", "Ricardo Otazo" ], "title": "On the applications of robust pca in image and video processing", "venue": "Proceedings of the IEEE,", "year": 2018 }, { "authors": [ "Emmanuel J Candès", "Xiaodong Li", "Yi Ma", "John Wright" ], "title": "Robust principal component analysis", "venue": "Journal of the ACM (JACM),", "year": 2011 }, { "authors": [ "Dean Foster", "Howard Karloff", "Justin Thaler" ], "title": "Variable selection is hard", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "F Maxwell Harper", "Joseph A Konstan" ], "title": "The movielens datasets: History and context", "venue": "Acm transactions on interactive intelligent systems (tiis),", "year": 2015 }, { "authors": [ "Nicolas Hug" ], "title": "Surprise: A python library for recommender systems", "venue": "Journal of Open Source Software,", "year": 2020 }, { "authors": [ "Prateek Jain", "Ambuj Tewari", "Inderjit S Dhillon" ], "title": "Orthogonal matching pursuit with replacement", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Prateek Jain", "Ambuj Tewari", "Purushottam Kar" ], "title": "On iterative hard thresholding methods for highdimensional m-estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Yehuda Koren", "Robert Bell", "Chris Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": null, "year": 2009 }, { "authors": [ "Daniel D Lee", "H Sebastian Seung" ], "title": "Algorithms for non-negative matrix factorization", "venue": "In Advances in neural information processing systems,", "year": 2001 }, { "authors": [ "Zhouchen Lin", "Minming Chen", "Yi Ma" ], "title": "The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices", "venue": "arXiv preprint arXiv:1009.5055,", "year": 2010 }, { "authors": [ "Per-Gunnar Martinsson", "Vladimir Rokhlin", "Mark Tygert" ], "title": "A randomized algorithm for the decomposition of matrices", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Rahul Mazumder", "Trevor Hastie", "Robert Tibshirani" ], "title": "Spectral regularization algorithms for learning large incomplete matrices", "venue": "The Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Balas Kausik Natarajan" ], "title": "Sparse approximate solutions to linear systems", "venue": "SIAM journal on computing,", "year": 1995 }, { "authors": [ "Christopher C Paige", "Michael A Saunders" ], "title": "Lsqr: An algorithm for sparse linear equations and sparse least squares", "venue": "ACM Transactions on Mathematical Software (TOMS),", "year": 1982 }, { "authors": [ "Yagyensh Chandra Pati", "Ramin Rezaiifar" ], "title": "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition", "venue": "In Proceedings of 27th Asilomar conference on signals, systems and computers,", "year": 1993 }, { "authors": [ "Steffen Rendle", "Li Zhang", "Yehuda Koren" ], "title": "On the difficulty of evaluating baselines: A study on recommender systems", "venue": null, "year": 1905 }, { "authors": [ "Alex Rubinsteyn", "Sergey Feldman" ], "title": "fancyimpute: Version 
0.0.16, May 2016", "venue": "URL https: //doi.org/10.5281/zenodo.51773", "year": 2016 }, { "authors": [ "Shai Shalev-Shwartz", "Nathan Srebro", "Tong Zhang" ], "title": "Trading accuracy for sparsity in optimization problems with sparsity constraints", "venue": "SIAM Journal on Optimization,", "year": 2010 }, { "authors": [ "Shai Shalev-Shwartz", "Alon Gonen", "Ohad Shamir" ], "title": "Large-scale convex minimization with a lowrank constraint", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Nathan Srebro", "Jason Rennie", "Tommi Jaakkola" ], "title": "Maximum-margin matrix factorization", "venue": "Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "Arthur Szlam", "Yuval Kluger", "Mark Tygert" ], "title": "An implementation of a randomized algorithm for principal component analysis", "venue": "arXiv preprint arXiv:1412.3510,", "year": 2014 }, { "authors": [ "Andrew Tulloch" ], "title": "Fast randomized svd", "venue": "https://research.fb.com/blog/2014/09/ fast-randomized-svd/,", "year": 2014 }, { "authors": [ "Antoine Vacavant", "Thierry Chateau", "Alexis Wilhelm", "Laurent Lequièvre" ], "title": "A benchmark dataset for outdoor foreground/background extraction", "venue": "In Asian Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Tong Zhang" ], "title": "Sparse recovery with orthogonal matching pursuit under rip", "venue": "IEEE Transactions on Information Theory,", "year": 2011 } ]
[ { "heading": null, "text": "rank(A)≤r∗ R(A) given a convex function R : Rm×n →\nR and a parameter r∗. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to A and (b) enforcing the rank constraint on A. We refine and improve the theoretical analysis of Shalev-Shwartz et al. (2011), and show that if the rank-restricted condition number of R is κ, a solution A with rank O(r∗ · min{κ log R(0)−R(A ∗) , κ\n2}) and R(A) ≤ R(A∗) + can be recovered, where A∗ is the optimal solution. This significantly generalizes associated results on sparse convex optimization, as well as rank-constrained convex optimization for smooth functions. We then introduce new practical variants of these algorithms that have superior runtime and recover better solutions in practice. We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis." }, { "heading": "1 INTRODUCTION", "text": "Given a real-valued convex function R : Rm×n → R on real matrices and a parameter r∗ ∈ N, the rank-constrained convex optimization problem consists of finding a matrix A ∈ Rm×n that minimizes R(A) among all matrices of rank at most r∗:\nmin rank(A)≤r∗ R(A) (1)\nEven though R is convex, the rank constraint makes this problem non-convex. Furthermore, it is known that this problem is NP-hard and even hard to approximate (Natarajan (1995); Foster et al. (2015)).\nIn this work, we propose efficient greedy and local search algorithms for this problem. Our contribution is twofold:\n1. We provide theoretical analyses that bound the rank and objective value of the solutions returned by the two algorithms in terms of the rank-restricted condition number, which is the natural generalization of the condition number for low-rank subspaces. The results are significantly stronger than previous known bounds for this problem.\n2. We experimentally demonstrate that, after careful performance adjustments, the proposed general-purpose greedy and local search algorithms have superior performance to other methods, even for some of those that are tailored to a particular problem. Thus, these algorithms can be considered as a general tool for rank-constrained convex optimization and a viable alternative to methods that use convex relaxations or alternating minimization.\nThe rank-restricted condition number Similarly to the work in sparse convex optimization, a restricted condition number quantity has been introduced as a reasonable assumption on R. If we let ρ+r be the maximum smoothness bound and ρ − r be the minimum strong convexity bound only along rank-r directions of R (these are called rank-restricted smoothness and strong convexity respectively), the rank-restricted condition number is defined as κr =\nρ+r ρ−r . If this quantity is bounded,\none can efficiently find a solutionA withR(A) ≤ R(A∗)+ and rank r = O(r∗ ·κr+r∗ R(0) ) using a greedy algorithm (Shalev-Shwartz et al. (2011)). However, this is not an ideal bound since the rank scales linearly with R(0) , which can be particularly high in practice. Inspired by the analogous literature on sparse convex optimization by Natarajan (1995); Shalev-Shwartz et al. (2010); Zhang (2011); Jain et al. (2014) and more recently Axiotis & Sviridenko (2020), one would hope to achieve a logarithmic dependence or no dependence at all on R(0) . In this paper we achieve this goal by providing an improved analysis showing that the greedy algorithm of Shalev-Shwartz et al. 
(2011) in fact returns a matrix of rank of r = O(r∗ · κr+r∗ log R(0) ). We also provide a new local search algorithm together with an analysis guaranteeing a rank of r = O(r∗ · κ2r+r∗). Apart from significantly improving upon previous work on rank-restricted convex optimization, these results directly generalize a lot of work in sparse convex optimization, e.g. Natarajan (1995); Shalev-Shwartz et al. (2010); Jain et al. (2014). Our algorithms and theorem statements can be found in Section 2.\nRuntime improvements Even though the rank bound guaranteed by our theoretical analyses is adequate, the algorithm runtimes leave much to be desired. In particular, both the greedy algorithm of Shalev-Shwartz et al. (2011) and our local search algorithm have to solve an optimization problem in each iteration in order to find the best possible linear combination of features added so far. Even for the case that R(A) = 12 ∑ (i,j)∈Ω (M − A)2ij , this requires solving a least squares problem on |Ω| examples and r2 variables. For practical implementations of these algorithms, we circumvent this issue by solving a related optimization problem that is usually much smaller. This instead requires solving n least squares problems with total number of examples |Ω|, each on r variables. This not only reduces the size of the problem by a factor of r, but also allows for a straightforward distributed implementation. Interestingly, our theoretical analyses still hold for these variants. We propose an additional heuristic that reduces the runtime even more drastically, which is to only run a few (less than 10) iterations of the algorithm used for solving the inner optimization problem. Experimental results show that this modification not only does not significantly worsen results, but for machine learning applications also acts as a regularization method that can dramatically improve generalization. These matters, as well as additional improvements for making the local search algorithm more practical, are addressed in Section 2.3.\nRoadmap In Section 2, we provide the descriptions and theoretical results for the algorithms used, along with several modifications to boost performance. In Section 3, we evaluate the proposed greedy and local search algorithms on optimization problems like robust PCA. Then, in Section 4 we evaluate their generalization performance in machine learning problems like matrix completion." }, { "heading": "2 ALGORITHMS & THEORETICAL GUARANTEES", "text": "In Sections 2.1 and 2.2 we state and provide theoretical performance guarantees for the basic greedy and local search algorithms respectively. Then in Section 2.3 we state the algorithmic adjustments that we propose in order to make the algorithms efficient in terms of runtime and generalization performance. A discussion regarding the tightness of the theoretical analysis is deferred to Appendix A.4.\nWhen the dimension is clear from context, we will denote the all-ones vector by 1, and the vector that is 0 everywhere and 1 at position i by 1i. Given a matrix A, we denote by im(A) its column span. One notion that we will find useful is that of singular value thresholding. More specifically,\ngiven a rank-k matrix A ∈ Rm×n with SVD k∑ i=1 σiu ivi> such that σ1 ≥ · · · ≥ σk, as well as an\ninteger parameter r ≥ 1, we define Hr(A) = r∑ i=1 σiu ivi> to be the operator that truncates to the r highest singular values of A." }, { "heading": "2.1 GREEDY", "text": "Algorithm 1 (Greedy) was first introduced in Shalev-Shwartz et al. 
(2011) as the GECO algorithm. It works by iteratively adding a rank-1 matrix to the current solution. This matrix is chosen as the\nrank-1 matrix that best approximates the gradient, i.e. the pair of singular vectors corresponding to the maximum singular value of the gradient. In each iteration, an additional procedure is run to optimize the combination of previously chosen singular vectors.\nIn Shalev-Shwartz et al. (2011) guarantee on the rank of the solution returned by the algorithm is r∗κr+r∗ R(0) . The main bottleneck in order to improve on the R(0) factor is the fact that the analysis is done in terms of the squared nuclear norm of the optimal solution. As the worst-case discrepancy between the squared nuclear norm and the rank is R(0)/ , their bounds inherit this factor. Our analysis works directly with the rank, in the spirit of sparse optimization results (e.g. Shalev-Shwartz et al. (2011); Jain et al. (2014); Axiotis & Sviridenko (2020)). A challenge compared to these works is the need for a suitable notion of “intersection” between two sets of vectors. The main technical contribution of this work is to show that the orthogonal projection of one set of vectors into the span of the other is such a notion, and, based on this, to define a decomposition of the optimal solution that is used in the analysis.\nAlgorithm 1 Greedy 1: procedure GREEDY(r ∈ N : target rank) 2: function to be minimized R : Rm×n → R 3: U ∈ Rm×0 . Initially rank is zero 4: V ∈ Rn×0 5: for t = 0 . . . r − 1 do 6: σuv> ← H1(∇R(UV >)) . Max singular value σ and corresp. singular vectors u, v 7: U ← (U u) . Append new vectors as columns 8: V ← (V v) 9: U, V ← OPTIMIZE(U, V ) 10: return UV > 11: procedure OPTIMIZE(U ∈ Rm×r, V ∈ Rn×r) 12: X ← arg min\nX∈Rr×r R(UXV >)\n13: return UX, V\nTheorem 2.1 (Algorithm 1 (greedy) analysis). Let A∗ be any fixed optimal solution of (1) for some function R and rank bound r∗, and let > 0 be an error parameter. For any integer r ≥ 2r∗ · κr+r∗ log R(0)−R(A∗) , if we let A = GREEDY(r) be the solution returned by Algorithm 1, then R(A) ≤ R(A∗) + . The number of iterations is r.\nThe proof of Theorem 2.1 can be found in Appendix A.2." }, { "heading": "2.2 LOCAL SEARCH", "text": "One drawback of Algorithm 1 is that it increases the rank in each iteration. Algorithm 2 is a modification of Algorithm 1, in which the rank is truncated in each iteration. The advantage of Algorithm 2 compared to Algorithm 1 is that it is able to make progress without increasing the rank of A, while Algorithm 1 necessarily increases the rank in each iteration. More specifically, because of the greedy nature of Algorithm 1, some rank-1 components that have been added to A might become obsolete or have reduced benefit after a number of iterations. Algorithm 2 is able to identify such candidates and remove them, thus allowing it to continue making progress.\nTheorem 2.2 (Algorithm 2 (local search) analysis). Let A∗ be any fixed optimal solution of (1) for some function R and rank bound r∗, and let > 0 be an error parameter. For any integer r ≥ r∗ · (1 + 8κ2r+r∗), if we let A = LOCAL SEARCH(r) be the solution returned by Algorithm 2, then R(A) ≤ R(A∗) + . The number of iterations is O ( r∗κr+r∗ log R(0)−R(A∗) ) .\nThe proof of Theorem 2.2 can be found in Appendix A.3.\nAlgorithm 2 Local Search 1: procedure LOCAL SEARCH(r ∈ N : target rank) 2: function to be minimized R : Rm×n → R 3: U ← 0m×r . Initialize with all-zero solution 4: V ← 0n×r 5: for t = 0 . . . L− 1 do . Run for L iterations 6: σuv> ← H1(∇R(UV >)) . 
Max singular value σ and corresp. singular vectors u, v 7: U, V ← TRUNCATE(U, V ) . Reduce rank of UV > by one 8: U ← (U u) . Append new vectors as columns 9: V ← (V v) 10: U, V ← OPTIMIZE(U, V ) 11: return UV > 12: procedure TRUNCATE(U ∈ Rm×r, V ∈ Rn×r) 13: UΣV > ← SVD(Hr−1(UV >)) . Keep all but minimum singular value 14: return UΣ, V" }, { "heading": "2.3 ALGORITHMIC ADJUSTMENTS", "text": "Inner optimization problem The inner optimization problem that is used in both greedy and local search is:\nmin X∈Rr×r\nR(UXV >) . (2)\nIt essentially finds the choice of matrices U ′ and V ′, with columns in the column span of U and V respectively, that minimizes R(U ′V ′>). We, however, consider the following problem instead:\nmin V ∈Rn×r\nR(UV >) . (3)\nNote that the solution recovered from (3) will never have worse objective value than the one recovered from (2), and that nothing in the analysis of the algorithms breaks. Importantly, (3) can usually be solved much more efficiently than (2). As an example, consider the following objective that appears in matrix completion: R(A) = 12 ∑ (i,j)∈Ω (M − A)2ij for some Ω ⊆ [m] × [n]. If we let ΠΩ(·) be an operator that zeroes out all positions in the matrix that are not in Ω, we have ∇R(A) = −ΠΩ(M −A). The optimality condition of (2) now is U>ΠΩ(M −UXV >)V = 0 and that of (3) is U>ΠΩ(M − UV >) = 0. The former corresponds to a least squares linear regression problem with |Ω| examples and r2 variables, while the latter can be decomposed into n independent\nsystems U> ( ∑ i:(i,j)∈Ω 1i1 > i ) UV j = U>ΠΩ (M1j), where the variable is V j which is the j-th column of V . The j-th of these systems corresponds to a least squares linear regression problem with |{i : (i, j) ∈ Ω}| examples and r variables. Note that the total number of examples in all systems is\n∑ j∈[n] |{i : (i, j) ∈ Ω}| = |Ω|. The choice of V here as the variable to be optimized is\narbitrary. In particular, as can be seen in Algorithm 3, in practice we alternate between optimizing U and V in each iteration. It is worthy of mention that the OPTIMIZE FAST procedure is basically the same as one step of the popular alternating minimization procedure for solving low-rank problems. As a matter of fact, when our proposed algorithms are viewed from this lens, they can be seen as alternating minimization interleaved with rank-1 insertion and/or removal steps.\nSingular value decomposition As modern methods for computing the top entries of a singular value decomposition scale very well even for large sparse matrices (Martinsson et al. (2011); Szlam et al. (2014); Tulloch (2014)), the “insertion” step of greedy and local search, in which the top entry of the SVD of the gradient is determined, is quite fast in practice. However, these methods are not suited for computing the smallest singular values and corresponding singular vectors, a step required for the local search algorithm that we propose. Therefore, in our practical implementations we opt to perfom the alternative step of directly removing one pair of vectors from the representation UV >. A simple approach is to go over all r possible removals and pick the one that increases the\nAlgorithm 3 Fast inner Optimization\n1: procedure OPTIMIZE FAST(U ∈ Rm×r, V ∈ Rn×r, t ∈ N : iteration index of algorithm) 2: if t mod 2 = 0 then 3: X ← arg min\nX∈Rm×r R(XV >)\n4: return X,V 5: else 6: X ← arg min\nX∈Rn×r R(UX>)\n7: return U,X\nobjective by the least amount. A variation of this approach has been used by Shalev-Shwartz et al. (2011). 
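As an illustration, a minimal NumPy sketch of this exhaustive removal step could look as follows; the function name and the generic objective callable R are illustrative assumptions rather than part of any released implementation.

import numpy as np

def truncate_exhaustive(U, V, R):
    # Try dropping each column pair (U[:, i], V[:, i]) and keep the removal that
    # leaves the smallest objective value R(U V^T). This is the naive O(r) search
    # described above, requiring one evaluation of R per candidate.
    r = U.shape[1]
    best_i, best_val = 0, np.inf
    for i in range(r):
        keep = [j for j in range(r) if j != i]
        val = R(U[:, keep] @ V[:, keep].T)
        if val < best_val:
            best_i, best_val = i, val
    keep = [j for j in range(r) if j != best_i]
    return U[:, keep], V[:, keep]

Each candidate removal costs a full evaluation of R, which motivates the cheaper selection rule described next.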
However, a much faster approach is to just pick the pair of vectors U1i, V 1i that minimizes ‖U1i‖2‖V 1i‖2. This is the approach that we use, as can be seen in Algorithm 4.\nAlgorithm 4 Fast rank reduction\n1: procedure TRUNCATE FAST(U ∈ Rm×r, V ∈ Rn×r) 2: i← arg min\ni∈[r] ‖U1i‖2‖V 1i‖2 3: return ( U[m],[1,i−1] U[m],[i+1,r] ) , ( V[n],[1,i−1] V[n],[i+1,r] ) . Remove column i\nAfter the previous discussion, we are ready to state the fast versions of Algorithm 1 and Algorithm 2 that we use for our experiments. These are Algorithm 2.3 and Algorithm 5. Notice that we initialize Algorithm 5 with the solution of Algorithm 2.3 and we run it until the value ofR(·) stops decreasing rather than for a fixed number of iterations.\nAlgorithm 2.3 (Fast Greedy). The Fast Greedy algorithm is defined identically as Algorithm 1, with the only difference that it uses the OPTIMIZE FAST routine as opposed to the OPTIMIZE routine.\nAlgorithm 5 Fast Local Search 1: procedure FAST LOCAL SEARCH(r ∈ N : target rank) 2: function to be minimized R : Rm×n → R 3: U, V ← solution returned by FAST GREEDY(r) 4: do 5: Uprev, Vprev ← U, V 6: σuv> ← H1(∇R(UV >)) . Max singular value σ and corresp. singular vectors u, v 7: U, V ← TRUNCATE FAST(U, V ) . Reduce rank of UV > by one 8: U ← (U u) . Append new vectors as columns 9: V ← (V v) 10: U, V ← OPTIMIZE FAST(U, V, t) 11: while R(UV >) < R(UprevV >prev) 12: return UprevV >prev" }, { "heading": "3 OPTIMIZATION APPLICATIONS", "text": "An immediate application of the above algorithms is in the problem of low rank matrix recovery. Given any convex distance measure between matrices d : Rm×n × Rm×n → R≥0, the goal is to find a low-rank matrix A that matches a target matrix M as well as possible in terms of d:\nmin rank(A)≤r∗ d(M,A) This problem captures a lot of different applications, some of which we go over in the following sections." }, { "heading": "3.1 LOW-RANK APPROXIMATION ON OBSERVED SET", "text": "A particular case of interest is when d(M,A) is the Frobenius norm of M − A, but only applied to entries belonging to some set Ω. In other words, d(M,A) = 12‖ΠΩ(M −A)‖ 2 F . We have compared our Fast Greedy and Fast Local Search algorithms with the SoftImpute algorithm of Mazumder et al. (2010) as implemented by Rubinsteyn & Feldman (2016), on the same experiments as in Mazumder et al. (2010). We have solved the inner optimization problem required by our algorithms by the LSQR algorithm Paige & Saunders (1982). More specifically, M = UV > + η ∈ R100×100, where η is some noise vector. We let every entry of U, V, η be i.i.d. normal with mean 0 and the entries of Ω are chosen i.i.d. uniformly at random over the set [100] × [100]. The experiments have three parameters: The true rank r∗ (of UV >), the percentage of observed entries p = |Ω|/104, and the signal-to-noise ratio SNR. We measure the normalized MSE, i.e. ‖ΠΩ(M − A)‖2F /‖ΠΩ(M)‖2F . The results can be seen in Figure 1, where it is illustrated that Fast Local Search sometimes returns significantly more accurate and lower-rank solutions than Fast Greedy, and Fast Greedy generally returns significantly more accurate and lower-rank solutions than SoftImpute." }, { "heading": "3.2 ROBUST PRINCIPAL COMPONENT ANALYSIS (RPCA)", "text": "The robust PCA paradigm asks one to decompose a given matrix M as L + S, where L is a lowrank matrix and S is a sparse matrix. This is useful for applications with outliers where directly computing the principal components of M is significantly affected by them. 
For a comprehensive survey on Robust PCA survey one can look at Bouwmans et al. (2018). The following optimization problem encodes the above-stated requirements:\nmin rank(L)≤r∗ ‖M − L‖0 (4)\nwhere ‖X‖0 is the sparsity (i.e. number of non-zeros) of X . As neither the rank constraint or the `0 function are convex, Candès et al. (2011) replaced them by their usual convex relaxations, i.e. the nuclear norm ‖ · ‖∗ and `1 norm respectively. However, we opt to only relax the `0 function but not the rank constraint, leaving us with the problem:\nmin rank(L)≤r∗ ‖M − L‖1 (5)\nIn order to make the objective differentiable and thus more well-behaved, we further replace the `1 norm by the Huber loss Hδ(x) = { x2/2 if |x| ≤ δ δ|x| − δ2/2 otherwise , thus getting: minrank(L)≤r∗ ∑ ij Hδ(M − L)ij . This is a problem on which we can directly apply our algorithms. We solve the inner optimization problem by applying 10 L-BFGS iterations.\nIn Figure 2 one can see an example of foreground-background separation from video using robust PCA. The video is from the BMC 2012 dataset Vacavant et al. (2012). In this problem, the lowrank part corresponds to the background and the sparse part to the foreground. We compare three\nalgorithms: Our Fast Greedy algorithm, standard PCA with 1 component (the choice of 1 was picked to get the best outcome), and the standard Principal Component Pursuit (PCP) algorithm (Candès et al. (2011)), as implemented in Lin et al. (2010), where we tuned the regularization parameter λ to achieve the best result. We find that Fast Greedy has the best performance out of the three algorithms in this sample task." }, { "heading": "4 MACHINE LEARNING APPLICATIONS", "text": "" }, { "heading": "4.1 REGULARIZATION TECHNIQUES", "text": "In the previous section we showed that our proposed algorithms bring down different optimization objectives aggressively. However, in applications where the goal is to obtain a low generalization error, regularization is needed. We considered two different kinds of regularization. The first method is to run the inner optimization algorithm for less iterations, usually 2-3. Usually this is straightforward since an iterative method is used. For example, in the caseR(A) = 12‖ΠΩ(M−A)‖ 2 F the inner optimization is a least squares linear regression problem that we solve using the LSQR algorithm. The second one is to add an `2 regularizer to the objective function. However, this option did not provide a substantial performance boost in our experiments, and so we have not implemented it." }, { "heading": "4.2 MATRIX COMPLETION WITH RANDOM NOISE", "text": "In this section we evaluate our algorithms on the task of recovering a low rank matrix UV > after observing ΠΩ(UV > + η), i.e. a fraction of its entries with added noise. As in Section 3.1, we use the setting of Mazumder et al. (2010) and compare with the SoftImpute method. The evaluation metric is the normalized MSE, defined as (\n∑ (i,j)/∈Ω (UV > −A)2ij)/( ∑ (i,j)/∈Ω (UV >)2ij), where A is the\npredicted matrix andUV > the true low rank matrix. A few example plots can be seen in Figure 3 and a table of results in Table 1. We have implemented the Fast Greedy and Fast Local Search algorithms with 3 inner optimization iterations. In the first few iterations there is a spike in the relative MSE of the algorithms that use the OPTIMIZE FAST routine. We attribute this to the aggressive alternating minimization steps of this procedure and conjecture that adding a regularization term to the objective might smoothen the spike. 
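To make this concrete, the sketch below shows the per-column least-squares update of OPTIMIZE FAST for the observed-entries objective R(A) = 1/2 ||Pi_Omega(M - A)||_F^2, with an optional ridge term indicating where such a regularizer could enter. The closed-form normal-equations solve and the ridge parameter are illustrative assumptions; the experiments reported here use a few LSQR iterations and no explicit regularizer.

import numpy as np

def optimize_fast_V(U, M, mask, ridge=0.0):
    # With U fixed, the r coefficients that reconstruct column j of U V^T solve an
    # independent least-squares problem over the observed rows of that column.
    # `ridge` > 0 adds the conjectured l2 penalty; ridge = 0 is the plain update.
    r = U.shape[1]
    n = M.shape[1]
    V = np.zeros((n, r))
    for j in range(n):
        rows = np.nonzero(mask[:, j])[0]          # observed rows of column j
        if rows.size == 0:
            continue
        Uj = U[rows]                              # (|rows|, r) design matrix
        b = M[rows, j]                            # observed entries of column j
        G = Uj.T @ Uj + ridge * np.eye(r)
        V[j] = np.linalg.lstsq(G, Uj.T @ b, rcond=None)[0]
    return V

The analogous update for U (with V fixed) is obtained by passing the transposed matrix and mask, matching the alternation in OPTIMIZE FAST.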
However, the Fast Local Search algorithm still gives the best overall performance in terms of how well it approximates the true low rank matrix UV >, and in particular with a very small rank—practically the same as the true underlying rank." }, { "heading": "4.3 RECOMMENDER SYSTEMS", "text": "In this section we compare our algorithms on the task of movie recommendation on the Movielens datasets Harper & Konstan (2015). In order to evaluate the algorithms, we perform random 80%- 20% train-test splits that are the same for all algorithms and measure the mean RMSE in the test set. If we let Ω ⊆ [m] × [n] be the set of user-movie pairs in the training set, we assume that the true user-movie matrix is low rank, and thus pose (1) with R(A) = 12‖ΠΩ(M − A)‖ 2 F . We make the following slight modification in order to take into account the range of the ratings [1, 5]: We clip the entries of A between 1 and 5 when computing ∇R(A) in Algorithm 2.3 and Algorithm 5. In other words, instead of ΠΩ(A −M) we compute the gradient as ΠΩ(clip(A, 1, 5) −M). This is similar to replacing our objective by a Huber loss, with the difference that we only do so in the steps that we mentioned and not the inner optimization step, mainly for runtime efficiency reasons.\nThe results can be seen in Table 2. We do not compare with Fast Local Search, as we found that it only provides an advantage for small ranks (< 30), and otherwise matches Fast Greedy. For the inner optimization steps we have used the LSQR algorithm with 2 iterations in the 100K and 1M datasets, and with 3 iterations in the 10M dataset. Note that even though the SVD algorithm by Koren et al. (2009) as implemented by Hug (2020) (with no user/movie bias terms) is a highly tuned algorithm for recommender systems that was one of the top solutions in the famous Netflix prize, it has comparable performance to our general-purpose Algorithm 2.3.\nFinally, Table 3 demonstrates the speedup achieved by our algorithms over the basic greedy implementation. It should be noted that the speedup compared to the basic greedy of Shalev-Shwartz et al. (2011) (Algorithm 1) is larger as rank increases, since the fast algorithms scale linearly with rank, but the basic greedy scales quadratically.\nIt is important to note that our goal here is not to be competitive with the best known algorithms for matrix completion, but rather to propose a general yet practically applicable method for rankconstrained convex optimization. For a recent survey on the best performing algorithms in the Movielens datasets see Rendle et al. (2019). It should be noted that a lot of these algorithms have significant performance boost compared to our methods because they use additional features (meta information about each user, movie, timestamp of a rating, etc.) or stronger models (user/movie biases, ”implicit” ratings). A runtime comparison with these recent approches is an interesting avenue for future work. As a rule of thumb, however, Fast Greedy has roughly the same runtime as SVD (Koren et al. (2009)) in each iteration, i.e. O(|Ω|r), where Ω is the set of observable elements and r is the rank. As some better performing approaches have been reported to be much slower than SVD (e.g. SVD++ is reported to be 50-100x slower than SVD in the Movielens 100K and 1M datasets (Hug (2020)), this might also suggest a runtime advantage of our approach compared to some better performing methods." 
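For completeness, the clipped gradient used in the recommender-system experiments above can be sketched as follows; the helper name is illustrative, and the gradient is materialized densely only for clarity (in practice it would be formed only on the entries in Omega).

import numpy as np

def clipped_masked_gradient(U, V, M, mask, lo=1.0, hi=5.0):
    # Gradient used in the rank-1 insertion steps for bounded ratings:
    # Pi_Omega(clip(U V^T, lo, hi) - M), with unobserved entries zeroed out.
    A = U @ V.T
    return (np.clip(A, lo, hi) - M) * mask

The top singular vector pair of this matrix gives the rank-1 direction appended in each iteration of Fast Greedy.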
}, { "heading": "5 CONCLUSIONS", "text": "We presented simple algorithms with strong theoretical guarantees for optimizing a convex function under a rank constraint. Although the basic versions of these algorithms have appeared before, through a series of careful runtime, optimization, and generalization performance improvements that we introduced, we have managed to reshape the performance of these algorithms in all fronts. Via our experimental validation on a host of practical problems such as low-rank matrix recovery with missing data, robust principal component analysis, and recommender systems, we have shown that the performance in terms of the solution quality matches or exceeds other widely used and even specialized solutions, thus making the argument that our Fast Greedy and Fast Local Search routines can be regarded as strong and practical general purpose tools for rank-constrained convex optimization. Interesting directions for further research include the exploration of different kinds of regularization and tuning for machine learning applications, as well as a competitive implementation and extensive runtime comparison of our algorithms." }, { "heading": "A APPENDIX", "text": "A.1 PRELIMINARIES AND NOTATION\nGiven an positive integer k, we denote [k] = {1, 2, . . . , k}. Given a matrix A, we denote by ‖A‖F its Frobenius norm, i.e. the `2 norm of the entries of A (or equivalently of the singular values of A). The following lemma is a simple corollary of the definition of the Frobenius norm: Lemma A.1. Given two matrices A ∈ Rm×n,B ∈ Rm×n, we have ‖A + B‖2F ≤ 2 ( ‖A‖2F + ‖B‖2F ) .\nProof. ‖A+B‖2F = ∑ ij (A+B)2ij ≤ 2 ∑ ij (A2ij +B 2 ij) = 2(‖A‖2F + ‖B‖2F )\nDefinition A.2 (Rank-restricted smoothness, strong convexity, condition number). Given a convex function R ∈ Rm×n → R and an integer parameter r, the rank-restricted smoothness of R at rank r is the minimum constant ρ+r ≥ 0 such that for any two matrices A ∈ Rm×n, B ∈ Rm×n such that rank(A−B) ≤ r, we have\nR(B) ≤ R(A) + 〈∇R(A), B −A〉+ ρ + r\n2 ‖B −A‖2F .\nSimilarly, the rank-restricted strong convexity of R at rank r is the maximum constant ρ−r ≥ 0 such that for any two matrices A ∈ Rm×n, B ∈ Rm×n such that rank(A−B) ≤ r, we have\nR(B) ≥ R(A) + 〈∇R(A), B −A〉+ ρ − r\n2 ‖B −A‖2F .\nGiven that ρ+r , ρ − r exist and are nonzero, the rank-restricted condition number of R at rank r is then defined as\nκr = ρ+r ρ−r\nNote that ρ+r is increasing and ρ − r is decreasing in r. Therefore, even though our bounds are proven in terms of the constants ρ + 1\nρ−r and ρ\n+ 2\nρ−r , these quantities are always at most ρ\n+ r\nρ−r = κr as long as r ≥ 2,\nand so they directly imply the same bounds in terms of the constant κr.\nDefinition A.3 (Spectral norm). Given a matrix A ∈ Rm×n, we denote its spectral norm by ‖A‖2. The spectral norm is defined as\n‖A‖2 = max x∈Rn ‖Ax‖2 ‖x‖2 ,\nDefinition A.4 (Singular value thresholding operator). Given a matrix A ∈ Rm×n of rank k, a singular value decomposition A = UΣV > such that Σ11 ≥ Σ22 ≥ · · · ≥ Σkk, and an integer 1 ≤ r ≤ k, we define Hr(A) = UΣ′V >, here Σ′ is a diagonal matrix with\nΣ′ii = { Σii if i ≤ r 0 otherwise" }, { "heading": "In other words, Hr(·) is an operator that eliminates all but the top r highest singular values of a matrix.", "text": "Lemma A.5 (Weyl’s inequality). For any matrix A and integer i ≥ 1, let σi(A) be the i-th largest singular value ofA or 0 if i > rank(A). 
Then, for any two matricesA,B and integers i ≥ 1, j ≥ 1:\nσi+j−1(A+B) ≤ σi(A) + σj(B)\nA proof of the previous fact can be found e.g. in Fisk (1996). Lemma A.6 (Hr(·) optimization problem). Let A ∈ Rm×n be a rank-k matrix and r ∈ [k] be an integer parameter. Then M = 1λHr(A) is an optimal solution to the following optimization problem:\nmax rank(M)≤r {〈A,M〉 − λ 2 ‖M‖2F } (6)\nProof. Let UΣV > = ∑ i ΣiiUiV > i be a singular value decomposition of A. We note that (6) is equivalent to\nmin rank(M)≤r\n‖A− λM‖2F := f(M) (7)\nNow, note that f( 1λHr(A)) = ‖A−Hr(A)‖ 2 F = k∑ i=r+1 Σ2ii. On the other hand, by applying Weyl’s inequality (Lemma A.5) for j = r + 1,\nf(M) = ‖A− λM‖2F = k+r∑ i=1 σ2i (A− λM) ≥ k+r∑ i=1 (σi+r(A)− σr+1(λM))2 = k∑ i=r+1 Σ2ii ,\nwhere the last equality follows from the fact that rank(A) = k and rank(M) ≤ r. Therefore, M = 1λHr(A) minimizes (7) and thus maximizes (6).\nA.2 PROOF OF THEOREM 2.1 (GREEDY)\nWe will start with the following simple lemma about the Frobenius norm of a sum of matrices with orthogonal columns or rows: Lemma A.7. Let U ∈ Rm×r, V ∈ Rn×r, X ∈ Rm×r, Y ∈ Rn×r be such that the columns of U are orthogonal to the columns of X or the columns of V are orthogonal to the columns of Y . Then ‖UV > +XY >‖2F = ‖UV >‖2F + ‖XY >‖2F .\nProof. If the columns of U are orthogonal to those of X , then U>X = 0 and if the columns of V are orthogonal to those of Y , then Y >V = 0. Therefore in any case 〈UV >, XY >〉 = Tr(V U>XY >) = Tr(U>XY >V ) = 0, implying\n‖UV > +XY >‖2F = ‖UV >‖2F + ‖XY >‖2F + 2〈UV >, XY >〉 = ‖UV >‖2F + ‖XY >‖2F\nAdditionally, we have the following lemma regarding the optimality conditions of (2):\nLemma A.8. Let A = UXV > where U ∈ Rm×r, X ∈ Rr×r, and V ∈ Rn×r, such that X is the optimal solution to (2). Then for any u ∈ im(U) and v ∈ im(V ) we have that 〈∇R(A), uv>〉 = 0.\nProof. By the optimality condition of 2, we have that\nU>∇R(A)V = 0\nNow, for any u = Ux and v = V y we have\n〈∇R(A), uv>〉 = u>∇R(A)v = x>U>∇R(A)V y = 0\nWe are now ready for the proof of Theorem 2.1.\nProof. Let At−1 be the current solution UV > before iteration t− 1 ≥ 0. Let u ∈ Rm and v ∈ Rm be left and right singular vectors of matrix ∇R(A), i.e. unit vectors maximizing |〈∇R(A), uv>〉|. Let Bt = {B|B = At−1 + ηuvT , η ∈ R}. By smoothness we have\nR(At−1)−R(At) ≥ max B∈Bt {R(At−1)−R(B)}\n≥ max B∈Bt\n{ −〈∇R(At−1), B −At−1〉 −\nρ+1 2 ‖B −At−1‖2F } ≥ max\nη\n{ η〈∇R(At−1), uv>〉 − η2\nρ+1 2 } = max\nη\n{ η‖∇R(At−1)‖2 − η2\nρ+1 2 } = ‖∇R(At−1)‖22\n2ρ+1\nwhere ‖ · ‖2 is the spectral norm (i.e. maximum magnitude of a singular value). On the other hand, by strong convexity and noting that\nrank(A∗ −At−1) ≤ rank(A∗) + rank(At−1) ≤ r∗ + r ,\nR(A∗)−R(At−1) ≥ 〈∇R(At−1), A∗ −At−1〉+ ρ−r+r∗\n2 ‖A∗ −At−1‖2F . (8)\nLet At−1 = UV > and A∗ = U∗V ∗>. We let Πim(U) = U(U>U)+U> and Πim(V ) = V (V >V )+V > denote the orthogonal projections onto the images of U and V respectively. We now write\nA∗ = U∗V ∗> = (U1 + U2)(V 1 + V 2)> = U1V 1> + U1V 2> + U2V ∗>\nwhere U1 = Πim(U)U∗ is a matrix where every column of U∗ is replaced by its projection on im(U) and U2 = U∗−U1 and similarly V 1 = Πim(V )V ∗ is a matrix where every column of V ∗ is replaced by its projection on im(V ) and V 2 = V ∗− V 1. By setting U ′ = (−U | U1) and V ′ = (V | V 1) we can write\nA∗ −At−1 = U ′V ′> + U1V 2> + U2V ∗>\nwhere im(U ′) = im(U) and im(V ′) = im(V ). Also, note that\nrank(U1V 2>) ≤ rank(V 2) ≤ rank(V ∗) = rank(A∗) ≤ r∗\nand similarly rank(U2V ∗>) ≤ r∗. 
So now the right hand side of (8) can be reshaped as\n〈∇R(At−1), A∗ −At−1〉+ ρ−r+r∗\n2 ‖A∗ −At−1‖2F\n= 〈∇R(At−1), U ′V ′> + U1V 2> + U2V ∗>〉+ ρ−r+r∗\n2 ‖U ′V ′> + U1V 2> + U2V ∗>‖2F\nNow, note that since by definition the columns of U ′ are in im(U) and the columns of V ′ are in im(V ), Lemma A.8 implies that 〈∇R(At−1), U ′V ′>〉 = 0. Therefore the above is equal to\n〈∇R(At−1), U1V 2> + U2V ∗>〉+ ρ−r+r∗\n2 ‖U ′V ′> + U1V 2> + U2V ∗>‖2F\n≥ 〈∇R(At−1), U1V 2>〉+ 〈∇R(At−1), U2V ∗>〉+ ρ−r+r∗\n2\n( ‖U1V 2>‖2F + ‖U2V ∗>‖2F ) ≥ 2 min\nrank(M)≤r∗\n{ 〈∇R(At−1),M〉+ ρ−r+r∗\n2 ‖M‖2F\n}\n= −2‖Hr ∗(∇R(At−1))‖2F\n2ρ−r+r∗\n≥ −r∗ ‖∇R(At−1)‖ 2 2\nρ−r+r∗\nwhere the first equality follows by noticing that the columns of V ′ and V 1 are orthogonal to those of V 2 and the columns of U ′ and U1 are orthogonal to those of U2, and applying Lemma A.7. The last equality is a direct application of Lemma A.6 and the last inequality states that the largest squared singular value is not smaller than the average of the top r∗ squared singular values. Therefore we have concluded that\n‖∇R(At−1)‖22 ≥ ρ−r+r∗\nr∗ (R(At−1)−R(A∗))\nPlugging this back into the smoothness inequality, we get\nR(At−1)−R(At) ≥ 1\n2r∗κ (R(At−1)−R(A∗))\nor equivalently\nR(At)−R(A∗) ≤ (\n1− 1 2r∗κ\n) (R(At−1)−R(A∗)) .\nTherefore after L = 2r∗κ log R(A0)−R(A ∗) iterations we have\nR(AT )−R(A∗) ≤ (\n1− 1 2r∗κ\n)L (R(A0)−R(A∗))\n≤ e− L2r∗κ (R(A0)−R(A∗)) ≤\nSince A0 = 0, the result follows.\nA.3 PROOF OF THEOREM 2.2 (LOCAL SEARCH)\nProof. Similarly to Section A.3, we let At−1 be the current solution before iteration t − 1 ≥ 0. Let u ∈ Rm and v ∈ Rm be left and right singular vectors of matrix ∇R(A), i.e. unit vectors maximizing |〈∇R(A), uv>〉| and let\nBt = {B|B = At−1 + ηuvT − σminxy>, η ∈ R},\nwhere σminxy> = At−1 −Hr−1(At−1) is the rank-1 term corresponding to the minimum singular value of At−1. By smoothness we have\nR(At−1)−R(At) ≥ max B∈Bt {R(At−1)−R(B)}\n≥ max B∈Bt\n{ −〈∇R(At−1), B −At−1〉 −\nρ+2 2 ‖B −At−1‖2F } = max\nη∈R\n{ −〈∇R(At−1), ηuv> − σminxy>〉 −\nρ+2 2 ‖ηuv> − σminxy>‖2F } ≥ max\nη∈R\n{ −〈∇R(At−1), ηuv>〉 − η2ρ+2 − σ2minρ + 2 } = max\nη∈R\n{ η‖∇R(At−1)‖2 − η2ρ+2 − σ2minρ + 2 } = ‖∇R(At−1)‖22\n4ρ+2 − σ2minρ+2 ,\nwhere in the last inequality we used the fact that 〈∇R(At−1), xy>〉 = 0 following from Lemma A.8, as well as Lemma A.1.\nOn the other hand, by strong convexity,\nR(A∗)−R(At−1) ≥ 〈∇R(At−1), A∗ −At−1〉+ ρ−r+r∗\n2 ‖A∗ −At−1‖2F .\nLet At−1 = UV > and A∗ = U∗V ∗>. We write\nA∗ = U∗V ∗> = (U1 + U2)(V 1 + V 2)> = U1V 1> + U1V 2> + U2V ∗>\nwhere U1 is a matrix where every column of U∗ is replaced by its projection on im(U) and U2 = U∗ − U1 and similarly V 1 is a matrix where every column of V ∗ is replaced by its projection on im(V ) and V 2 = V ∗ − V 1. By setting U ′ = (−U | U1) and V ′ = (V | V 1) we can write\nA∗ −At−1 = U ′V ′> + U1V 2> + U2V ∗>\nwhere im(U ′) = im(U) and im(V ′) = im(V ). Also, note that\nrank(U1V 2>) ≤ rank(V 2) ≤ rank(V ∗) = rank(A∗) ≤ r∗\nand similarly rank(U2V ∗>) ≤ r∗. 
So we now have\n〈∇R(At−1), A∗ −At−1〉+ ρ−r+r∗\n2 ‖A∗ −At−1‖2F\n= 〈∇R(At−1), U ′V ′> + U1V 2> + U2V ∗>〉+ ρ−r+r∗\n2 ‖U ′V ′> + U1V 2> + U2V ∗>‖2F\n= 〈∇R(At−1), U1V 2> + U2V ∗>〉+ ρ−r+r∗\n2 ‖U ′V ′> + U1V 2> + U2V ∗>‖2F\n= 〈∇R(At−1), U1V 2> + U2V ∗>〉+ ρ−r+r∗\n2\n( ‖U ′V ′>‖2F + ‖U1V 2>‖2F + ‖U2V ∗>‖2F )\n≥ 〈∇R(At−1), U1V 2>〉+ 〈∇R(At−1), U2V ∗>〉+ ρ−r+r∗\n2\n( ‖U1V 2>‖2F + ‖U2V ∗>‖2F ) + ρ−r+r∗\n2 ‖U ′V ′>‖2F\n≥ 2 min rank(M)≤r∗\n{ 〈∇R(At−1),M〉+ ρ−r+r∗\n2 ‖M‖2F\n} + ρ−r+r∗\n2 ‖U ′V ′>‖2F\n= −2‖Hr ∗(∇R(At−1))‖2F 2ρ−r+r∗ + ρ−r+r∗ 2 ‖U ′V ′>‖2F\n≥ −r∗ ‖∇R(At−1)‖ 2 2 ρ−r+r∗ + ρ−r+r∗ 2 ‖U ′V ′>‖2F\nwhere the second equality follows from the fact that 〈∇R(At−1), uv>〉 = 0 for any u ∈ im(U), v ∈ im(V ), the third equality from the fact that im(U2) ⊥ im(U ′) ∪ im(U1) and im(V 2) ⊥ im(V ′) and by applying Lemma A.7, and the last inequality from the fact that the largest squared singular value is not smaller than the average of the top r∗ squared singular values. Now, note that since rank(U1V 1>) ≤ r∗ < r = rank(UV >),\n‖U ′V ′>‖2F = ‖U1V 1> − UV >‖2F\n= r∑ i=1 σ2i (U 1V 1> − UV >)\n≥ r∑ i=1 (σi+r∗(UV >)− σr∗+1(U1V 1>))2\n= r∑ i=r∗+1 σ2i (UV >)\n≥ (r − r∗)σ2min(UV >) = (r − r∗)σ2min(At−1) ,\nwhere we used the fact that rank(U1V 1>) ≤ r∗ together with Lemma A.5. Therefore we have concluded that\n‖∇R(At−1)‖22 ≥ ρ−r+r∗\nr∗ (R(At−1)−R(A∗)) +\n(ρ−r+r∗) 2(r − r∗)\n2r∗ σ2min\nPlugging this back into the smoothness inequality and setting κ̃ = ρ + 2\nρ− r+r∗\n, we get\nR(At−1)−R(At) ≥ 1\n4r∗κ̃ (R(At−1)−R(A∗)) +\n( ρ−r+r∗(r − r∗)\n8r∗κ̃ − ρ+2\n) σ2min(At−1)\n≥ 1 4r∗κ̃ (R(At−1)−R(A∗))\nas long as r ≥ r∗(1 + 8κ̃2), or equivalently, R(At)−R(A∗) ≤ (\n1− 1 4r∗κ̃\n) (R(At−1)−R(A∗)) .\nTherefore after L = 4r∗κ̃ log R(A0)−R(A ∗) iterations we have\nR(AT )−R(A∗) ≤ (\n1− 1 4r∗κ̃\n)L (R(A0)−R(A∗))\n≤ e− L4r∗κ̃ (R(A0)−R(A∗)) ≤\nSince A0 = 0 and κ̃ ≤ κr+r∗ , the result follows.\nA.4 TIGHTNESS OF THE ANALYSIS\nIt is important to note that the κr+r∗ factor that appears in the rank bounds of both Theorems 2.1 and 2.2 is inherent in these algorithms and not an artifact of our analysis. In particular, such lower bounds based on the restricted condition number have been previously shown for the problem of sparse linear regression. More specifically, Foster et al. (2015) showed that there is a family of instances in which the analogues of Greedy and Local Search for sparse optimization require the sparsity to be Ω(s∗κ′) for constant error > 0, where s∗ is the optimal sparsity and κ′ is the sparsity-restricted condition number. These instances can be easily adjusted to give a rank lower bound of Ω(r∗κr+r∗) for constant error > 0, implying that the κ dependence in Theorem 2.1 is tight for Greedy. Furthermore, specifically for Local Search, Axiotis & Sviridenko (2020) additionally\nshowed that there is a family of instances in which the analogue of Local Search for sparse optimization requires a sparsity of Ω(s∗(κ′)2). Adapting these instances to the setting of rank-constrained convex optimization is less trivial, but we conjecture that it is possible, which would lead to a rank lower bound of Ω(r∗κ2r+r∗) for Local Search.\nWe present the following lemma, which essentially states that sparse optimization lower bounds for Orthogonal Matching Pursuit (OMP, Pati et al. (1993)) (resp. Orthogonal Matching Pursuit with Replacement (OMPR, Jain et al. (2011))) in which the optimal sparse solution is also a global optimum, immediately carry over (up to constants) to rank-constrained convex optimization lower bounds for Greedy (resp. Local Search). 
Lemma A.9. Let f ∈ Rn → R and x∗ ∈ Rn be an s∗-sparse vector that is also a global minimizer of f . Also, let f have restricted smoothness parameter β at sparsity level s + s∗ for some s ≥ s∗ and restricted strong convexity parameter α at sparsity level s + s∗. Then we can define the rankconstrained problem, with R : Rn×n → R,\nmin rank(A)≤s∗\nR(A) := f(diag(A)) + β\n2 ‖A− diag(A)‖2F , (9)\nwhere diag(A) is a vector containing the diagonal of A. R has rank-restricted smoothness at rank s+s∗ at most 2β and rank-restricted strong convexity at rank s+s∗ at least α. Suppose that we run t iterations of OMP (resp. OMPR) starting from a solution x, to get solution x′, and similarly run t iterations of Greedy (resp. Local Search) starting from solution A = diag(x) (where diag(x) is a diagonal matrix with x on the diagonal) to get solution A′. Then A′ is diagonal and diag(A′) = x′. In other words, in this scenario OMP and Greedy (resp. OMPR and Local Search) are equivalent.\nProof. Note that for any solution  of R we have R(Â) ≥ f(diag(Â)) ≥ f(x∗), with equality only if  is diagonal. Furthermore, rank(diag(x∗)) ≤ s∗, meaning that diag(x∗) is an optimal solution of (9). Now, given any diagonal solution A of (9) such that A = diag(x), we claim that one step of either Greedy or Local Search keeps it diagonal. This is because\n∇R(A) = diag(∇f(x)) + β 2 (A− diag(A)) = diag(∇f(x)) .\nTherefore the largest eigenvalue of ∇R(A) has corresponding eigenvector 1i for some i, which implies that the rank-1 component which will be added is a multiple of 1i1>i . For the same reason the rank-1 component removed by Local Search will be a multiple of 1j1>j for some j. Therefore running Greedy (resp. Local Search) on such an instance is identical to running OMP (resp. OMPR) on the diagonal.\nTogether with the lower bound instances of Foster et al. (2015) (in which the global minimum property is true), it immediately implies a rank lower bound of Ω(r∗κr+r∗) for getting a solution with constant error for rank-constrained convex optimization. On the other hand, the lower bound instances of Axiotis & Sviridenko (2020) give a quadratic lower bound in κ for OMPR. The above lemma cannot be directly applied since the sparse solutions are not global minima, but we conjecture that a similar proof will give a rank lower bound of Ω(r∗κ2r+r∗) for rank-constrained convex optimization with Local Search.\nA.5 ADDENDUM TO SECTION 4" } ]
2021
LOCAL SEARCH ALGORITHMS FOR RANK-CONSTRAINED CONVEX OPTIMIZATION
SP:9eeb3b40542889b8a8e196f126a11f80e177f031
[ "The paper uses selective training with pseudo labels. Specifically, the method selects the pseudo-labeled data associated with small loss after data augmentation and then uses the selected data to train the model. The model computes the confidence of the pseudo labels and applies a threshold to determine the number of selected samples and to ignore inaccurate pseudo labels. Moreover, MixConf, a variation of Mixup, is proposed as a data augmentation method to train a better confidence-calibrated model. Finally, experimental results on standard datasets show the effectiveness of the proposed method compared to state-of-the-art SSL methods." ]
We propose a novel semi-supervised learning (SSL) method that adopts selective training with pseudo labels. In our method, we generate hard pseudo-labels and also estimate their confidence, which represents how likely each pseudo-label is to be correct. Then, we explicitly select which pseudo-labeled data should be used to update the model. Specifically, assuming that the loss on incorrectly pseudo-labeled data increases sensitively under data augmentation, we select the data corresponding to relatively small loss after applying data augmentation. The confidence is used not only for screening candidates of pseudo-labeled data to be selected but also for automatically deciding how many pseudo-labeled data should be selected within a mini-batch. Since accurate estimation of the confidence is crucial in our method, we also propose a new data augmentation method, called MixConf, that enables us to obtain confidence-calibrated models even when the number of training data is small. Experimental results with several benchmark datasets validate the advantage of our SSL method as well as MixConf.
[]
[ { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical Report,", "year": 2009 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on challenges in representation learning, ICML,", "year": 2013 }, { "authors": [ "Vishnu Suresh Lokhande", "Songwong Tasneeyapant", "Abhay Venkatesh", "Sathya N Ravi", "Vikas Singh" ], "title": "Generating accurate pseudo-labels in semi-supervised learning and avoiding overconfident predictions via hermite polynomial activations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin A Raffel", "Ekin Dogus Cubuk", "Ian Goodfellow" ], "title": "Realistic 
evaluation of deep semi-supervised learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jesper E Van Engelen", "Holger H Hoos" ], "title": "A survey on semi-supervised learning", "venue": "Machine Learning,", "year": 2020 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Juho Kannala", "Yoshua Bengio", "David Lopez-Paz" ], "title": "Interpolation consistency training for semi-supervised learning", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Qin Wang", "Wen Li", "Luc Van Gool" ], "title": "Semi-supervised learning by augmented distribution alignment", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Liheng Zhang", "Guo-Jun Qi" ], "title": "Wcp: Worst-case perturbations for semi-supervised deep learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Semi-supervised learning (SSL) is a powerful technique to deliver a full potential of complex models, such as deep neural networks, by utilizing unlabeled data as well as labeled data to train the model. It is especially useful in some practical situations where obtaining labeled data is costly due to, for example, necessity of expert knowledge. Since deep neural networks are known to be “data-hungry” models, SSL for deep neural networks has been intensely studied and has achieved surprisingly good performance in recent works (Van Engelen & Hoos, 2020). In this paper, we focus on SSL for a classification task, which is most commonly tackled in the literature.\nMany recent SSL methods adopt a common approach in which two processes are iteratively conducted: generating pseudo labels of unlabeled data by using a currently training model and updating the model by using both labeled and pseudo-labeled data. In the pioneering work (Lee, 2013), pseudo labels are hard ones, which are represented by one-hot vectors, but recent methods (Tarvainen & Valpola, 2017; Miyato et al., 2018; Berthelot et al., 2019; 2020; Verma et al., 2019; Wang et al., 2019; Zhang & Qi, 2020) often utilize soft pseudo-labels, which may contain several nonzero elements within each label vector. One simple reason to adopt soft pseudo-labels is to alleviate confirmation bias caused by training with incorrectly pseudo-labeled data, and this attempt seems to successfully contribute to the excellent performance of those methods. However, since soft pseudolabels only provide weak supervisions, those methods often show slow convergence in the training (Lokhande et al., 2020). For example, MixMatch (Berthelot et al., 2019), which is one of the stateof-the-art SSL methods, requires nearly 1,000,000 iterations for training with CIFAR-10 dataset. On the other hand, in this paper, we aim to utilize hard pseudo-labels to design an easy-to-try SSL method in terms of computational efficiency. Obviously, the largest problem to be tackled in this approach is how to alleviate the negative impact caused by training with the incorrect pseudo-labels.\nIn this work, we propose a novel SSL method that adopts selective training with pseudo labels. To avoid to train a model with incorrect pseudo-labels, we explicitly select which pseudo-labeled data should be used to update the model. Specifically, assuming that loss on incorrectly pseudo-labeled data sensitively increase against data augmentation, we select the data corresponding to relatively\nsmall loss after applying data augmentation. To effectively conduct this selective training, we estimate confidence of pseudo labels and utilize it not only for screening candidates of pseudo-labeled data to be selected but also for automatically deciding how many pseudo-labeled data should be selected within a mini-batch. For accurate estimation of the confidence, we also propose a new data augmentation method, called MixConf, that enables us to obtain confidence-calibrated models even when the number of training data is small. Experimental results with several benchmark datasets validate the advantage of our SSL method as well as MixConf." }, { "heading": "2 PROPOSED METHOD", "text": "Figure 2 shows an overview of our method. Given a mini-batch from labeled data and that from unlabeled data, we first generate pseudo labels of the unlabeled data based on predictions of the current model. 
Let x ∈ Rm, y ∈ {1, 2, ...C}, and f : Rm → RC denote input data, labels, and the classifier to be trained, respectively. Given the input unlabeled data xU, the pseudo label ŷU is generated by simply taking argmax of the classifier’s output f(xU). Then, we conduct selective training using both the labeled data and the pseudo-labeled data. In this training, to alleviate negative effect caused by training with incorrect pseudo-labels, we explicitly select which data should be used to update the model. Below, we describe details of this selective training." }, { "heading": "2.1 SELECTIVE TRAINING WITH PSEUDO LABELS BASED ON CONFIDENCE", "text": "As described previously, the pseudo labels are generated based on the predictions of the current model, and we assume that the confidence of those predictions can be also computed in addition to the pseudo labels. When we use a popular architecture of deep neural networks, it can be obtained by simply taking max of the classifier’s output (Hendrycks & Gimpel, 2016) as:\nci = max j∈{1,2,...,C}\nf(xUi )[j], (1)\nwhere ci is the confidence of the classifier’s prediction on the i-th unlabeled data xUi , and f(x)[j] is the j-th element of f(x). When the model is sufficiently confidence-calibrated, the confidence ci is expected to match the accuracy of the corresponding prediction f(xUi ) (Guo et al., 2017), which means it also matches the probability that the pseudo label ŷUi is correct.\nTo avoid training with incorrect pseudo-labels, we explicitly select the data to be used to train the model based on the confidence. This data selection comprises two steps: thresholding the confidence and selecting relatively small loss calculated with augmented pseudo-labeled data. The first step is\nquite simple; we pick up the pseudo-labeled data that have higher confidence than a certain threshold cthr and discard the remaining. In the second step, MixConf, which will be introduced later but is actually a variant of Mixup (Zhang et al., 2018), is applied to both the labeled and unlabeled data to augment them. As conducted in (Berthelot et al., 2019), we shuffle all data and mix them with the original labeled and pseudo-labeled data, which results in {(x̃Li , p̃Li )} BL i=1 and {(x̃Uj , p̃Uj )} BU j=1, respectively, where p ∈ RC is a vector-style representation of the label that is adopted to represent a mixed label. Then, we calculate the standard cross entropy loss for each mixed data. Finally, we select the mixed data that result in relatively small loss among the all augmented data, and only the corresponding small-loss is minimized to train the model.\nWhy does the small-loss selection work? Our important assumption is that the loss calculated with incorrect labels tends to sensitively increase when the data is augmented. This assumption would be supported by effectiveness of the well-known technique, called test-time augmentation (Simonyan & Zisserman, 2015), in which incorrect predictions are suppressed by taking an average of the model’s outputs over several augmentations. Since we conduct the confidence thresholding, the loss corresponding to the pseudo-labeled data is guaranteed to be smaller than a certain loss level defined by the threshold cthr. However, when we apply data augmentation, that is MixConf, to the pseudo-labeled data, the loss related to incorrectly pseudo-labeled data becomes relatively large, if the above assumption is valid. 
It means that selecting relatively small loss after applying MixConf leads to excluding incorrect pseudo-labels, and we can safely train the model by using only the selected data.\nHan et al. (2018) and Lokhande et al. (2020) have presented similar idea, called small-loss trick (Han et al., 2018) or speed as a supervisor (Lokhande et al., 2020), to avoid training with incorrect labels. However, their assumption is different from ours; it is that loss of incorrectly labeled data decreases much slower than that of correctly labeled data during training. Due to this assumption, their methods require joint training of two distinct models (Han et al., 2018) or nested loop for training (Lokhande et al., 2020) to confirm which data show relatively slow convergence during training, which leads to substantially large computational cost. On the other hand, since our method focuses on change of loss values against data augmentation, not that during training, we can efficiently conduct the selective training by just utilizing data augmentation in each iteration.\nSince the confidence of the pseudo label represents the probability that the pseudo label is correct, we can estimate how many data we should select based on the confidence by calculating an expected number of the mixed data generated from two correctly labeled data. Specifically, when the averaged confidence within the unlabeled data is equal to cave, the number of the data to be selected can be determined as follows:\nnL = BL + caveBU BL +BU BL, (2)\nnU = min ( BL,\nBL + caveBU BL +BU caveBU\n) , (3)\nwhere nL is for the data generated by mixing the labeled data and shuffled data, and nU is for those generated by mixing the unlabeled data and shuffled data. Here, to avoid too much contribution from the pseudo-labeled data, we restrict nU to be smaller than BL. Within this restriction, we can observe that, if we aim to perfectly balance nL and nU, BU should be set to BL/cave. However, cave cannot be estimated before training and can fluctuate during training. Therefore, for stable training, we set BU = BL/cthr instead and fix it during training.\nFinally, the total loss L to be minimized in our method is formulated as the following equation:\nL = 1 BL nL∑ i=1 l(x̃Ls[i], p̃ L s[i]) + λU 1 BL nU∑ j=1 l(x̃Ut[j], p̃ U t[j]), (4)\nwhere l is the standard cross entropy loss, s and t represent the sample index sorted by loss in an ascending order within each mini-batch, and λU is a hyper-parameter that balances the two terms.\nTo improve the accuracy of pseudo labels as well as their confidence, we can average the model’s outputs over K augmentations to estimate pseudo labels as conducted in (Berthelot et al., 2019). In that case, we conduct MixConf for all augmented pseudo-labeled data, which results in K minibatches each of which containsBU mixed data. Therefore, we need to modify the second term in the\nright-hand side of Eq. (4) to take the average of losses over all the mini-batches. In our experiments, we used K = 4 except for an ablation study." }, { "heading": "2.2 MIXCONF TO OBTAIN BETTER CALIBRATED MODELS", "text": "In the previous section, we assumed that the model is sufficiently confidence-calibrated, but deep neural networks are often over-confident on their predictions in general (Guo et al., 2017). This problem gets bigger in case of training with a small-scale dataset as we will show in our experiments. 
Consequently, it should occur in our SSL setting, because there are only a small amount of labeled training data in the early stage of the training. If the confidence is over-estimated, incorrect pseudolabels are more likely to be selected to calculate the loss due to loose confidence-thresholding and over-estimated (nL, nU), which should significantly degrade the performance of the trained model. To tackle this problem, we propose a novel data augmentation method, called MixConf, to obtain well-calibrated models even when the number of training data is small. MixConf basically follows the scheme of Mixup, which is known to contribute to model’s calibration (Thulasidasan et al., 2019), but is more carefully designed for confidence calibration.\nFigure 2 shows an overview of MixConf. In a similar way with Mixup, MixConf randomly picks up two samples {(x0, p0), (x1, p1)} from the given training dataset and generates a new training sample (x̃, p̃) by linearly interpolating these samples as the following equations:\nx̃ = λax0 + (1− λa)x1, (5) p̃ = λbp0 + (1− λb)p1, (6)\nwhere λa ∈ [0, 1] and λb ∈ [0, 1] denote interpolation ratios for data and labels, respectively. Note that λa is not restricted to be equal to λb in MixConf, while λa = λb in Mixup. Since Mixup is not originally designed to obtain confidence-calibrated models, we have to tackle the following two questions to obtain better calibrated models by such a Mixup-like data augmentation method:\n• How should we set the ratio for the data interpolation? (In case of Mixup, λa is randomly sampled from the beta distribution)\n• How should we determine the labels of the interpolated data? (In case of Mixup, λb is set to be equal to λa)\nWe first tackle the second question to clarify what kind of property the generated samples should have. Then, we derive how to set λa and λb so that the generated samples have this property." }, { "heading": "2.2.1 HOW TO DETERMINE THE LABELS OF THE INTERPOLATED DATA", "text": "Let us consider the second question shown previously. When the model predicts ŷ for the input x, the expected accuracy of this prediction should equal the corresponding class posterior probability p(ŷ|x). It means that, if the model is perfectly calibrated, its providing confidence should match the class posterior probability. On the other hand, from the perspective of maximizing the prediction accuracy, the error rate obtained by the ideally trained model should match the Bayes error rate, which is achieved when the model successfully predicts the class that corresponds to the maximum class-posterior probability. Considering both perspectives, we argue that maxj f(x)[j] of the ideally trained model should represent the class posterior probability to have the above-mentioned properties. Therefore, to jointly achieve high predictive accuracy and confidence calibration, we adopt the class posterior probability as the supervision of the confidence on the generated data. Specifically, we aim to generate a new sample (x̃, p̃) so that it satisfies p̃ = p(y|x̃). Although it is quite difficult to accurately estimate the class posterior probability for any input data in general, we need to estimate it only for the linearly interpolated data in our method. Here, we estimate it via simple kernel-density estimation based on the original sample pair {(x0, p0), (x1, p1)}. First, we rewrite p(y|x̃) by using Bayes’s theorem as the following equation:\np(y = j|x̃) = πjp(x̃|y = j) p(x̃) , (7)\nwhere πj denotes p(y = j) and j ∈ {1, 2, ..., C}. 
Then, intead of directly estimating p(y|x̃), we estimate both p(x̃) and p(x̃|y) by using a kernel function k as\np(x̃) = ∑\ni∈{0,1}\n1 2 p(x̃|y = yi), p(x̃|y) = { k(x̃− x0) if y = y0, k(x̃− x1) if y = y1, 0 otherwise.\n(8)\nSince we only use the two samples, (x0, p0) and (x1, p1), for this estimation, πj is set to 1/2 if j ∈ {y0, y1} and 0 otherwise. By substituting Eqs. (8) into Eq. (7), we obtain the following equation:\np(y|x̃) = k(x̃−x0)∑ i∈{0,1} k(x̃−xi) if y = y0, k(x̃−x1)∑ i∈{0,1} k(x̃−xi) if y = y1,\n0 otherwise.\n(9)\nTo make p̃ represent this class posterior probability, we need to set the interpolation ratio λb in Eq. (6) as the following equation:\nλb = p(y = y0|x̃) = k(x̃− x0)∑\ni∈{0,1} k(x̃− xi) . (10)\nOnce we generate the interpolated data, we can determine the labels of the interpolated data by using Eq. (10). Obviously, λb is not necessarily equal to λa, which is different from Mixup." }, { "heading": "2.2.2 HOW TO SET THE RATIO FOR THE DATA INTERPOLATION", "text": "Since we have already formulated p(x̃), we have to carefully set the ratio for the data interpolation so that the resulting interpolated samples follow this distribution. Specifically, we cannot simply use the beta distribution to sample λa, which is used in Mixup, and need to specify an appropriate distribution p(λa) to guarantee the interpolated data follow p(x̃) shown in Eq. (8).\nBy using Eq. (5) and Eq. (8), we can formulate p(λa) as the following equation:\np(λa) = p(x̃) ∣∣∣∣ dx̃dλa ∣∣∣∣ = |x0 − x1| ∑\ni∈{0,1}\n1 2 k(x̃− xi). (11)\nSince the kernel function k is defined in the x-space, it is hard to directly sample λa from the distribution shown in the right-hand side of Eq. (11). To re-formulate this distribution to make it easy to sample, we define another kernel function in the λa-space as\nk′(λa) = |x0 − x1| k(λa(x0 − x1)). (12)\nBy using this new kernel function, we can rewrite p(λa) in Eq. (11) and also λb in Eq. (10) as follows:\np(λa) = ∑\ni∈{0,1}\n1 2 k′(λa − (1− i)), (13)\nλb = k′(λa − 1)∑\ni∈{0,1} k ′(λa − (1− i))\n. (14)\nThis formulation enables us to easily sample λa and to determine its corresponding λb. Note that we need to truncate p(λa) when sampling λa to guarantee that the sampled λa is in the range of [0, 1]. The kernel function k′ should be set manually before training. It corresponds to a hyper-parameter setting of the beta distributon in case of Mixup." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 CONFIDENCE ESTIMATION", "text": "We first conducted experiments to confirm how much MixConf contributes to improve the confidence calibration especially when the number of training data is small. We used object recognition datasets (CIFAR-10 and CIFAR-100 (Krizhevsky, 2009)) and a fashion-product recognition dataset (Fashion MNIST (Xiao et al., 2017)) as in the previous study (Thulasidasan et al., 2019). The number of training data is 50,000 for the CIFAR-10 / CIFAR-100 and 60,000 for the Fashion MNIST. To make a small-scale training dataset, we randomly chose a subset of the original training dataset while keeping class priors unchanged from the original. Using this dataset, we trained ResNet-18 (He et al., 2016) by the Adam optimizer (Kingma & Ba, 2015). 
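During training, each mini-batch is augmented by MixConf; as a reference, a minimal sketch of the MixConf-G sampling step defined by Eqs. (5), (6), (13) and (14) is given below. The unnormalized Gaussian kernel k', the rejection-based truncation of p(lambda_a) to [0, 1], and the function name are implementation assumptions rather than details taken from the paper.

import numpy as np

def mixconf_gaussian(x0, p0, x1, p1, sigma=0.4, rng=np.random):
    # k'(t) is a Gaussian kernel in lambda-space (MixConf-G); normalization constants
    # cancel in the ratio that defines lambda_b, so an unnormalized kernel suffices.
    def kprime(t):
        return np.exp(-0.5 * (t / sigma) ** 2)
    while True:                                   # sample lambda_a from Eq. (13),
        center = rng.choice([1.0, 0.0])           # truncated to [0, 1] by rejection
        lam_a = rng.normal(center, sigma)
        if 0.0 <= lam_a <= 1.0:
            break
    lam_b = kprime(lam_a - 1.0) / (kprime(lam_a - 1.0) + kprime(lam_a))   # Eq. (14)
    x_mix = lam_a * x0 + (1.0 - lam_a) * x1       # Eq. (5)
    p_mix = lam_b * p0 + (1.0 - lam_b) * p1       # Eq. (6)
    return x_mix, p_mix

For the Gaussian kernel, lambda_b is a logistic function of lambda_a rather than lambda_a itself, which is exactly where MixConf-G departs from Mixup.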
After training, we measured the confidence calibration of the trained models at test data by the Expected Calibration Error (ECE) (Naeini et al., 2015):\nECE = M∑ m=1 |Bm| N |acc(Bm)− conf(Bm)| , (15)\nwhere Bm is a set of samples whose confidence fall into m-th bin, acc(Bm) represents an averaged accuracy over the samples in Bm calculated by |Bm|−1 ∑ i∈Bm 1(ŷi = yi), and conf(Bm) repre-\nsents an averaged confidence in Bm calculated by |Bm|−1 ∑\ni∈Bm ĉi. We split the original test data into two datasets, namely 500 samples for validation and the others for testing. Note that the validation was conducted by evaluating the prediction accuracy of the trained model on the validation data, not by the ECE. For each setting, we conducted the experiments five times with random generation of the training dataset and initialization of the model, and its averaged performance will be reported.\nFor our MixConf, we tried the Gaussian kernel and the triangular kernel, and we call the former one MixConf-G and the latter one MixConf-T. The width of the kernel, which is a hyper-parameter of MixConf, is tuned via the validation. For comparison, we also trained the models without Mixup as a baseline method and those with Mixup. Mixup has its hyper-parameter α to determine p(λa) = Beta(α, α), which is also tuned in the experiment.\nFigure 3 shows the ECE of the trained models. The horizontal axis represents the proportion of the original training data used for training, and the vertical axis represents the ECE of the model trained with the corresponding training dataset. In all methods, the ECE increases when the number of training data gets small, which indicates that the over-confidence problem of DNNs gets bigger in case of the small-scale training dataset. As reported in (Thulasidasan et al., 2019), Mixup substantially reduces the ECE compared with the baseline method, but its ECE still increases to some extent when the training dataset becomes small-scale. MixConf-G succeeds in suppressing such increase and achieves lower ECE in all cases. The performance of MixConf-T is not so good as that of MixConf-G especially in case of CIFAR-10/100. Since the actual width of the kernel function in the data space gets small according to the increase of the training data due to smaller |x0 − x1| (see Eq. (12)), the difference between MixConf and Mixup becomes small, which results in similar performance of these methods when the number of the training data is large. Through the almost all settings, MixConf-G with σ = 0.4 performs best. Therefore, we used it in our SSL method in the experiments shown in the next section." }, { "heading": "3.2 SEMI-SUPERVISED LEARNING", "text": "To validate the advantage of our SSL method, we conducted experiments with popular benchmark datasets: CIFAR-10 and SVHN dataset (Netzer et al., 2011). We randomly selected 1,000 or 4,000 samples from the original training data and used them as labeled training data while using the remaining data as unlabeled ones. Following the standard setup (Oliver et al., 2018), we used the WideResNet-28 model. We trained this model by using our method with the Adam optimizer and evaluated models using an exponential moving average of their parameters as in (Berthelot et al., 2019). The number of iterations for training is set to 400,000. The hyper-parameters (λU, cthr) in our method are set to (2, 0.8) for CIFAR-10 and (3, 0.6) for SVHN dataset, unless otherwise noted. 
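For reference, the ECE in Eq. (15) with M equal-width confidence bins can be computed as in the following minimal sketch; the names are illustrative and the number of bins is an assumed hyper-parameter. Here confidences would be maxj f(x)[j] and predictions the corresponding argmax over classes.

import numpy as np

def expected_calibration_error(confidences, predictions, labels, num_bins=15):
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # samples in B_m
        if not in_bin.any():
            continue
        acc_bin = correct[in_bin].mean()       # acc(B_m)
        conf_bin = confidences[in_bin].mean()  # conf(B_m)
        ece += (in_bin.sum() / n) * abs(acc_bin - conf_bin)
    return ece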
We report the averaged error rate as well as the standard deviation over five runs with random selection of the labeled data and random initialization of the model. We compared the performance of our method with those of several recent SSL methods, specifically, Virtual Adversarial Training (VAT) (Miyato et al., 2018), Interpolation Consistency Training (ICT) (Verma et al., 2019), MixMatch (Berthelot et al., 2019), Pseudo-labeling (Lee, 2013), and Hermite-SaaS (Lokhande et al., 2020). Note that the former three methods utilize soft pseudo-labels, while the latter two use hard ones. We did not include ReMixMatch (Berthelot et al., 2020) in this comparison, because it adopts an optimization of data-augmentation policy that heavily utilizes domain knowledge about the classification task, which is not used in the other methods including ours.\nTable 1 shows the test error rates achieved by the SSL methods for each setting. For CIFAR-10, our method has achieved the lowest error rates in both settings. Moreover, our method has shown relatively fast convergence; for example, in case of 1,000 labels, the number of iterations to reach 7.75% in our method was around 160,000, while that in MixMatch is about 1,000,000 as reported in (Berthelot et al., 2019). Lokhande et al. (2020) have reported much faster convergence, but our method outperforms their method in terms of the test error rates by a significant margin. For SVHN dataset, our method has shown competitive performance compared with that of the other state-ofthe-art methods.\nWe also conducted an ablation study and investigated performance sensitivity to the hyperparameters using CIFAR-10 with 1,000 labels. The results are shown in Table 2. When we set K = 1 or use Mixup instead of MixConf, the error rate substantially increases, which indicates that it is important to accurately estimate the confidence of the pseduo labels in our method. On the other hand, the role of the small-loss selection is relatively small, but it shows distinct improvement. Decreasing the value of λU leads to degraded performance, because it tends to induce overfitting to small-scale labeled data. However, if we set too large value to λU, the training often diverges due to overly relying on pseudo labels. Therefore, we have to carefully set λU as large as possible within a range in which the model is stably trained. The confidence threshold cthr is also important; the test error rate varies according to the value of cthr as shown in Fig. 4. Considering to accept pseudolabeled data as much as possible, smaller cthr is preferred, but too small cthr substantially degrade the performance due to increasing a risk of selecting incorrectly pseudo-labeled data to calculate the loss. We empirically found that, when we gradually decrease the value of cthr, the training loss of the trained model drastically decreases at a little smaller cthr than the optimal value as shown by a red line in Fig. 4. 
This behavior should provide a hint for appropriately setting cthr.\nTable 1: Experimental results on CIFAR-10 and SVHN dataset.\nCIFAR-10 SVHN\nMethod 1,000 labels 4,000 labels 1,000 labels 4,000 labels\nSoft pseudolabels\nVAT (Miyato et al., 2018) 18.68±0.40 11.05±0.31 5.98±0.21 4.20±0.15 ICT (Verma et al., 2019) - 7.66±0.17 3.53±0.07 -\nMixMatch (Berthelot et al., 2019) 7.75±0.32 6.24±0.06 3.27±0.31 2.89±0.06\nHard pseudolabels\nPseudo-labeling (Lee, 2013) 31.53±0.98 17.41±0.37 10.19±0.41 5.71±0.07\nHermite-SaaS (Lokhande et al., 2020) 20.77 10.65 3.57±0.04 -\nOur method 7.13±0.08 5.81±0.12 3.63±0.12 3.23±0.06" }, { "heading": "4 CONCLUSION", "text": "In this paper, we presented a novel SSL method that adopts selective training with pseudo labels. In our method, we explicitly select the pseudo-labeled data that correspond to relatively small loss after the data augmentation is applied, and only the selected data are used to train the model, which leads to effectively preventing the model from training with incorrectly pseudo-labeled data. We estimate the confidence of the pseudo labels when generating them and use it to determine the number of the samples to be selected as well as to discard inaccurate pseudo labels by thresholding. We also proposed MixConf, which is the data augmentation method that enables us to train more confidencecalibrated models even in case of small-scale training data. Experimental results have shown that our SSL method performs on par or better than the state-of-the-art methods thanks to the selective training and MixConf." } ]
2020
SEMI-SUPERVISED LEARNING BY SELECTIVE TRAINING WITH PSEUDO LABELS VIA CONFIDENCE ESTIMATION
SP:d818bed28daccbda111c39cdc9d097b5755b3d89
[ "Paper provides an evaluation of the reliability of confidence levels of well known uncertainty quantification techniques in deep learning on classification and regression tasks. The question that the authors are trying to answer empirically is: when a model claims accuracy at a confidence level within a certain interval , how often does the actual accuracy fall within that interval? This is conceptually similar to the recent slew of papers seeking to empirically evaluate the softmax calibration of deep models where the question there is how often do predicted probabilities of the winning class reflect the true probability of the correct answer, but in this paper the focus is on confidence level and confidence intervals. " ]
Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model’s uncertainty is evaluated using point-prediction metrics such as negative log-likelihood or the Brier score on heldout data. In this study, we provide the first large scale evaluation of the empirical frequentist coverage properties of well known uncertainty quantification techniques on a suite of regression and classification tasks. We find that, in general, some methods do achieve desirable coverage properties on in distribution samples, but that coverage is not maintained on out-of-distribution data. Our results demonstrate the failings of current uncertainty quantification techniques as dataset shift increases and establish coverage as an important metric in developing models for real-world applications.
[]
[ { "authors": [ "Rina Foygel Barber", "Emmanuel J Candès", "Aaditya Ramdas", "Ryan J Tibshirani" ], "title": "The limits of distribution-free conditional predictive inference. March 2019", "venue": "URL http://arxiv.org/ abs/1903.04684", "year": 1903 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Arindam Chatterjee", "Soumendra Nath Lahiri" ], "title": "Bootstrapping lasso estimators", "venue": "Journal of the American Statistical Association,", "year": 2011 }, { "authors": [ "Bradley Efron" ], "title": "The jackknife, the bootstrap and other resampling plans", "venue": null, "year": 1982 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In International Conference on Machine,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and Out-of-Distribution examples in neural networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "José Miguel Hernández-Lobato", "Ryan P Adams" ], "title": "Probabilistic backpropagation for scalable learning of bayesian neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "José Miguel Hernández-Lobato", "Yingzhen Li", "Mark Rowland", "Daniel Hernández-Lobato", "Thang Bui", "Richard E Turner" ], "title": "Black-box α-divergence minimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "B J K Kleijn", "A W van der Vaart" ], "title": "The Bernstein-Von-Mises theorem under misspecification", "venue": "Electronic Journal of Statistics,", "year": 2012 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jeremiah Zhe Liu", "Zi Lin", "Shreyas Padhy", "Dustin Tran", "Tania Bedrax-Weiss", "Balaji Lakshminarayanan" ], "title": "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "C Louizos", "M Welling" ], "title": "Multiplicative normalizing flows for variational Bayesian neural networks", "venue": "In International Conference of Machine Learning,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Structured and efficient variational deep learning with matrix Gaussian posteriors", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Wesley Maddox", "Timur 
Garipov", "Pavel Izmailov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for Bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": null, "year": 1996 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D Sculley", "Sebastian Nowozin", "Joshua V Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nick Pawlowski", "Andrew Brock", "Matthew C H Lee", "Martin Rajchl", "Ben Glocker" ], "title": "Implicit weight uncertainty in neural networks. 2017", "venue": "URL http://arxiv.org/abs/1711", "year": 2017 }, { "authors": [ "Carlos Riquelme", "George Tucker", "Jasper Snoek" ], "title": "Deep bayesian bandits showdown: An empirical comparison of bayesian deep networks for Thompson sampling", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rupesh K Srivastava", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Training very deep networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Joost van Amersfoort", "Lewis Smith", "Yee Whye Teh", "Yarin Gal" ], "title": "Uncertainty estimation using a single deep deterministic neural network", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Larry Wasserman" ], "title": "All of statistics: a concise course in statistical inference", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Yeming Wen", "Paul Vicol", "Jimmy Ba", "Dustin Tran", "Roger Grosse" ], "title": "Flipout: Efficient PseudoIndependent weight perturbations on Mini-Batches", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Florian Wenzel", "Kevin Roth", "Bastiaan S Veeling", "Jakub Świątkowski", "Linh Tran", "Stephan Mandt", "Jasper Snoek", "Tim Salimans", "Rodolphe Jenatton", "Sebastian Nowozin" ], "title": "How good is the bayes posterior in deep neural networks really", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jiayu Yao", "Weiwei Pan", "Soumya Ghosh", "Finale Doshi-Velez" ], "title": "Quality of uncertainty quantification for bayesian neural network inference", "venue": "URL http://arxiv.org/abs/ 1906.09686", "year": 2019 } ]
[ { "heading": null, "text": "Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model’s uncertainty is evaluated using point-prediction metrics such as negative log-likelihood or the Brier score on heldout data. In this study, we provide the first large scale evaluation of the empirical frequentist coverage properties of well known uncertainty quantification techniques on a suite of regression and classification tasks. We find that, in general, some methods do achieve desirable coverage properties on in distribution samples, but that coverage is not maintained on out-of-distribution data. Our results demonstrate the failings of current uncertainty quantification techniques as dataset shift increases and establish coverage as an important metric in developing models for real-world applications." }, { "heading": "1 INTRODUCTION", "text": "Predictive models based on deep learning have seen dramatic improvement in recent years (LeCun et al., 2015), which has led to widespread adoption in many areas. For critical, high-stakes domains such as medicine or self-driving cars, it is imperative that mechanisms are in place to ensure safe and reliable operation. Crucial to the notion of safe and reliable deep learning is the effective quantification and communication of predictive uncertainty to potential end-users of a system. Many approaches have recently been proposed that fall into two broad categories: ensembles and Bayesian methods. Ensembles (Lakshminarayanan et al., 2017) aggregate information from many individual models to provide a measure of uncertainty that reflects the ensembles agreement about a given data point. Bayesian methods offer direct access to predictive uncertainty through the posterior predictive distribution, which combines prior knowledge with the observed data. Although conceptually elegant, calculating exact posteriors of even simple neural models is computationally intractable (Yao et al., 2019; Neal, 1996), and many approximations have been developed (Hernández-Lobato & Adams, 2015; Blundell et al., 2015; Graves, 2011; Pawlowski et al., 2017; Hernández-Lobato et al., 2016; Louizos & Welling, 2016; 2017). Though approximate Bayesian methods scale to modern sized data and models, recent work has questioned the quality of the uncertainty provided by these approximations (Yao et al., 2019; Wenzel et al., 2020; Ovadia et al., 2019).\nPrevious work assessing the quality of uncertainty estimates have focused on calibration metrics and scoring rules such as the negative-loglikelihood (NLL), expected calibration error (ECE), and Brier score. Here we provide a complementary perspective based on the notion of empirical coverage, a well-established concept in the statistical literature (Wasserman, 2013) that evaluates the quality of a predictive set or interval instead of a point prediction. Informally, coverage asks the question: If a model produces a predictive uncertainty interval, how often does that interval actually contain the observed value? Ideally, predictions on examples for which a model is uncertain would produce larger intervals and thus be more likely to cover the observed value. More formally, given features xn ∈ Rd and a response yn ∈ R, coverage is defined in terms of a set Ĉn(x) and a level α ∈ [0, 1]. 
The set Ĉn(x) is said to have coverage at the 1− α level if for all distributions P ∈ Rd × R where (x, y) ∼ P , the following inequality holds:\nP{yn ∈ Ĉn(xn)} ≥ 1− α (1)\nThe set Ĉn(x) can be constructed using a variety of procedures. For example, in the case of simple linear regression a prediction interval for a new point xn+1 can be constructed1 using a simple, closed-form solution. Figure 1 provides a graphical depiction of coverage for two hypothetical regression models.\nA complementary metric to coverage is width, which is the size of the prediction interval or set. Width can provide a relative ranking of different methods, i.e. given two methods with the same level of coverage we should prefer the method that provides intervals with smaller widths.\nContributions: In this study we investigate the empirical coverage properties of prediction intervals constructed from a catalog of popular uncertainty quantification techniques such as ensembling, Monte-Carlo dropout, Gaussian processes, and stochastic variational inference. We assess the coverage properties of these methods on nine regression tasks and two classification tasks with and without dataset shift. These tasks help us make the following contributions:\n• We introduce coverage and width as a natural and interpretable metrics for evaluating predictive uncertainty.\n• A comprehensive set of coverage evaluations on a suite of popular uncertainty quantification techniques.\n• An examination of how dataset shift affects these coverage properties." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Obtaining Predictive Uncertainty Estimates Several lines of work focus on improving approximations of the posterior of a Bayesian neural network (Graves, 2011; Hernández-Lobato & Adams, 2015; Blundell et al., 2015; Hernández-Lobato et al., 2016; Louizos & Welling, 2016; Pawlowski et al., 2017; Louizos & Welling, 2017). Yao et al.\n1A well-known result from the statistics literature (c.f. chapter 13 of Wasserman (2013)) is that the interval is given by ŷn+1± tn−2sy √ 1/n+ (xn+1 − x̄)2/((n− 1)s2x), where ŷn+1 is the predicted value, tn−2 is the 1 − α/2 critical value from a t-distribution with n − 2 degrees of freedom, x̄ is the mean of x in the training data, and sy, sx are the standard deviations for y and x respectively. such that (1) holds asymptotically. However, for more complicated models such as deep learning, closed form solutions with coverage guarantees are unavailable, and constructing these intervals via the bootstrap (Efron, 1982)) can be computationally infeasible or fail to provide the correct coverage (Chatterjee & Lahiri, 2011).\n(2019) provide a comparison of many of these methods and highlight issues with common metrics of comparison, such as test-set log likelihood and RMSE. Good scores on these metrics often indicates that the model posterior happens to match the test data rather than the true posterior (Yao et al., 2019). Maddox et al. (2019) developed a technique to sample the approximate posterior from the first moment of SGD iterates. Wenzel et al. (2020) demonstrated that despite advances in these approximations, there are still outstanding challenges with Bayesian modeling for deep networks.\nAlternative methods that do not rely on estimating a posterior over the weights of a model can also be used to provide uncertainty estimates. 
Gal & Ghahramani (2016), for instance, demonstrated that Monte Carlo dropout is related to a variational approximation to the Bayesian posterior implied by the dropout procedure. Lakshminarayanan et al. (2017) used ensembling of several neural networks to obtain uncertainty estimates. Guo et al. (2017) established that temperature scaling provides well calibrated predictions on an i.i.d test set. More recently, van Amersfoort et al. (2020) showed that the distance from the centroids in a RBF neural network yields high quality uncertainty estimates. Liu et al. (2020) also leveraged the notion of distance (in this case, the distance from test to train examples) to obtain uncertainty estimates with their Spectral-normalized Neural Gaussian Processes.\nAssessments of Uncertainty Properties under Dataset Shift Ovadia et al. (2019) analyzed the effect of dataset shift on the accuracy and calibration of Bayesian deep learning methods. Their large scale empirical study assessed these methods on standard datasets such as MNIST, CIFAR-10, ImageNet, and other non-image based datasets. Additionally, they used translations, rotations, and corruptions (Hendrycks & Gimpel, 2017) of these datasets to quantify performance under dataset shift. They found stochastic variational inference (SVI) to be promising on simpler datasets such as MNIST and CIFAR-10, but more difficult to train on larger datasets. Deep ensembles had the most robust response to dataset shift.\nTheoretical Coverage Guarantees The Bernstein-von Mises theorem connects Bayesian credible sets and frequentist confidence intervals. Under certain conditions, Bayesian credible sets of level α are asymptotically frequentist confidence sets of level α and thus have the same coverage properties. However, when there is model misspecification, coverage properties no longer hold (Kleijn & van der Vaart, 2012).\nBarber et al. (2019) explored under what conditions conditional coverage guarantees can hold for arbitrary models (i.e. guarantees for P{yn ∈ Ĉn(x|x = xn)}, which are per sample guarantees). They show that even when these coverage properties are not desired to hold for any possible distribution, there are provably no methods that can give such guarantees. By extension, no Bayesian deep learning methods can provide conditional coverage guarantees." }, { "heading": "3 METHODS", "text": "In both the regression and classification settings, we analyzed the coverage properties of prediction intervals and sets of five different approximate Bayesian and non-Bayesian approaches for uncertainty quantification. These include Dropout (Gal & Ghahramani, 2016; Srivastava et al., 2015), ensembles (Lakshminarayanan et al., 2017), Stochastic Variational Inference (Blundell et al., 2015; Graves, 2011; Louizos & Welling, 2016; 2017; Wen et al., 2018), and last layer approximations of SVI and Dropout (Riquelme et al., 2019). Additionally, we considered prediction intervals from linear regression and the 95% credible interval of a Gaussian process with the squared exponential kernel as baselines in regression tasks. For classification, we also considered temperature scaling (Guo et al., 2017) and the softmax output of vanilla deep networks (Hendrycks & Gimpel, 2017)." }, { "heading": "3.1 REGRESSION METHODS AND METRICS", "text": "We evaluated the coverage properties of these methods on nine large real world regression datasets used as a benchmark in Hernández-Lobato & Adams (2015) and later Gal and Ghahramani (Gal & Ghahramani, 2016). 
We used the training, validation, and testing splits publicly available from Gal and Ghahramani and performed nested cross validation to find hyperparameters and evaluated coverage properties, defined as the fraction of prediction intervals which contained the true value in the test set. On the training sets, we did 100 trials of a random search over hyperparameter space of a multi-layer-perceptron architecture with an Adam optimizer (Kingma & Ba, 2015) and selected hyperparameters based on RMSE on the validation set.\nEach approach required slightly different ways to obtain a 95% prediction interval. For an ensemble of neural networks, we trained N = 40 vanilla networks and used the 2.5% and 97.5% quantiles as the boundaries of the prediction interval. For dropout and last layer dropout, we made 200 predictions per sample and similarly discarded the top and bottom 2.5% quantiles. For SVI, last layer SVI (LL SVI), and Gaussian processes we had approximate variances available for the posterior which we used to calculate the prediction interval. We calculated 95% prediction intervals from linear regression using the closed-form solution.\nThen we calculated two metrics: • Coverage: A sample is considered covered if the true label is contained in this 95% predic-\ntion interval. We average over all samples in a test set to estimate the coverage of a method on this dataset.\n• Width: The width is the average over the test set of the ranges of the 95% prediction intervals.\nCoverage measures how often the true label is in the prediction region while width measures how specific that prediction region is. Ideally, we would have high levels of coverage with low levels of width on in-distribution data. As data becomes increasingly out of distribution, we would like coverage to remain high while width increases to indicate model uncertainty." }, { "heading": "3.2 CLASSIFICATION METHODS AND METRICS", "text": "Ovadia et al. (2019) evaluated model uncertainty on a variety of datasets publicly available. These predictions were made with the five apprxoimate Bayesian methods describe above, plus vanilla neural networks, with and without temperature scaling. We focus on the predictions from MNIST, CIFAR-10, CIFAR-10-C, ImageNet, and ImageNet-C datasets. For MNIST, Ovadia et al. (2019) measured model predictions on rotated and translated versions of the test set. For CIFAR-10, Ovadia et al. (2019) measured model predictions on translated and corrupted versions of the test set. For ImageNet, Ovadia et al. (2019) only analyzed model predictions on the corrupted images of ImageNet-C. Each of these transformations (rotation, translation, or any of the 16 corruptions) has multiple levels of shift. Rotations range from 15 to 180 degrees in 15 degrees increments. Translations shift images every 2 and 4 pixels for MNIST and CIFAR-10, respectively. Corruptions have 5 increasing levels of intensity. Figure 2 shows the effects of the 16 corruptions in CIFAR-10-C at the first, third, and fifth levels of intensity.\nWe calculate the prediction set of a model’s output. Given α ∈ (0, 1), the 1− α prediction set S for a sample xi is the minimum sized set of classes such that∑\nc∈S p(yc|xi) ≥ 1− α (2)\nThis consists of the top ki probabilities such that 1− α probability has been accumulated. 
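As an illustration, the quantile-based 95% prediction intervals of Section 3.1 and the 1 − α prediction sets of Eq. (2) can be evaluated with the following minimal sketch; the function names are ours, and for methods with an analytic predictive mean and variance (GP, SVI, LL SVI) the interval endpoints would instead come from the Gaussian quantiles (e.g. mean ± 1.96 standard deviations).

import numpy as np

def regression_coverage_width(samples, y_true, alpha=0.05):
    # samples: array of shape (num_draws, num_points), e.g. ensemble members
    # or Monte Carlo dropout predictions for each test point.
    lower = np.quantile(samples, alpha / 2, axis=0)
    upper = np.quantile(samples, 1 - alpha / 2, axis=0)
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean(), (upper - lower).mean()

def prediction_set(probs, alpha=0.05):
    # Smallest set of classes whose total probability reaches 1 - alpha (Eq. (2)).
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, 1.0 - alpha)) + 1
    return order[:k]

def classification_coverage_width(probs, y_true, alpha=0.05):
    # probs: array of shape (num_points, num_classes); y_true: integer labels.
    sets = [prediction_set(p, alpha) for p in probs]
    coverage = np.mean([y in s for s, y in zip(sets, y_true)])
    width = np.mean([len(s) for s in sets])
    return coverage, width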
Then we can define:\n• Coverage: For each dataset point, we calculate the 1− α prediction set of the label probabilities, then coverage is what fraction of these prediction sets contain the true label.\n• Width: The width of a prediction set is simply the number of labels in the set, |S|. We report the average width of prediction sets over a dataset in our figures.\nAlthough both calibration (Guo et al., 2017) and coverage can involve a probability over a model’s output, calibration only considers the most likely label and it’s corresponding probability, while coverage considers the the top-ki probabilities. In the classification setting, coverage is more robust to label errors as it does not penalize models for putting probability on similar classes." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 REGRESSION", "text": "Table 1 shows the mean and standard error of coverage levels for the methods we evaluated. In the regression setting, we find high levels of coverage for linear regression, Gaussian processes, SVI, and LL SVI. Ensembles and Dropout had lower levels of coverage, while LL Dropout had the lowest average coverage. Table 2 reports the average width of the 95% prediction interval in terms of standard deviations of the response variable. We see that higher coverage correlates with a higher average width.\nDataset | Method Linear Regression GP Ensemble Dropout LL Dropout SVI LL SVI\nBoston Housing 0.9461 (5.61e-03) 0.9765 (5.05e-03) 0.5912 (1.43e-02) 0.602 (1.64e-02) 0.1902 (2.01e-02) 0.9434 (6.04e-03) 0.9339 (8.48e-03) Concrete 0.9437 (2.68e-03) 0.967 (3.02e-03) 0.5854 (1.04e-02) 0.7282 (1.17e-02) 0.0932 (1.75e-02) 0.9581 (3.61e-03) 0.9443 (6.72e-03) Energy 0.8957 (4.66e-03) 0.8857 (6.96e-03) 0.8669 (5.26e-03) 0.8013 (2.00e-02) 0.2597 (2.75e-02) 0.9773 (3.02e-03) 0.9938 (2.99e-03) Kin8nm 0.9514 (1.20e-03) 0.9705 (1.53e-03) 0.6706 (4.43e-03) 0.8037 (8.15e-03) 0.1984 (1.36e-02) 0.9618 (2.63e-03) 0.9633 (1.36e-03) Naval Propulsion Plant 0.9373 (1.59e-03) 0.9994 (2.12e-04) 0.8036 (5.99e-03) 0.9212 (6.76e-03) 0.2683 (2.51e-02) 0.9797 (1.88e-03) 0.9941 (1.25e-03) Power Plant 0.9646 (1.14e-03) 0.9614 (1.26e-03) 0.4008 (1.12e-02) 0.432 (1.47e-02) 0.1138 (1.41e-02) 0.9626 (1.13e-03) 0.9623 (1.60e-03) Protein Tertiary Structure 0.9619 (4.71e-04) 0.959 (4.72E-04) 0.4125 (2.98e-03) 0.3846 (1.36e-02) 0.1182 (1.35e-02) 0.9609 (2.27e-03) 0.9559 (1.72e-03) Wine Quality Red 0.9425 (2.32e-03) 0.9472 (3.28e-03) 0.3919 (1.18e-02) 0.3566 (1.83e-02) 0.1616 (7.45e-03) 0.9059 (8.19e-03) 0.8647 (8.77e-03) Yacht Hydrodynamics 0.9449 (7.86e-03) 0.9726 (6.73e-03) 0.9161 (7.38e-03) 0.3871 (2.82e-02) 0.2081 (2.54e-02) 0.9807 (6.97e-03) 0.9899 (6.03e-03)\nTable 1: The average coverage of six methods across nine datasets with the standard error over 20 cross validation folds in parentheses.\nDataset | Method Linear Regression GP Ensemble Dropout LL Dropout SVI LL SVI\nBoston Housing 2.0424 (6.87E-03) 1.8716 (1.17E-02) 0.4432 (7.82E-03) 0.6882 (2.19E-02) 0.1855 (2.05E-02) 1.301 (2.56E-02) 1.148 (2.36E-02) Concrete 2.4562 (2.22E-03) 2 (3.32E-03) 0.4776 (9.03E-03) 1.0342 (1.79E-02) 0.1028 (2.04E-02) 1.5116 (1.72E-02) 1.2293 (1.41E-02) Energy 1.144 (2.29E-03) 1.0773 (2.64E-03) 0.2394 (2.56E-03) 0.5928 (1.22E-02) 0.1417 (1.61E-02) 0.8426 (1.73E-02) 0.7974 (1.95E-02) Kin8nm 3.0039 (9.76E-04) 2.3795 (7.02E-03) 0.5493 (2.37E-03) 1.2355 (1.37E-02) 0.2024 (1.22E-02) 1.6697 (7.75E-03) 1.2624 (2.99E-03) Naval Propulsion Plant 1.5551 (7.12E-04) 0.3403 (1.00E-02) 0.6048 (4.86E-03) 1.1593 (6.45E-03) 0.2281 
(1.83E-02) 1.3064 (1.38E-01) 0.488 (5.44E-03) Power Plant 1.0475 (7.09E-04) 0.9768 (9.63E-04) 0.2494 (6.72E-03) 0.3385 (1.69E-02) 0.0918 (9.06E-03) 1.0035 (1.88E-03) 0.9818 (3.64E-03) Protein Tertiary Structure 3.3182 (3.21E-04) 3.2123 (3.47E-03) 0.6804 (3.77E-03) 0.9144 (1.41E-02) 0.3454 (1.99E-02) 2.9535 (3.82E-02) 2.6506 (2.20E-02) Wine Quality Red 3.1573 (1.82E-03) 3.1629 (4.07E-03) 0.7763 (1.31E-02) 0.7841 (2.91E-02) 0.3481 (1.61E-02) 2.7469 (2.72E-02) 2.3597 (2.70E-02) Yacht Hydrodynamics 2.3636 (2.89E-03) 1.6974 (7.57E-03) 0.4475 (9.76E-03) 0.5443 (2.22E-02) 0.1081 (9.83E-03) 0.657 (3.54E-02) 0.69 (3.81E-02)\nTable 2: The average width of the posterior prediction interval of six methods across nine datasets with the standard error over 20 cross validation folds in parentheses. Width is reported in terms of standard deviations of the response variable in the training set.\nDataset | Method Linear Regression GP Ensemble Dropout LL Dropout SVI LL SVI Boston Housing 4.0582 (1.22E-01) 3.5397 (2.30E-01) 3.1484 (1.31E-01) 4.9654 (1.27E-01) 3.6281 (1.61E-01) 3.148 (1.97E-01) 3.4223 (1.93E-01) Concrete 7.6025 (1.12E-01) 7.8245 (1.12E-01) 5.8107 (6.18E-02) 10.3653 (9.60E-02) 6.5621 (1.20E-01) 5.6109 (1.47E-01) 6.4618 (1.26E-01) Energy 2.3029 (6.63E-02) 2.7454 (5.68E-02) 1.0912 (1.51E-02) 3.172 (6.01E-02) 1.5211 (6.17E-02) 1.2781 (6.84E-02) 2.7032 (6.47E-02) Kin8nm 0.1199 (8.05E-04) 0.1366 (1.08E-03) 0.0855 (2.72E-04) 0.2027 (5.69E-04) 0.0984 (1.51E-03) 0.0816 (5.40E-04) 0.1091 (1.47E-03) Naval Propulsion Plant 0.0054 (4.94E-05) 6E-04 (3.43E-05) 0.0041 (3.08E-05) 0.0059 (2.29E-05) 0.006 (6.03E-04) 0.0012 (4.25E-05) 0.0042 (5.44E-04) Power Plant 4.5639 (5.67E-02) 4.2551 (3.42E-02) 4.2952 (2.62E-02) 4.5793 (2.68E-02) 4.7594 (6.31E-02) 4.1983 (4.68E-02) 4.2903 (3.56E-02) Protein Tertiary Structure 4.514 (1.06E-02) 4.9695 (7.61E-03) 4.2398 (7.03E-03) 5.2182 (6.82E-03) 4.5219 (2.78E-02) 4.3824 (1.93E-02) 4.6458 (2.94E-02) Wine Quality Red 0.6654 (6.56E-03) 0.6432 (7.79E-03) 0.643 (6.41E-03) 0.664 (5.40E-03) 0.6555 (6.69E-03) 0.647 (8.95E-03) 0.6762 (1.43E-02) Yacht Hydrodynamics 4.4647 (1.40E-01) 4.7585 (1.92E-01) 3.1594 (8.95E-02) 9.4761 (2.64E-01) 2.5562 (2.47E-01) 2.765 (1.66E-01) 2.6775 (9.06E-02)\nTable 3: The average RMSE of six methods across nine datasets with the standard error over 20 cross validation folds in parentheses. These values are comparable to other reported in the literature for these benchmarks (Gal & Ghahramani, 2016), though the intention was not to produce state of the art results, but merely demonstrate the models were trained in a reasonable manner." }, { "heading": "4.2 MNIST", "text": "We begin by calculating coverage and width for predictions from Ovadia et al. (2019) on MNIST and shifted MNIST data. Ovadia et al. (2019) used a LeNet architecture and we refer to their manuscript for more details on their implementation.\nFigure 3 shows how coverage and width co-vary as dataset shift increases. We observe high coverage and low width for all models on training, validation, and non-shifted test set data. The elevated width for SVI on these dataset splits indicate that the posterior predictions of label probabilities were the most diffuse to begin with among all models. In Figure 3, all seven models have at least 0.95 coverage with a 15 degree rotation shift. Most models don’t see an appreciable increase in the average width of the 0.95 prediction set, except for SVI. The average width for SVI jumps to over 2 at 15 degrees rotation. 
As the amount of shift increases, coverage decreases across all methods in a comparable way. SVI maintains higher levels of coverage, but with a compensatory increase in width.\nIn Figure 3, we observe the same coverage-width pattern at the lowest level of shift, 2 pixels. All methods have at least 0.95 coverage, but only SVI has a distinct jump in the average width of its prediction set. The average width of the prediction set increases slightly then plateaus for all methods but SVI as the amount of translation increases.\nFor this simple dataset, SVI outperforms other models with regards to coverage and width properties. It is the only model that has an average width that corresponds to the amount of shift observed.\nMethod Mean Test SetCoverage (SE) Mean Test Set Width (SE)\nMean Rotation Shift Coverage (SE) Mean Rotation Shift Width (SE) Mean Translation Shift Coverage (SE) Mean Translation Shift Width (SE)\nDropout 0.9987 (6.32E-05) 1.06 (1.38E-04) 0.5519 (2.91E-02) 2.3279 (6.64E-02) 0.5333 (3.54E-02) 2.3527 (6.34E-02) Ensemble 0.9984 (7.07E-05) 1.0424 (2.07E-04) 0.5157 (3.11E-02) 2.0892 (5.44E-02) 0.5424 (3.33E-02) 2.3276 (6.66E-02) LL Dropout 0.9985 (1.05E-04) 1.0561 (1.89E-03) 0.552 (2.93E-02) 2.3162 (6.73E-02) 0.5388 (3.52E-02) 2.3658 (6.66E-02) LL SVI 0.9984 (1.14E-04) 1.0637 (1.65E-03) 0.5746 (2.77E-02) 2.6324 (8.41E-02) 0.535 (3.51E-02) 2.3294 (6.46E-02) SVI 0.9997 (7.35E-05) 1.5492 (2.19E-02) 0.7148 (2.06E-02) 4.8549 (1.44E-01) 0.754 (1.96E-02) 5.6803 (1.99E-01) Temp scaling 0.9986 (1.36E-04) 1.0642 (1.98E-03) 0.5243 (3.10E-02) 2.2683 (6.17E-02) 0.5375 (3.33E-02) 2.347 (6.21E-02) Vanilla 0.9972 (1.16E-04) 1.032 (9.06E-04) 0.4715 (3.28E-02) 1.7492 (3.78E-02) 0.4798 (3.50E-02) 1.801 (3.84E-02)\nTable 4: MNIST average coverage and width for the test set, rotation shift, and translation shift.\nFigure 3: The effect of rotation and translation on coverage and width, respectively, for MNIST." }, { "heading": "4.3 CIFAR-10", "text": "Next, we consider a more complex image dataset, CIFAR-10. Ovadia et al. (2019) trained 20 layer and 50 layer ResNets. Figure 4 shows how all seven models have high coverage levels over all translation shifts. Temperature scaling and ensemble, in particular, have at least 0.95 coverage for every translation. We find that this high coverage comes with increases in width as shift increases. Figure 4 shows that temperature scaling has the highest average width across all models and shifts. All models have the same pattern of width increases, with peak average widths at 16 pixels translation.\nBetween the models which satisfy 0.95 coverage levels on all shifts, ensemble models have lower width than temperature scaling models. Under translation shifts on CIFAR-10, ensemble methods perform the best given their high coverage and lower width.\nAdditionally, we consider the coverage properties of models on 16 different corruptions of CIFAR10 from Hendrycks and Gimpel (Hendrycks & Gimpel, 2017). Figure 5 shows coverage vs. width over varying levels of shift intensity. Models that have more dispersed points to the right have higher\nwidths for the same level of coverage. An ideal model would have a cluster of points above the 0.95 coverage line and be far to the left portion of each facet. 
For models that have similar levels of coverage, the superior method will have points further to the left.\nFigure 5 demonstrates that at the lowest shift intensity, ensemble models, dropout, temperature scaling, and SVI were able to generally provide high levels of coverage on most corruption types. However, as the intensity of the shift increases, coverage decreases. Ensembles and dropout models have for at least half of their 80 model-corruption evaluations at least 0.95 coverage up to the third intensity level. At higher levels of shift intensity, ensembles, dropout, and temperature scaling consistently have the highest levels of coverage. Although these higher performing methods have similar levels of coverage, they have different widths. See Figure A1 for a further examination of the coverage and widths of these methods.\nMethod Mean Test Set Coverage (SE) Mean Test Set Width (SE) Mean Corruption Coverage (SE) Mean Corruption Width (SE)" }, { "heading": "4.4 IMAGENET", "text": "Finally, we analyze coverage and width on ImageNet and ImageNet-C from Hendrycks & Gimpel (2017). Figure 6 shows similar coverage vs. width plots to Figure 5. We find that over the 16 different corruptions at 5 levels, ensembles, temperature scaling, and dropout models had consistently higher levels of coverage. Unsurprisingly, Figure 6 shows that these methods have correspondingly higher widths. At the first three levels of corruption, ensembling has the lowest level of width of the top performing methods (see Figure A2). However, at the highest two levels of corruption, dropout has lower width than ensembling. None of the methods have a commensurate increase in width to maintain the 0.95 coverage levels seen on in-distribution test data as dataset shift increases.\nMethod Mean Test Set Coverage Mean Test Set Width Mean Corruption Coverage (SE) Mean Corruption Width (SE)" }, { "heading": "5 DISCUSSION", "text": "We have provided the first comprehensive empirical study of the frequentist-style coverage properties of popular uncertainty quantification techniques for deep learning models. In regression tasks, Gaussian Processes were the clear winner in terms of coverage across nearly all benchmarks, with smaller widths than linear regression, whose prediction intervals come with formal guarantees. SVI and LL SVI also had excellent coverage properties across most tasks with tighter intervals than GPs and linear regression. In contrast, the methods based on ensembles and Monte Carlo dropout had significantly worse coverage due to their overly confident and tight prediction intervals. Another interesting finding is that despite higher levels of uncertainty (e.g. larger widths), SVI was also the most accurate model based on RMSE as reported in Table 3.\nIn the classification setting, all methods showed very high coverage in the i.i.d setting (i.e. no dataset shift), as coverage is reflective of top-1 accuracy in this scenario. On MNIST data, SVI had the best performance, maintaining high levels of coverage under slight dataset shift and scaling the width of its prediction intervals more appropriately as shift increased relative to other methods. On CIFAR10 data, ensemble models were superior. They had the highest levels of coverage at the third of five intensity levels on CIFAR-10-C data, while have lower width than the next best method, temperature scaling. Dropout and SVI also had slightly worse coverage levels, but lower widths as well. 
Last layer dropout and last layer SVI performed poorly, oftentimes having lower coverage than vanilla neural networks.\nIn summary, we find that popular uncertainty quantification methods for deep learning models do not provide good coverage properties under moderate levels of datset shift. Although the width of prediction regions do increase under increasing amounts of shift, these changes are not enough to maintain the levels of coverage seen on i.i.d data. We conclude that the methods we evaluated for uncertainty quantification are likely insufficient for use in high-stakes, real-world applications where dataset shift is likely to occur." }, { "heading": "A APPENDIX", "text": "Code Availability The code and data to reproduce our results will be made available after the anonymous review period.\nMethod Mean Test SetCoverage (SE) Mean Test Set Width (SE)\nMean Translation Shift Coverage (SE) Mean Translation Shift Width (SE)\nDropout 0.9883 (3.79E-04) 1.5778 (2.68E-03) 0.9696 (2.48E-03) 2.0709 (5.11E-02) Ensemble 0.9922 (3.08E-04) 1.4925 (1.52E-03) 0.9806 (1.65E-03) 1.9246 (4.49E-02) LL Dropout 0.9628 (1.40E-03) 1.3007 (3.99E-03) 0.9184 (5.59E-03) 1.6678 (4.16E-02) LL SVI 0.9677 (1.10E-03) 1.2585 (2.60E-03) 0.929 (4.55E-03) 1.5044 (2.61E-02) SVI 0.9789 (6.41E-04) 1.5579 (6.31E-03) 0.9543 (2.89E-03) 1.9286 (3.69E-02) Temp scaling 0.9871 (3.51E-04) 1.5987 (1.19E-02) 0.9707 (1.97E-03) 2.1266 (5.30E-02) Vanilla 0.9686 (6.06E-04) 1.2611 (3.90E-03) 0.9296 (4.36E-03) 1.5064 (2.58E-02)\nTable A1: CIFAR-10 average coverage and width for the test set and translation shift.\n(a) CIFAR-10 coverage (b) CIFAR-10 width\nFigure A1: The effect of corruption intensity on coverage levels in CIFAR-10. This is averaged over 16 different corruption types. As shift intensity increases, coverage decreases and width increases. In general, Dropout, ensembling, and temperature scaling have the highest levels of coverage across corruptions levels.\n(a) ImageNet coverage (b) ImageNet width\nFigure A2: The effect of corruption intensity on coverage levels in ImageNet. This is averaged over 16 different corruption types. As shift intensity increases, coverage decreases and width increases. In general, Dropout, ensembling, and temperature scaling have the highest levels of coverage across corruptions levels." } ]
2020
DEEP LEARNING UNCERTAINTY QUANTIFICATION PROCEDURES
SP:74f12645ba675ccd4217ebfc0579cb4232406009
[ "This paper proposes a general framework for boosting CNNs performance on different tasks by using'commentary' to learn meta-information. The obtained meta-information can also be used for other purposes such as the mask of objects within spurious background and the similarities among classes. The commentary module would be incorporated into standard networks and be iteratively optimized with the host via the proposed objective. To effectively optimize both the commentary and the standard network, this paper adopts the techniques including implicit function theorem and efficient inverse Hessian approximations. " ]
Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, learned meta-information helpful for training on a particular task. We present gradient-based methods to learn commentaries, leveraging recent work on implicit differentiation for scalability. We explore diverse applications of commentaries, from weighting training examples, to parameterising label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. We find that commentaries can improve training speed and/or performance, and provide insights about the dataset and training process. We also observe that commentaries generalise: they can be reused when training new models to obtain performance benefits, suggesting a use-case where commentaries are stored with a dataset and leveraged in future for improved model training.
[ { "affiliations": [], "name": "Aniruddh Raghu" }, { "affiliations": [], "name": "Maithra Raghu" }, { "affiliations": [], "name": "Simon Kornblith" } ]
[ { "authors": [ "M.A. Badgeley", "J.R. Zech", "L. Oakden-Rayner", "B.S. Glicksberg", "M. Liu", "W. Gale", "M.V. McConnell", "B. Percha", "T.M. Snyder", "J.T. Dudley" ], "title": "Deep learning predicts hip fracture using confounding patient and healthcare variables", "venue": "NPJ digital medicine,", "year": 2019 }, { "authors": [ "D. Bau", "B. Zhou", "A. Khosla", "A. Oliva", "A. Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Y. Bengio", "J. Louradour", "R. Collobert", "J. Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Y. Fan", "F. Tian", "T. Qin", "X.-Y. Li", "T.-Y. Liu" ], "title": "Learning to teach", "venue": "arXiv preprint arXiv:1805.03643,", "year": 2018 }, { "authors": [ "Y. Fan", "Y. Xia", "L. Wu", "S. Xie", "W. Liu", "J. Bian", "T. Qin", "X.-Y. Li", "T.-Y. Liu" ], "title": "Learning to teach with deep interactions", "venue": null, "year": 2007 }, { "authors": [ "C. Finn", "P. Abbeel", "S. Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "E. Grefenstette", "B. Amos", "D. Yarats", "P.M. Htut", "A. Molchanov", "F. Meier", "D. Kiela", "K. Cho", "S. Chintala" ], "title": "Generalized inner loop meta-learning", "venue": null, "year": 1910 }, { "authors": [ "R. Hataya", "J. Zdenek", "K. Yoshizoe", "H. Nakayama" ], "title": "Meta approach to data augmentation optimization", "venue": "arXiv preprint arXiv:2006.07965,", "year": 2020 }, { "authors": [ "Z. Hu", "B. Tan", "R.R. Salakhutdinov", "T.M. Mitchell", "E.P. Xing" ], "title": "Learning data manipulation for augmentation and weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "J. Irvin", "P. Rajpurkar", "M. Ko", "Y. Yu", "S. Ciurea-Ilcus", "C. Chute", "H. Marklund", "B. Haghgoo", "R. Ball", "K. Shpanskaya" ], "title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "L. Jiang", "Z. Zhou", "T. Leung", "L.-J. Li", "L. Fei-Fei" ], "title": "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "P.W. Koh", "T. Nguyen", "Y.S. Tang", "S. Mussmann", "E. Pierson", "B. Kim", "P. Liang" ], "title": "Concept bottleneck models", "venue": "arXiv preprint arXiv:2007.04612,", "year": 2020 }, { "authors": [ "S. Kornblith", "J. Shlens", "Q.V. Le" ], "title": "Do better imagenet models transfer better", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "H.B. Lee", "H. Lee", "D. Na", "S. Kim", "M. Park", "E. Yang", "S.J. Hwang" ], "title": "Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "S. Liu", "A. Davison", "E. Johns" ], "title": "Self-supervised generalisation with meta auxiliary learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "W. Liu", "B. Dai", "A. 
Humayun", "C. Tay", "C. Yu", "L.B. Smith", "J.M. Rehg", "L. Song" ], "title": "Iterative machine teaching", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Z. Liu", "P. Luo", "X. Wang", "X. Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "L. Long" ], "title": "Maml-pytorch implementation. https://github.com/dragen1860/ MAML-Pytorch, 2018", "venue": null, "year": 2018 }, { "authors": [ "J. Lorraine", "D. Duvenaud" ], "title": "Stochastic hyperparameter optimization through hypernetworks", "venue": "arXiv preprint arXiv:1802.09419,", "year": 2018 }, { "authors": [ "J. Lorraine", "P. Vicol", "D. Duvenaud" ], "title": "Optimizing millions of hyperparameters by implicit differentiation", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "M. MacKay", "P. Vicol", "J. Lorraine", "D. Duvenaud", "R. Grosse" ], "title": "Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "D. Maclaurin", "D. Duvenaud", "R. Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "A. Navon", "I. Achituve", "H. Maron", "G. Chechik", "E. Fetaya" ], "title": "Auxiliary learning by implicit differentiation", "venue": null, "year": 2020 }, { "authors": [ "A. Raghu", "M. Raghu", "S. Bengio", "O. Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of maml", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "M. Raghu", "C. Zhang", "J. Kleinberg", "S. Bengio" ], "title": "Transfusion: Understanding transfer learning for medical imaging", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "M. Ren", "W. Zeng", "B. Yang", "R. Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "arXiv preprint arXiv:1803.09050,", "year": 2018 }, { "authors": [ "O. Ronneberger", "P. Fischer", "T. Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computer-assisted intervention,", "year": 2015 }, { "authors": [ "J. Shu", "Q. Xie", "L. Yi", "Q. Zhao", "S. Zhou", "Z. Xu", "D. Meng" ], "title": "Meta-weight-net: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "S. Suwajanakorn", "N. Snavely", "J.J. Tompson", "M. Norouzi" ], "title": "Discovery of latent 3d keypoints via end-to-end geometric reasoning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. Perona" ], "title": "Caltech-ucsd birds", "venue": null, "year": 2010 }, { "authors": [ "J.K. Winkler", "C. Fink", "F. Toberer", "A. Enk", "T. Deinlein", "R. 
Hofmann-Wellenhof", "L. Thomas", "A. Lallas", "A. Blum", "W. Stolz" ], "title": "Association between surgical skin markings in dermoscopic images and diagnostic performance of a deep learning convolutional neural network for melanoma recognition", "venue": "JAMA dermatology,", "year": 2019 }, { "authors": [ "L. Wu", "F. Tian", "Y. Xia", "Y. Fan", "T. Qin", "L. Jian-Huang", "T.-Y. Liu" ], "title": "Learning to teach with dynamic loss functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "C. Zhang", "S. Bengio", "M. Hardt", "B. Recht", "O. Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "H. Zhang", "M. Cisse", "Y.N. Dauphin", "D. Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "B. Zhou", "A. Lapedriza", "A. Khosla", "A. Oliva", "A. Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "X. Zhu" ], "title": "Machine teaching: An inverse problem to machine learning and an approach toward optimal education", "venue": null, "year": 2015 }, { "authors": [ "Koh" ], "title": "The student network for this study was pretrained on ImageNet", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Training, regularising, and understanding complex neural network models is challenging. There remain central open questions on making training faster and more data-efficient (Kornblith et al., 2019; Raghu et al., 2019a;b), ensuring better generalisation (Zhang et al., 2016) and improving transparency and robustness (Bau et al., 2017; Madry et al., 2017). A promising approach for addressing these questions is learning to teach (Zhu, 2015), in which learned auxiliary information about a task is provided to a neural network to inform the training process and help downstream objectives. Examples include providing auxiliary training targets (Liu et al., 2019; Navon et al., 2020; Pham et al., 2020) and reweighting training examples to emphasise important datapoints (Fan et al., 2020; Jiang et al., 2018; Ren et al., 2018; Shu et al., 2019).\nLearning to teach approaches have achieved promising results in vision and language applications (Jiang et al., 2018; Ren et al., 2018; Shu et al., 2019; Hu et al., 2019) using a handful of specific modifications to the training process. In this paper, we take steps towards generalising these approaches, introducing a flexible and effective learning to teach framework using commentaries. Commentaries represent learned meta-information helpful for training a model on a task, and once learned, such commentaries can be reused as is to improve the training of new models. We demonstrate that commentaries can be used for applications ranging from speeding up training to gaining insights into the neural network model. Specifically, our contributions are:\n1. We formalise the notion of commentaries, providing a unified framework for learning metainformation that can be used to improve network training and examine model learning. 2. We present gradient-based methods to learn commentaries by optimising a network’s validation loss, leveraging recent work in implicit differentiation to scale to larger models. 3. We use commentaries to define example-weighting curricula, a common method of teaching neural networks. We show that these learned commentaries hold interpretable insights, lead to speedups in training, and improve performance on few-shot learning tasks. ∗Work done while interning at Google.\n4. We define data augmentation policies with label-dependent commentaries, and obtain insights into the design of effective augmentation strategies and improved performance on benchmark tasks as compared to baselines. 5. We parameterise commentaries as attention masks to find important regions of images. Through qualitative and quantitative evaluation, we show these masks identify salient image regions and can be used to improve the robustness of neural networks to spurious background correlations. 6. We show that learned commentaries can generalise: when training new models, reusing learned commentaries can lead to learning speed/performance improvements. This suggests a use-case for commentaries: being stored with a dataset and leveraged to improve training of new models." }, { "heading": "2 TEACHING WITH COMMENTARIES", "text": "Definition: We define a commentary to be learned information helpful for (i) training a model on a task or (ii) providing insights on the learning process. We envision that commentaries, once learned, could be stored alongside a dataset and reused as is to assist in the training of new models. 
Appendix A explores a simple instantiation of commentaries for Celeb-A (Liu et al., 2015), to provide intuition of the structures that commentaries can encode.\nFormally, let t(x, y, i;φ) denote a commentary that is a function of a data point x, prediction target y, and iteration of training i, with parameters φ. The commentary may be represented in a tabular fashion for every combination of input arguments, or using a neural network that takes these arguments as inputs. The commentary is used to train a student network n(x; θ) with parameters θ." }, { "heading": "2.1 LEARNING COMMENTARIES", "text": "We now describe algorithms to learn commentaries 1. Throughout, we denote the training set as DT , the validation set as DV and the loss function (e.g. cross-entropy) as L. With θ denoting the parameters of the student network and φ denoting the commentary parameters, we let θ̂, φ̂ be the respective optimised parameters. We seek to find φ̂ such that the student network’s validation loss, LV , is minimised. As the commentary is used during the training of the student network, LV implicitly depends on φ, enabling the use of gradient-based optimisation algorithms to find φ̂.\nAlgorithm 1: Backpropagation Through Training: When student network training has a small memory footprint, we optimise commentary parameters by iterating the following process, detailed in Algorithm 1: (1) train a student and store the computation graph during training; (2) compute the student’s validation loss; (3) calculate the gradient of this loss w.r.t. the commentary parameters by backpropagating through training; (4) update commentary parameters using gradient descent.\nBy optimizing the commentary parameters over the entire trajectory of student learning, we encourage this commentary to be effective when used in the training of new student networks. This supports the goal of the commentary being stored with the dataset and reused in future model learning.\nAlgorithm 2: Large-Scale Commentary Learning with Implicit Differentiation: When training the student model has a large memory footprint, backpropagating through training to obtain exact commentary parameter gradients is too memory expensive. We therefore leverage the Implicit Function Theorem (IFT) and efficient inverse Hessian approximation to obtain approximate gradients, following Lorraine et al. (2020).\nThe gradient of the validation loss w.r.t. the commentary parameters can be expressed as:\n∂LV ∂φ = ∂LV ∂θ̂ × ∂θ̂ ∂φ . (3)\nThe first term on the right hand side in equation 3 is simple to compute, but the second term is expensive. Under fixed-point and regularity assumptions on student and commentary parameters( θ̂(φ), φ ) , the IFT allows expressing this second term ∂θ̂∂φ as the following product:\n∂θ̂ ∂φ = − [ ∂2LT ∂θ ∂θT ]−1 × ∂ 2LT ∂θ ∂φT ∣∣∣ θ̂(φ) , (4)\n1Code at https://github.com/googleinterns/commentaries\nAlgorithm 1 Commentary Learning through Backpropagation Through Training. 1: Initialise commentary parameters φ 2: for t = 1, . . . , T meta-training steps do 3: Initialise student network n(x; θ) with parameters θ0 4: Train student network with N steps of gradient descent to optimise:\nLT (θ, φ) = Ex,y∼DT [ L̃ (n (x; θ) , t (· ;φ) , y) ] , (1)\nwhere L̃ is a loss function adjusted from L to incorporate the commentary, and LT (θ, φ) is the expected adjusted loss over the training data. 
Output: θ̂, the optimised parameters of student network (implicitly a function of φ, θ̂(φ)).\n5: Compute validation loss: LV (φ) = Ex,y∼DV [ L ( n(x; θ̂ (φ)), y )] (2)\n6: Compute ∂LV (φ)∂φ , by backpropagating through the N steps of student training, and update φ. 7: end for 8: Output: φ̂, the optimised parameters of the commentary.\nAlgorithm 2 Commentary Learning through Implicit Differentiation. 1: Initialise commentary parameters φ and student network parameters θ 2: for t = 1, . . . ,M do 3: Compute the student network’s training loss, LT (θ, φ), equation 1. 4: Compute the gradient of this loss w.r.t the student parameters θ. 5: Perform a single gradient descent update on the parameters to obtain θ̂ (implicitly a function\nof φ, θ̂(φ)). 6: Compute the student network’s validation loss, LV (φ), equation 2. 7: Compute ∂LV\n∂θ̂ .\n8: Approximately compute ∂θ̂∂φ with equation 4, using a truncated Neumann series with a single term and implicit vector-Jacobian products (Lorraine et al., 2020). 9: Compute the overall derivative ∂LV∂φ using steps (7) and (8), and update φ.\n10: Set θ ← θ̂. 11: end for 12: Output: φ̂, the optimised parameters of the commentary.\ni.e., a product of an inverse Hessian and a matrix of mixed partial derivatives. Following Lorraine et al. (2020), we efficiently approximate this product using a truncated Neumann series and implicit vector-Jacobian products. Leveraging this approximation then yields a second method for commentary learning, described in Algorithm 2. Since a single term in the Neumann series is sufficient for learning, each iteration of this algorithm has similar time complexity to a single iteration of training.\nIn this method, commentary parameters are learned jointly with student parameters, avoiding training a single student model multiple times. This approach therefore scales to millions of commentary parameters and large student models (Hataya et al., 2020; Lorraine et al., 2020). However, since the commentary is not directly optimised over the entire trajectory of learning, its generalisability to new models is not ensured. We examine this in our experiments, demonstrating that commentaries learned in this manner can indeed generalise to training new student networks." }, { "heading": "3 COMMENTARIES FOR EXAMPLE WEIGHTING CURRICULA", "text": "We now explore our first main application of commentaries: encoding a separate weight for each training example at each training iteration. Since the commentaries are a function of the training iteration, they can encode curriculum structure, so we refer to them as curriculum commentaries.\nWe specify these weights using a commentary neural network (or teacher network) t(x, i;φ) → [0, 1] that produces a weight for every training example at every iteration of training of the student network. When training a student network, using the notation of §2.1, the commentary is incorporated in the training loss as: L̃ = t(x, i;φ) · L ( n(x; θ), y ) , where L(·) is the original loss function for the task. The validation loss is unweighted." }, { "heading": "3.1 SYNTHETIC EXAMPLE: ROTATED MNIST DIGITS", "text": "We first learn example weight curriculum commentaries on a synthetic MNIST binary classification problem. Each example in the dataset is a rotated MNIST digit ‘1’, with variable rotation angle that defines the class. We generate two datasets: the non-overlapping dataset and the overlapping dataset. 
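For the example-weighting instantiation used in this section, Algorithm 1 can be sketched with the higher library (which the experiments below rely on) as follows. This is a schematic sketch rather than the authors' implementation: the commentary signature, optimiser choices, and data handling are assumptions, and the weighted loss follows L̃ = t(x, i; φ)·L with an unweighted validation loss.

```python
import itertools
import torch
import torch.nn.functional as F
import higher  # differentiable inner-loop optimisation

def learn_commentary(commentary, make_student, train_loader, val_loader,
                     inner_steps=500, meta_steps=20, inner_lr=1e-4, outer_lr=1e-3):
    """Sketch of Algorithm 1: backpropagate the validation loss through student training."""
    meta_opt = torch.optim.Adam(commentary.parameters(), lr=outer_lr)
    for _ in range(meta_steps):
        student = make_student()                       # fresh student each meta-step
        inner_opt = torch.optim.Adam(student.parameters(), lr=inner_lr)
        meta_opt.zero_grad()
        data = itertools.cycle(train_loader)
        with higher.innerloop_ctx(student, inner_opt,
                                  copy_initial_weights=True) as (fstudent, diffopt):
            for i in range(inner_steps):
                x, y = next(data)
                per_example = F.cross_entropy(fstudent(x), y, reduction="none")
                w = commentary(x, i / inner_steps)         # t(x, i; phi)
                diffopt.step((w * per_example).mean())     # weighted training loss
            xv, yv = next(iter(val_loader))
            val_loss = F.cross_entropy(fstudent(xv), yv)   # unweighted validation loss
            val_loss.backward()   # gradients flow through all inner steps to phi
        meta_opt.step()
    return commentary
```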
In the non-overlapping dataset, the rotation angle for each example from class 1 and class 0 is drawn from non-overlapping distributions Uniform[15, 45] and Uniform[−45,−15] respectively. In the overlapping dataset, the rotation angles are drawn from overlapping distributions Uniform[−5, 30] and Uniform[−30, 5] respectively (Figure 1). We use two block CNNs as both the commentary neural network and student network. The commentary network takes as input the image and the iteration of student training, and outputs a weight for each example in the batch. We learn commentary parameters by backpropagating through student training (Algorithm 1, §2.1), and use 500 gradient steps for inner optimisation (i.e., N = 500). Implementation is with the higher library (Grefenstette et al., 2019). Further details in Appendix B.1.\nResults: Figure 1 visualises the two datasets and plots the learned example weights as a function of rotation at iteration 500 of the student training. When classes do not overlap (left), the example weights are highest for those examples near to the decision boundary (small rotation magnitude). When the classes do overlap (right), the more representative examples further from the boundary are upweighted and ambiguous examples in the overlap region are downweighted: a sensible result. We perform further analysis of the learned example weighting curriculum in Appendix B.1, demonstrating that the learned curricula in both cases are meaningful. Overall, these results demonstrate that the learned commentaries capture interesting and intuitive structure." }, { "heading": "3.2 COMMENTARIES FOR CIFAR10 AND CIFAR100", "text": "We now learn example weighting curriculum commentaries on CIFAR10 and CIFAR100. The commentary network is again the two block CNN architecture, and when training the commentary network, the student network is also a two block CNN. We use Algorithm 1, §2.1 once more, with 1500 gradient steps in the inner optimisation: N = 1500. For evaluation, the trained commentary network is used to produce example weights for (i) 2 block CNN (ii) ResNet-18 (iii) ResNet-34 student networks, all trained for 25000 steps, considering 3 random initialisations, to assess generalisability. Further details in Appendix B.2.\nExample weighting commentaries improve learning speed. Figure 2 shows accuracy curves on the test sets of CIFAR10/100 for the two block CNN student with example weight curricula (orange line), a baseline (green line, no example weights) and an ablation (blue line, example weights without curriculum structure, meaning the commentary network only takes the image x and not the training iteration i as an argument when outputting weights). On both datasets, the networks trained using the curriculum commentaries obtain better performance than the baseline and ablation over approximately 25000 steps of training (10 epochs), and have superior learning curves.\nExample weighting commentaries generalise to longer training times and across architectures. At training time, the commentary network was learned to produce example weights for the two block CNN student for 1500 inner update steps (N = 1500, Algorithm 1). Figure 2 shows that the learned example weights lead to student network learning speedups well-beyond this point, suggesting generalisability of the commentaries to longer training times. 
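Reusing a trained commentary to teach a new student, as in the evaluations above, amounts to an ordinary training loop with a frozen weighting network. A minimal sketch, with illustrative names and hyperparameters:

```python
import itertools
import torch
import torch.nn.functional as F

def train_student_with_commentary(student, commentary, train_loader,
                                  steps=25000, lr=1e-4):
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    commentary.eval()                       # frozen, reused as-is
    data = itertools.cycle(train_loader)
    for i in range(steps):
        x, y = next(data)
        with torch.no_grad():               # no gradients into the commentary
            w = commentary(x, i / steps)    # per-example weights t(x, i; phi)
        loss = (w * F.cross_entropy(student(x), y, reduction="none")).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```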
In addition, when the same commentary network is used to produce example weights for ResNet-18/34 students (Figure 3), we also observe a learning speedup, suggesting that the commentaries can generalise across architectures." }, { "heading": "3.3 COMMENTARIES FOR FEW-SHOT LEARNING", "text": "Finally, we use example weight commentaries for few-shot learning. We start with the MAML algorithm (Finn et al., 2017), which learns a student parameter initialisation that allows fast adaptation on a new learning task using a support set of examples for that task. To incorporate example weighting in MAML, at training time, we jointly learn the MAML initialisation and a commentary network to provide per-example weights for each example in the support set, as a function of the inner loop step number. At test time, we follow the standard MAML procedure, and also incorporate the example weights when computing the support set loss and the resulting updates. Both networks use the 4-conv backbone structure from the original MAML paper. Details are in Appendix B.3.\nWe evaluate a standard MAML baseline and our commentary variant on standard few-shot learning benchmarks: (i) training/testing on MiniImageNet (MIN); and (ii) training on MIN and testing on CUB-200-2011 (CUB). Results are shown in Table 1, specifying the experimental setting (N -way\nK-shot), and the dataset used for training/testing. In all experiments, incorporating example weighting can improve on the MAML baseline, suggesting the utility of these commentaries in few-shot learning. Further experiments on other benchmark datasets (CIFAR-FS/SVHN) showing similar trends are in the appendix (Table B.1)." }, { "heading": "4 COMMENTARIES FOR DATA AUGMENTATION", "text": "We now investigate label-dependent commentaries that parameterise data augmentation policies. We consider an augmentation scheme where pairs of images are blended together with a proportion dependent on the classes of the two examples. At each training iteration, we:\n• Sample two examples and their labels, (x1, y1) and (x2, y2) from the training set. • Obtain the blending proportion λ = t(y1, y2;φ), and form a new image xm = λx1 + (1− λ)x2,\nand class ym equivalently. • Use this blended example-label pair (xm, ym) when computing the training loss. To compute the validation loss, use only unblended examples from the validation set.\nFor classification problems with N classes, the teacher t(y1, y2;φ) outputs an N ×N matrix. This augmentation scheme is inspired by mixup (Zhang et al., 2018). However, we blend with a deterministic proportion, depending on pairs of labels, rather than drawing a blending factor from a Beta distribution. In doing so, we more finely control the augmentation policy.\nAugmentation Commentaries on MNIST: We learn an augmentation commentary model t on MNIST by direct backpropagating through the training of a 2-block CNN student network (Algorithm 1, §2.1). In the learned augmentation, the error rate on class i is correlated (Pearson correlation= −0.54) with the degree of blending of other digits into an example of class i: lower error on class i implies that other digits are blended more heavily into it. On MNIST, this means that the class that has on average the lowest error rate (class 1) has other digits blended into it significantly (Figure 4 left); classes that have on average higher error rate (e.g., class 7, class 8) rarely have other digits blended in (Figure 4 right). Further details in Appendix C.1." 
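A rough sketch of the blending augmentation described above: the commentary is an N×N table of blending proportions indexed by the label pair, and each batch is blended with a shuffled copy of itself. The soft-label loss shown in the comment is one reasonable way to use the blended label ym and is an assumption, not a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def blend_batch(x, y, blend_table, num_classes):
    """Label-dependent blending: lambda = t(y1, y2; phi), looked up in an NxN table."""
    perm = torch.randperm(x.size(0))
    x2, y2 = x[perm], y[perm]
    lam = blend_table[y, y2].view(-1, 1, 1, 1)           # one proportion per pair
    xm = lam * x + (1.0 - lam) * x2                      # blended image
    y1h = F.one_hot(y, num_classes).float()
    y2h = F.one_hot(y2, num_classes).float()
    ym = lam.view(-1, 1) * y1h + (1.0 - lam.view(-1, 1)) * y2h   # blended label
    return xm, ym

# One possible training loss on the blended pair (soft labels):
# loss = -(ym * F.log_softmax(student(xm), dim=1)).sum(dim=1).mean()
```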
}, { "heading": "4.1 AUGMENTATION COMMENTARIES FOR CIFAR10 AND CIFAR100", "text": "We next learn and evaluate augmentation commentaries on CIFAR10 and CIFAR100. We evaluate the effect of these augmentation commentaries on improving generalisation performance for a standard student network architecture. Since this is a memory intensive process, we use the implicit differentiation method (Algorithm 2, § 2.1) to ensure computational tractability. We learn the commentaries jointly with a ResNet-18 student network. Once the commentary is learned, we fix it and\nVisualizing the learned blending proportions on two CIFAR10 classes (left), we see that both classes are most blended with others that are visually similar (truck-automobile, and cat-dog), which may help the network differentiate between them. On CIFAR100 (right), considering the top 5 blended classes in two cases, we observe again the presence of visually similar classes that may be confused (seal-otter, and squirrel-mouse), but also visually unrelated classes. These may provide extra learning signal with each example.\ntrain three new students with different random initialisations to evaluate the commentary’s efficacy, assessing the commentary’s generalisability.\nResults: Table 2 shows test accuracy for different augmentation policies on CIFAR10 and 100. We compare the learned commentary to using only standard data augmentations for CIFAR10/100 (No commentary) and a random initialisation for the commentary matrix (Random commentary). We also compare to mixup (Zhang et al., 2018). We observe that the learned commentary is competitive with mixup and improves on other baselines across both tasks. In the appendix, we compare to an ablation that destroys the structure of the learned commentary grid by shuffling it, and find that the unshuffled commentary does better (Table C.1).\nFurther Analysis: In Figure 5, we visualise: (i) for two CIFAR10 classes, the blending fractions (defined as (1 − λ)) associated with the other classes (left); and (ii) for two CIFAR100 classes, the blending fractions associated with the five most highly blended classes. For CIFAR10, we see that other classes that are visually similar and therefore sources of misclassification are blended more significantly. On CIFAR100, we see that within the top 5 blended classes, there are classes that are visually similar, but there are also blended classes that have far less visual similarity, which could be blended in to obtain more learning signal per-example. Further analysis in Appendix C.2." }, { "heading": "5 ATTENTION MASK COMMENTARIES FOR INSIGHTS AND ROBUSTNESS", "text": "We study whether commentaries can learn to identify key (robust) features in the data, formalising this problem as one of learning commentaries which act as attention masks. We learn commentary attention masks on a variety of image datasets: an MNIST variant, CIFAR10/100, medical chest X-rays, and Caltech-UCSD Birds (CUB)-200-2011, where we find that the learned commentaries identify salient image regions. Quantitatively, we demonstrate the effectiveness of attention mask commentaries over baselines in ensuring robustness to spurious correlations.\nFormally, we learn a commentary network t(x;φ)→ [i, j] to output the centre of a 2D Gaussian that is then computed and used (with predefined standard deviation depending on the input image size, see Appendix D) as a pixelwise mask for the input image before feeding it through a student network. We denote the mask based on t(x;φ) as m(x, t). 
Our goal is to learn masks that highlight the most\nimportant regions of the image for training and testing, so the masks are used both at train time and test time. We therefore have that L̃ = L = x-ent (n (x m (x, t) ; θ) , y). The commentary network is a U-Net (Ronneberger et al., 2015) with an output layer from KeypointNet (Suwajanakorn et al., 2018). Commentary parameters are learned using Algorithm 2, §2.1, for a ResNet-18 student." }, { "heading": "5.1 QUALITATIVE AND QUANTITATIVE ANALYSIS ON IMAGE DATASETS", "text": "Masks for Coloured MNIST: We learn masks on a dataset where each image has two MNIST digits, coloured red and blue, with the red digit determining the label of the image. As seen in Figure 6 left, the commentary selectively attends to the red digit and not the blue digit.\nMasks for Chest X-rays: Using a dataset of chest X rays (Irvin et al., 2019), we train a student network to detect cardiomegaly, a condition where an individual’s heart is enlarged. Learned masks are centered on the chest cavity, around the location of the heart (Figure 6), which is a clinically relevant region. These masks could be used in medical imaging problems to prevent models relying on spurious background features (Badgeley et al., 2019; Winkler et al., 2019).\nMasks for CIFAR10/100: The learned masks on CIFAR10/100 (Figure 6) attend to important image regions that define the class, such as the faces of animals/the baby, wheels/body of vehicles, and the humps of the camel. In the appendix (Table D.1) we show quantitatively that the learned masks are superior to other masking baselines, and also provide further qualitative examples." }, { "heading": "5.2 MASK COMMENTARIES FOR ROBUSTNESS TO BACKGROUND CORRELATIONS", "text": "Using the task introduced in Koh et al. (2020), we now demonstrate that mask commentaries can provide robustness to spurious background correlations. We take the CUB-200-2011 dataset (Welinder et al., 2010) and the associated fine-grained classification problem to classify the image as one of 200 bird species. Using the provided segmentation masks with the dataset, the background of each image is replaced with a background from the Places dataset (Zhou et al., 2017). For the training and validation sets, there is a specific mapping between the CUB class and the Places class for the background, but for the testing set, this mapping is permuted, so that the background features are now spuriously correlated with the actual image class.\nWe first learn an attention mask commentary network, and for evaluation, use this learned network when training a new ResNet-18 student (pretrained on ImageNet, as in Koh et al. (2020)). We assess student performance on the validation and test sets (with the same and different background mappings as the training set, respectively), considering three random seeds.\nResults: Learned masks are shown in Figure 7; the masks are mostly focused on the bird in the image. In a quantitative evaluation (Table 3), we see that using the masks helps model performance significantly on the test set as compared to a baseline that does not use masks. This suggests that the masks are indeed helping networks to rely on more robust features in the images. The validation accuracy drop is expected since the model input is a limited region of the image." }, { "heading": "6 RELATED WORK", "text": "Learning to Teach: Neural network teaching and curriculum learning have been proposed as early as Bengio et al. (2009); Zhu (2015). 
Recent work examining teaching includes approaches to: select, weight, and manipulate training examples (Fan et al., 2018; Liu et al., 2017; Fan et al., 2020; Jiang et al., 2018; Ren et al., 2018; Shu et al., 2019; Hu et al., 2019); adjust loss functions (Wu et al., 2018); and learn auxiliary tasks/labels for use in training (Liu et al., 2019; Navon et al., 2020; Pham et al., 2020). In contrast to most of these methods, our work on commentaries aims to unify several related approaches and serve as a general learning framework for teaching. This enables applications in both standard settings such as example weighting, and also novel use-cases (beyond model performance) such as attention masks for interpretability and robustness. These diverse applications also provide insights into the training process of neural networks. Furthermore, unlike many earlier works that jointly learn teacher and student models, we also consider a different use case for learned commentaries: instead of being used in the training of a single model alone, we demonstrate that commentaries can be stored with a dataset and reused when training new models.\nLearning with Hypergradients: Our algorithm for learning commentaries uses hypergradients — derivatives of a model’s validation loss with respect to training hyperparameters. Prior work has proposed different approaches to compute hypergradients, including memory-efficient exact computation in Maclaurin et al. (2015), and approximate computation in Lorraine et al. (2020); Lorraine and Duvenaud (2018); MacKay et al. (2019). We build on Lorraine et al. (2020), which utilises the implicit function theorem and approximate Hessian matrix inversion for efficiency, to scale commentary learning to larger models." }, { "heading": "7 CONCLUSION", "text": "In this paper, we considered a general framing for teaching neural networks using commentaries, defined as meta-information learned from the dataset/task. We described two gradient-based methods to learn commentaries and three methods of applying commentaries to assist learning: example weight curricula, data augmentation, and attention masks. Empirically, we show that the commentaries can provide insights and result in improved learning speed and/or performance on a variety of datasets. In addition, we demonstrate that once learned, these commentaries can be reused to improve the training of new models. Teaching with commentaries is a proof-of-concept idea, and we hope that this work will motivate larger-scale applications of commentaries and inspire ways of automatically re-using training insights across tasks and datasets." }, { "heading": "B EXAMPLE WEIGHTING CURRICULA", "text": "We provide further details about the experiments using commentaries to define example weighting curricula.\nB.1 ROTATED MNIST\nDataset: Both the overlapping and non-overlapping datasets are generated to have 10000 training examples, 5000 validation examples, and 5000 test examples.\nNetwork architectures: CNNs with two blocks of convolution, batch normalisation, ReLU, and max pooling are used for both the student network and commentary network. The commentary network additionally takes in the iteration of student training, which is normalised and passed in as an extra channel in the input image (i.e., the MNIST image has two channels, instead of a single channel). 
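This conditioning on the training iteration could be implemented as in the short sketch below; normalising by the total number of steps is an assumption, and the function name is illustrative.

```python
import torch

def add_iteration_channel(x, iteration, total_iterations):
    """Append the normalised iteration i/T as a constant extra image channel.

    x: images of shape (batch, channels, height, width).
    """
    i_norm = iteration / float(total_iterations)
    it_channel = torch.full_like(x[:, :1], i_norm)    # shape (batch, 1, H, W)
    return torch.cat([x, it_channel], dim=1)
```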
The commentary network uses sigmoid activation to produce an example weight for every data point in the batch.\nTraining details: We train both networks using the Adam optimiser, with a learning rate of 1e-4 for the student, and 1e-3 for the commentary network. The student network is trained for 500 inner optimisation steps, with a batch size of 10. We train for 20 commentary network iterations. Training is implemented using the higher library (Grefenstette et al., 2019).\nResults: Figure B.1 visualises for both datasets: the relation between the learned example weight and the rotation of the digit at the end of student training (iteration 500) following binning. Considering these final example weights, for the non-overlapping case, examples with higher weight are those with smaller rotation and thus closer to the decision boundary (i.e., resembling support vectors) – these provide the most information to separate the classes. Lower weighted examples are those with greater rotation. When classes overlap, the weights are maximised for examples that better characterize each class, and are not in the overlap region – these are most informative of the class label.\nNow considering the learned curriculum (Figure B.2), for the non-overlapping dataset, we observe that the rank correlation between the example weight and the rotation magnitude decreases over the course of student training iteration. This is intuitively sensible, since the example weights first prioritise the easy examples (large rotation), and then learn to focus on the harder ones (small rotation) that provide most information to separate the classes. By contrast, in the overlapping case, the\ncurriculum has examples that are further from the boundary (larger rotation magnitude) consistently weighted most highly, which is sensible as these are the most informative of the class label.\nOverall, the results on this synthetic case demonstrate that the learned commentaries capture interesting and intuitive structure.\nB.2 CIFAR10/100\nNetwork architectures: We use the two block CNN from the MNIST experiments for the commentary network, and as the student network when training the commentary network. We employ the same strategy for encoding the iteration of training. At testing time, we evaluate this commentary network by teaching three different student network architectures: two block CNN, ResNet-18, and ResNet-34.\nTraining details: We train both networks using the Adam optimiser, with a learning rate of 1e-4 for the student, and 1e-3 for the commentary network. During commentary network learning, the student network is trained for 1500 inner optimisation steps, with a batch size of 8, and is reset to random intialisation after each commentary network gradient step. We train for 100 commentary network iterations. Training is implemented using the higher library (Grefenstette et al., 2019). At testing time, we use a batch size of 64; the small batch size at training time is due to GPU memory constraints.\nB.3 FEW-SHOT LEARNING (FSL)\nSetup: The MAML algorithm finds a network parameter initialisation that can then be used for adaptation to new tasks using a support set of examples. The commentary network here is trained to provide example weights for each example in the support set, at each iteration of inner loop adaptation (i.e., the example weights are not used at meta-testing time). 
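A sketch of the commentary-weighted inner-loop adaptation, written with the higher library as in the rest of the paper's experiments; the commentary takes the support image, its label, and the inner-loop step, and all names here are illustrative assumptions.

```python
import torch.nn.functional as F
import higher

def adapt_on_support(fmodel, diffopt, commentary, support_x, support_y, inner_steps=5):
    """One task's inner-loop adaptation with commentary-weighted support losses."""
    for step in range(inner_steps):
        per_example = F.cross_entropy(fmodel(support_x), support_y, reduction="none")
        w = commentary(support_x, support_y, step)   # weight for each support example
        diffopt.step((w * per_example).mean())

# Inside the outer (meta) loop, roughly:
# with higher.innerloop_ctx(student, inner_opt) as (fmodel, diffopt):
#     adapt_on_support(fmodel, diffopt, commentary, sx, sy)
#     meta_loss = F.cross_entropy(fmodel(query_x), query_y)
#     meta_loss.backward()   # updates both the MAML initialisation and the commentary
```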
We jointly learn the MAML initialisation and the commentary network parameters; intuitively, this represents learning an initialisation that is able to adapt to new tasks given examples and associated weights.\nDataset details: We use standard datasets used to evaluate FSL methods, and the associated splits between training and testing tasks from prior work (Lee et al., 2019; Long, 2018). We evaluate on two out-of-distribution settings, namely: training the few-shot learner on CIFAR-FS and testing on SVHN; and training the few-shot learner on MiniImageNet and testing on CUB-200.\nNetwork architectures: Both the commentary and the student networks use a 4-block CNN architecture commonly seen in prior work (Finn et al., 2017). The student network takes a given image as input. The commentary network takes as input the support set image and the class label. The one-hot labels are converted into a 64 dimensional vector using a fully connected layer, concatenated with input image representations, then passed through two more fully connected layers to produce the output. This output is passed through a sigmoid to ensure example weights lie in the range [0, 1]. These weights are normalised to ensure a mean weight of 1 across the entire support set, which helped stability.\nTraining details: We use Adam with a learning rate of 1e-3 to learn both the commentary network parameters and the student network initialisation point for MAML. A meta-batch size of 4 is used for meta training. We use SGD with a learning rate of 1e-2 for the inner loop adaptation. At metatraining time, we use 5 inner loop updates. At meta-test time, we use 15 inner loop updates (similar to some other methods, to allow for more finetuning). For evaluation, we create 1000 different test time tasks (randomly generated) and we compute mean/95% CI accuracy on this set of tasks. We use the higher library (Grefenstette et al., 2019).\nResults: We evaluate a standard MAML baseline and our commentary variant on standard few-shot learning benchmarks: (i) training/testing on MiniImageNet (MIN) and CIFAR-FS (in-distribution testing); and (ii) training on MIN and CIFAR-FS and testing on CUB-200-2011 (CUB), and SVHN (out-of-distribution testing). Results are shown in Table B.1. Each row specifies the experimental setting (N -way K-shot), the dataset used for training, and the dataset used for testing. In all experiments, incorporating example weighting can improve on the MAML baseline, suggesting the utility of these commentaries in few-shot learning." }, { "heading": "C DATA AUGMENTATION", "text": "We provide further details about the experiments using commentaries to define data augmentation policies.\nC.1 MNIST\nNetwork and training details: The 2 block CNN is used as the student network. Denoting each entry of the commentary grid as φi,j , we initialised each entry to 0. The blending proportion is formed as: λi,j = 1−0.5× sigmoid(φi,j). This is to ensure that the blending proportion is between 0.5 and 1; this implies that blended image contains more of the first image (class i) than the second (class j). Without this restriction, certain blending combinations could ‘flip’, making the results harder to interpret. The inner optimisation uses SGD with a learning rate of 1e-3, and had 500 gradient steps. We used 50 outer optimisation steps to learn the commentary parameters, using Adam with a learning rate of 1e-1. 
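The constrained grid parameterisation described above keeps every blending proportion strictly between 0.5 and 1 (at the zero initialisation each λ_{i,j} = 0.75). A minimal sketch, with an illustrative module name:

```python
import torch
import torch.nn as nn

class BlendCommentary(nn.Module):
    """Augmentation commentary: lambda_{ij} = 1 - 0.5 * sigmoid(phi_{ij}) in (0.5, 1)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.phi = nn.Parameter(torch.zeros(num_classes, num_classes))  # init at 0

    def forward(self, y1: torch.Tensor, y2: torch.Tensor) -> torch.Tensor:
        return 1.0 - 0.5 * torch.sigmoid(self.phi[y1, y2])
```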
The commentary parameters were learned with the higher library (Grefenstette et al., 2019).\nFurther Detail on MNIST Augmentation Commentaries: We learn an augmentation commentary model t on MNIST, represented as a 10×10 matrix. This commentary is learned by backpropagating through the inner optimisation, using a 2-block CNN student network. For each outer optimisation update, we use 500 steps of inner loop training with the augmented dataset, and then compute the validation loss on the unblended data to obtain commentary parameter gradients.\nWe find a trend in the learned augmentation relating the error rates and blending proportions. Consider a single image xi, label i, and other images xj , label ∀j 6= i. Averaging the blending proportions (computed as 0.5 × sigmoid(φi,j)) over j, the error rate on class i is correlated (Pearson correlation= −0.54) with the degree of blending of other digits into an example of class i; that is, lower error rate on class i implies that other digits are blended more heavily into it. On MNIST, this means that the class that has on average the lowest error rate (class 1) has other digits blended into it most significantly (seen in Figure 4 left). On the other hand, classes that have on average higher error rate (e.g., class 7, class 8) rarely have other digits blended in (Figure 4 right).\nC.2 CIFAR 10/100\nNetwork and training details: The student network is a ResNet18. We use the method from Lorraine et al. (2020) to learn the commentary parameters. The commentary parameters are initialised in the same way as for the MNIST experiments. These parameters are learned jointly with a student, and we alternate updates to the commentary parameters and the student parameters. We use 1 Neumann step to approximate the inverse Hessian when using the IFT. For commentary learning, we use Adam with a LR of 1e-3 as the inner optimiser, and Adam with a LR of 1e-2 as the outer optimiser.\nFor evaluation, we train three randomly initialised students using the fixed commentary parameters. This training uses SGD with common settings for CIFAR (starting LR 1e-1, weight decay of 5e-4,\ndecaying LR after 30, 60, and 80 epochs by a factor of 10). We use standard CIFAR augmentations in addition to the learned augmentations at this evaluation phase.\nBaselines: We compare to using no commentary (just the standard CIFAR augmentation policy, random crops and flips), a random commentary (where an augmentation grid is constructed by uniformly sampling blending proportions in the range [0.5, 1]), mixup (Zhang et al., 2018) with blending proportion drawn from Beta(1,1), and a shuffled version of our method where the commentary grid is shuffled at the start of evaluation (destroying the structure, but preserving the scale of augmentation).\nResults: Table C.1 shows model accuracy for different augmentation policies on CIFAR10 and 100. We compare the learned commentary to using only standard data augmentations for CIFAR10/100 (No commentary), shuffling the learned commentary grid, using a random initialisation for the commentary tensor, and mixup (Zhang et al., 2018). We observe that the learned commentary is competitive with mixup and improves on other baselines. The fact that the learned commentary does better than the shuffled grid implies that the structure of the grid is also important, not just the scale of augmentations learned.\nVisualising the policy: For CIFAR10, we visualize the full augmentation policy in the form of a blending grid, shown in Figure C.1. 
Each entry represents how much those two classes are blended, with scale on left. This corresponds to 0.5 × sigmoid(φi,j), with φi,j representing an entry in the commentary grid." }, { "heading": "D ATTENTION MASKS", "text": "Datasets and Mask Information: We use a number of datasets to evaluate the learned masks, including: Coloured MNIST (a synthetic MNIST variant), CheXpert (a dataset of chest X-ray images from (Irvin et al., 2019)), CIFAR10/100, and CUB-200-2011. More details:\n• The Coloured MNIST dataset is formed by randomly sampling two MNIST digits from the dataset, and choosing one to be red and one to be blue. The red digit determines the image label. The two digits are randomly placed on two different quadrants of a 56×56 grid. The standard deviation of the mask is set to be 15 pixels.\n• The CheXpert dataset has large X-ray radiograph images. Each image is resized to be 320× 200. The mask standard deviation is 50 pixels.\n• CIFAR10/100 masks are set to be 15 pixels standard deviation. • CUB-200-2011 images are resized to be 224× 224, and the mask standard deviation is 50\npixels.\nNetwork architectures: The student network was a ResNet18, and the commentary network was a U-Net (Ronneberger et al., 2015) with an output layer from KeypointNet (Suwajanakorn et al., 2018). This takes a probability mass function defined spatially, and the (x, y) centre of the mask is computed as the mean in both spatial dimensions. Producing the mean in this manner significantly helped stability rather than regressing a real value.\nTraining details: We use the method from Lorraine et al. (2020) to learn the commentary parameters. These parameters are learned jointly with a student, and we alternate updates to the commentary parameters and the student parameters. We use 1 Neumann step to approximate the inverse Hessian when using the IFT. When the commentary network is learned, we use Adam with LR 1e-4 for both inner and outer optimisations. We found balancing this learning rate to be important in the resulting stability of optimisation and quality of learned masks.\nWhen evaluating the learned masks, we trained three new ResNet-18 students with different random seeds, fixing the commentary network. For CIFAR10/100, for evaluation, we use SGD with common settings for CIFAR (starting LR 1e-1, weight decay of 5e-4, decaying LR after 30, 60, and 80 epochs by a factor of 10). We use standard CIFAR augmentations for this also.\nVisualizing Masks: Figure D.1 shows masks from the main text and further additional examples.\nBaselines for CIFAR experiments: For the random mask baseline, for each example in a batch, we select a centre point randomised over the whole image, then form the mask by considering a gaussian centered at that point with standard deviation 15 (same size as masks from commentary network). This resembles very aggressive random cropping. For the permuted learned mask, we use the learned commentary network to predict masks for all images. Then, we permute the maskimage pairs, so that they no longer match up. We then train with these permuted pairs and assess performance. 
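For completeness, the one-term Neumann approximation of the implicit-function-theorem hypergradient (Lorraine et al., 2020) used in these experiments can be sketched with plain autograd as below. This is a schematic re-implementation, not the authors' code; the function name, the handling of the overall scale, and the choice of `alpha` (playing the role of the inner learning rate) are assumptions.

```python
import torch

def ift_hypergradient(train_loss, val_loss, student_params, commentary_params,
                      alpha=1e-4, neumann_terms=1):
    """Approximate dL_V/dphi = -(dL_V/dtheta) H^{-1} (d^2 L_T / dtheta dphi^T)."""
    v = torch.autograd.grad(val_loss, student_params, retain_graph=True)
    p = [vi.clone() for vi in v]
    dLT_dtheta = torch.autograd.grad(train_loss, student_params, create_graph=True)
    # Truncated Neumann series for the inverse Hessian (a single term in the paper).
    for _ in range(neumann_terms):
        hvp = torch.autograd.grad(dLT_dtheta, student_params,
                                  grad_outputs=v, retain_graph=True)
        v = [vi - alpha * hi for vi, hi in zip(v, hvp)]
        p = [pi + vi for pi, vi in zip(p, v)]
    p = [alpha * pi for pi in p]   # overall scale; in practice folded into the outer LR
    # Mixed second-derivative term, contracted with p via a vector-Jacobian product.
    mixed = torch.autograd.grad(dLT_dtheta, commentary_params,
                                grad_outputs=p, allow_unused=True)
    return [-m if m is not None else torch.zeros_like(c)
            for m, c in zip(mixed, commentary_params)]
```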
Our goal is to understand what happens when we have random masks with a similar overall spatial distribution to the real masks.\nCIFAR Masking Quantitative Analysis: We compare masks from the learned commentary network to two baselines: randomly chosen mask regions for each image (of the same scale as the learned masks, but with the centre point randomised over the input image), and permuted masks\n(where we shuffle the learned mask across all the data points). Table D.1 shows the results. Especially on CIFAR100, the learned mask improves noticeably on the other masking methods in both test accuracy and loss. This suggests that overall, the masks are highlighting more informative image regions. We do not expect using the masks on standard problems to result in improved held-out performance, because the backgrounds of images may contain relevant information to the classification decision.\nFurther details on robustness study: The dataset was generated using the open source code from Koh et al. (2020). The student network for this study was pretrained on ImageNet, as in Koh et al. (2020). To train student models at the evaluation stage, we used SGD with a learning rate of 0.01, decayed by a factor of 10 after 50 and 90 epochs. We used nesterov momentum (0.9) and weight decay of 5e-5." } ]
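For reference, the Gaussian attention masks of Appendix D, centred at the commentary network's predicted location with a dataset-specific standard deviation, could be constructed as in the sketch below. The pixelwise application follows the description in Section 5; function and variable names are illustrative.

```python
import torch

def gaussian_mask(centres, height, width, sigma):
    """Build per-example 2D Gaussian masks from predicted centres.

    centres: tensor of shape (batch, 2) holding (row, col) in pixel coordinates,
             e.g. the spatial mean of the commentary network's output distribution.
    """
    ys = torch.arange(height, dtype=torch.float32).view(1, height, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, 1, width)
    cy = centres[:, 0].view(-1, 1, 1)
    cx = centres[:, 1].view(-1, 1, 1)
    mask = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return mask.unsqueeze(1)   # shape (batch, 1, H, W), broadcast over channels

# Pixelwise application before the student network, e.g. for CIFAR (sigma = 15):
# logits = student(images * gaussian_mask(commentary(images), 32, 32, sigma=15.0))
```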
2021
null
SP:19e2493d7bdb4be73c3b834affdb925201243aef
[ "It is well-known that neural networks (NN) perform very well in various areas and in particular if one looks at computer vision convolutional neural networks perform very well. Although convolutional neural networks (CNN) are limited in their architecture (since they only allow nearest-neighbour connections) compared to fully-connected NNs (FCNN), their superiority in performance is unclear. In this paper they answer the following fundamental question: can one formally show that CNNs are better than FCNNs for a specific learning task? In this direction they answer in the affirmative.  In particular, more than just giving an example, they show that an interesting property called locality, instead of other parameters like parameter and efficiency weight sharing is the reason for its superior performance. " ]
Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks. Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent, but at the same time is hard to learn using a polynomial-size fully-connected network.
[ { "affiliations": [], "name": "FULLY-CONNECTED NETWORKS" }, { "affiliations": [], "name": "Eran Malach" }, { "affiliations": [], "name": "Shai Shalev-Shwartz" } ]
[ { "authors": [ "Emmanuel Abbe", "Colin Sandon" ], "title": "Provable limitations of deep learning", "venue": "arXiv preprint arXiv:1812.06369,", "year": 2018 }, { "authors": [ "Peter L Bartlett", "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "Avrim Blum", "Merrick Furst", "Jeffrey Jackson", "Michael Kearns", "Yishay Mansour", "Steven Rudich" ], "title": "Weakly learning dnf and characterizing statistical query learning using fourier analysis", "venue": "In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing,", "year": 1994 }, { "authors": [ "Avrim Blum", "Adam Kalai", "Hal Wasserman" ], "title": "Noise-tolerant learning, the parity problem, and the statistical query model", "venue": "Journal of the ACM (JACM),", "year": 2003 }, { "authors": [ "Guy Bresler", "Dheeraj Nagaraj" ], "title": "A corrective view of neural networks: Representation, memorization and learning", "venue": "arXiv preprint arXiv:2002.00274,", "year": 2020 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "arXiv preprint arXiv:1312.6203,", "year": 2013 }, { "authors": [ "Alon Brutzkus", "Amir Globerson" ], "title": "Globally optimal gradient descent for a convnet with gaussian inputs", "venue": "arXiv preprint arXiv:1702.07966,", "year": 2017 }, { "authors": [ "Yu-hsin Chen", "Ignacio Lopez-Moreno", "Tara N Sainath", "Mirkó Visontai", "Raziel Alvarez", "Carolina Parada" ], "title": "Locally-connected and convolutional neural networks for small footprint speaker recognition", "venue": "In Sixteenth Annual Conference of the International Speech Communication Association,", "year": 2015 }, { "authors": [ "Nadav Cohen", "Amnon Shashua" ], "title": "Inductive bias of deep convolutional networks through pooling geometry", "venue": "arXiv preprint arXiv:1605.06743,", "year": 2016 }, { "authors": [ "Nadav Cohen", "Or Sharir", "Yoav Levine", "Ronen Tamari", "David Yakira", "Amnon Shashua" ], "title": "Analysis and design of convolutional networks via hierarchical tensor decompositions", "venue": "arXiv preprint arXiv:1705.02302,", "year": 2017 }, { "authors": [ "Amit Daniely" ], "title": "Sgd learns the conjugate kernel class of the network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Amit Daniely", "Eran Malach" ], "title": "Learning parities with neural networks", "venue": "arXiv preprint arXiv:2002.07400,", "year": 2020 }, { "authors": [ "S Ben Driss", "Mahmoud Soua", "Rostom Kachouri", "Mohamed Akil" ], "title": "A comparison study between mlp and convolutional neural network models for character recognition", "venue": "In Real-Time Image and Video Processing 2017,", "year": 2017 }, { "authors": [ "Simon Du", "Jason Lee", "Yuandong Tian", "Aarti Singh", "Barnabas Poczos" ], "title": "Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "arXiv preprint arXiv:1808.05377,", "year": 2018 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Malcolm Reynolds", "Frederic Besse", "David Pfau", "Max Jaderberg", "Marc 
Lanctot", "Daan Wierstra" ], "title": "Convolution by evolution: Differentiable pattern producing networks", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference 2016,", "year": 2016 }, { "authors": [ "Alexander Golovnev", "Mika Göös", "Daniel Reichman", "Igor Shinkar" ], "title": "String matching: Communication, circuits, and learning", "venue": "arXiv preprint arXiv:1709.02034,", "year": 2017 }, { "authors": [ "Eric Kauderer-Abrams" ], "title": "Quantifying translation-invariance in convolutional neural networks", "venue": "arXiv preprint arXiv:1801.01450,", "year": 2017 }, { "authors": [ "Osman Semih Kayhan", "Jan C van Gemert" ], "title": "On translation invariance in cnns: Convolutional layers can exploit absolute spatial location", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Michael Kearns" ], "title": "Efficient noise-tolerant learning from statistical queries", "venue": "Journal of the ACM (JACM),", "year": 1998 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhouhan Lin", "Roland Memisevic", "Kishore Konda" ], "title": "How far can we go without convolution: Improving fully-connected networks", "venue": "arXiv preprint arXiv:1511.02580,", "year": 2015 }, { "authors": [ "Wen Liu", "Hong Chen", "Zhongliang Deng", "Xinyu Zheng", "Xiao Fu", "Qianqian Cheng" ], "title": "Lc-dnn: Local connection based deep neural network for indoor localization with csi", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "A provably correct algorithm for deep learning that actually works", "venue": "arXiv preprint arXiv:1803.09522,", "year": 2018 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "When hardness of approximation meets hardness of learning", "venue": "arXiv preprint arXiv:2008.08059,", "year": 2020 }, { "authors": [ "Elchanan Mossel" ], "title": "Deep learning and hierarchal generative models", "venue": "arXiv preprint arXiv:1612.09057,", "year": 2016 }, { "authors": [ "Elchanan Mossel", "Ryan O’Donnell", "Rocco P Servedio" ], "title": "Learning juntas", "venue": "In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing,", "year": 2003 }, { "authors": [ "Behnam Neyshabur" ], "title": "Towards learning convolutions from scratch", "venue": "arXiv preprint arXiv:2007.13657,", "year": 2020 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jaehoon Lee", "Yasaman Bahri", "Greg Yang", "Jiri Hron", "Daniel A Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Bayesian deep convolutional networks with many channels are gaussian processes", "venue": "arXiv preprint arXiv:1810.05148,", "year": 2018 }, { "authors": [ "Tomaso Poggio", "Fabio Anselmi", "Lorenzo Rosasco" ], "title": "I-theory on depth vs width: hierarchical function composition", "venue": 
"Technical report, Center for Brains, Minds and Machines (CBMM),", "year": 2015 }, { "authors": [ "Tomaso Poggio", "Hrushikesh Mhaskar", "Lorenzo Rosasco", "Brando Miranda", "Qianli Liao" ], "title": "Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review", "venue": "International Journal of Automation and Computing,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Shaked Shammah" ], "title": "Failures of gradient-based deep learning", "venue": "arXiv preprint arXiv:1703.07950,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Shaked Shammah" ], "title": "Weight sharing is crucial to succesful optimization", "venue": "arXiv preprint arXiv:1706.00687,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz" ], "title": "Online learning and online convex optimization", "venue": "Foundations and trends in Machine Learning,", "year": 2011 }, { "authors": [ "Ohad Shamir" ], "title": "Distribution-specific hardness of learning neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Mahdi Soltanolkotabi", "Adel Javanmard", "Jason D Lee" ], "title": "Theoretical insights into the optimization landscape of over-parameterized shallow neural networks", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Gregor Urban", "Krzysztof J Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shengjie Wang", "Rich Caruana", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson" ], "title": "Do deep convolutional nets really need to be deep and convolutional", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yuchen Zhang", "Percy Liang", "Martin J Wainwright" ], "title": "Convexified convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks (LeCun et al., 1998; Krizhevsky et al., 2012) achieve state-of-the-art performance on every possible task in computer vision. However, while the empirical success of convolutional networks is indisputable, the advantage of using them is not well understood from a theoretical perspective. Specifically, we consider the following fundamental question:\nWhy do convolutional networks (CNNs) perform better than fully-connected networks (FCNs)?\nClearly, when considering expressive power, FCNs have a big advantage. Since convolution is a linear operation, any CNN can be expressed using a FCN, whereas FCNs can express a strictly larger family of functions. So, any advantage of CNNs due to expressivity can be leveraged by FCNs as well. Therefore, expressive power does not explain the superiority of CNNs over FCNs.\nThere are several possible explanations to the superiority of CNNs over FCNs: parameter efficiency (and hence lower sample complexity), weight sharing, and locality prior. The main result of this paper is arguing that locality is a key factor by proving a computational separation between CNNs and FCNs based on locality. But, before that, let’s discuss the other possible explanations.\nFirst, we observe that CNNs seem to be much more efficient in utilizing their parameters. A FCN needs to use a greater number of parameters compared to an equivalent CNN: each neuron of a CNN is limited to a small receptive field, and moreover, many of the parameters of the CNN are shared. From classical results in learning theory, using a large number of param-\neters may result in inferior generalization. So, can the advantage of CNNs be explained simply by counting parameters?\nTo answer this question, we observe the performance of CNN and FCN based architecture of various widths and depths trained on the CIFAR-10 dataset. For each architecture, we observe the final test accuracy versus the number of trainable parameters. The results are shown in Figure 1. As can be seen, CNNs have a clear advantage over FCNs, regardless of the number of parameters used. As is often observed, a large number of parameters does not hurt the performance of neural networks, and so parameter efficiency cannot explain the advantage of CNNs. This is in line with various theoretical works on optimization of neural networks, which show that over-parameterization is beneficial for convergence of gradient-descent (e.g., Du et al. (2018); Soltanolkotabi et al. (2018); Li & Liang (2018)).\nThe superiority of CNNs can be also attributed to the extensive weight sharing between the different convolutional filters. Indeed, it has been previously shown that weight sharing is important for the optimization of neural networks (Shalev-Shwartz et al., 2017b). Moreover, the translation-invariant nature of CNNs, which relies on weight sharing, is often observed to be beneficial in various signal processing tasks (Kauderer-Abrams, 2017; Kayhan & Gemert, 2020). So, how much does the weight sharing contribute to the superiority of CNNs over FCNs?\nTo understand the effect of weight sharing on the behavior of CNNs, it is useful to study locallyconnected network (LCN) architectures, which are similar to CNNs, but have no weight sharing between the kernels of the network. While CNNs are far more popular in practice (also due to the fact that they are much more efficient in terms of model size), LCNs have also been used in different contexts (e.g., Bruna et al. (2013); Chen et al. 
(2015); Liu et al. (2020)). It has been recently observed that in some cases, the performance of LCNs is on par with CNNs (Neyshabur, 2020). So, even if weight sharing explains some of the advantage of CNNs, it clearly doesn’t tell the whole story.\nFinally, a key property of CNN architectures is their strong utilization of locality in the data. Each neuron in a CNN is limited to a local receptive field of the input, hence encoding a strong locality bias. In this work we demonstrate how CNNs can leverage the local structure of the input, giving them a clear advantage in terms of computational complexity. Our results hint that locality is the principal property that explains the advantage of using CNNs.\nOur main result is a computational separation result between CNNs and FCNs. To show this result, we introduce a family of functions that have a very strong local structure, which we call k-patterns. A k-pattern is a function that is determined by k consecutive bits of the input. We show that for inputs of n bits, when the target function is a (log n)-pattern, training a CNN of polynomial size with gradient-descent achieves small error in polynomial time. However, gradient-descent will fail to learn (log n)-patterns, when training a FCN of polynomial-size." }, { "heading": "1.1 RELATED WORK", "text": "It has been empirically observed that CNN architectures perform much better than FCNs on computer vision tasks, such as digit recognition and image classification (e.g., Urban et al. (2017); Driss et al. (2017)). While some works have applied various techniques to improve the performance of FCNs (Lin et al. (2015); Fernando et al. (2016); Neyshabur (2020)), there is still a gap between performance of CNNs and FCNs, where the former give very good performance “out-of-the-box”. The focus of this work is to understand, from a theoretical perspective, why CNNs give superior performance when trained on input with strong local structure.\nVarious theoretical works show the advantage of architectures that leverage local and hierarchical structure. The work of Poggio et al. (2015) shows the advantage of using deep hierarchical models over wide and shallow functions. These results are extended in Poggio et al. (2017), showing an exponential gap between deep and shallow networks, when approximating locally compositional functions. The works of Mossel (2016); Malach & Shalev-Shwartz (2018) study learnability of deep hierarchical models. The work of Cohen et al. (2017) analyzes the expressive efficiency of convolutional networks via hierarchical tensor decomposition. While all these works show that indeed CNNs powerful due to their hierarchical nature and the efficiency of utilizing local structure, they do not explain why these models are superior to fully-connected models.\nThere are a few works that provide a theoretical analysis of CNN optimization. The works of Brutzkus & Globerson (2017); Du et al. (2018) show that gradient-descent can learn a shallow CNN with a single filter, under various distributional assumptions. The work of Zhang et al. (2017)\nshows learnability of a convex relaxation of convolutional networks. While these works focus on computational properties of learning CNNs, as we do in this work, they do not compare CNNs to FCNs, but focus only on the behavior of CNNs. The works of Cohen & Shashua (2016); Novak et al. (2018) study the implicit bias of simplified CNN models. 
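To make the k-pattern class concrete, a target can be sampled as in the sketch below: choose a window start j* and a random Boolean function g on k bits, and label every input by g applied to its k consecutive bits starting at j*. A small NumPy illustration, consistent with the description above; the names are ours.

```python
import numpy as np

def sample_k_pattern(n, k, seed=0):
    """Sample a k-pattern f(x) = g(x_{j*}, ..., x_{j*+k-1}) over x in {-1, +1}^n."""
    rng = np.random.default_rng(seed)
    j_star = rng.integers(0, n - k + 1)           # location of the local pattern
    g_table = rng.choice([-1, 1], size=2 ** k)    # random g: {+-1}^k -> {+-1}

    def f(x):
        window = (np.asarray(x[j_star:j_star + k]) + 1) // 2   # +-1 bits -> 0/1
        index = int((window * (2 ** np.arange(k))).sum())
        return int(g_table[index])

    return f

# Example: label uniform +-1 inputs of length n = 128 with a 7-pattern (k ~ log n).
# f = sample_k_pattern(n=128, k=7)
# x = np.random.default_rng(1).choice([-1, 1], size=128)
# y = f(x)
```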
However, these result are focused on generalization properties of CNNs, and not on computational efficiency of the optimization." }, { "heading": "2 DEFINITIONS AND NOTATIONS", "text": "Let X = {±1}n be our instance space, and let Y = {±1} be the label space. Throughout the paper, we focus on learning a binary classification problem using the hinge-loss: `(ŷ, y) = max{1 yŷ, 0}. Given some distribution D over X , some target function f : X ! Y and some hypothesis h : X ! Y , we define the loss of h with respect to f on the distribution D by:\nLf,D(h) = E x⇠D [`(h(x), f(x))]\nThe goal of a supervised learning algorithm is, given access to examples sampled from D and labeled by f , to find a hypothesis h that minimizes Lf,D(h). We focus on the gradient-descent (GD) algorithm: given some parametric hypothesis class H = {hw : w 2 Rq}, gradient-descent starts with some (randomly initialized) hypothesis hw(0) and, for some learning rate ⌘ > 0, updates:\nw(t) = w(t 1) ⌘rwLf,D(hw(t 1))\nWe compare the behavior of gradient-descent, when learning two possible neural network architectures: a convolutional network (CNN) and a fully-connected network (FCN). Definition 1. A convolutional network hu,W,b is defined as follows:\nhu,W,b(x) = n kX\nj=1\nD u(j), (Wxj...j+k 1 + b) E\nfor activation function , with kernel W 2 Rq⇥k, bias b 2 Rq and readout layer u(1), . . . ,u(n) 2 Rq . Note that this is a standard depth-2 CNN with kernel k, stride 1 and q filters. Definition 2. A fully-connected network hu,w,b is defined as follows:\nhu,w,b(x) = qX\ni=1\nui\n⇣D w(i),x E + bi ⌘\nfor activation function , first layer w(1), . . . ,w(q) 2 Rn, bias b 2 Rq and second layer u 2 Rq .\nWe demonstrate the advantage of CNNs over FCNs by observing a problem that can be learned using CNNs, but is hard to learn using FCNs. We call this problem the k-pattern problem: Definition 3. A function f : X ! Y is a k-pattern, if for some g : {±1}k ! Y and index j⇤:\nf(x) = g(xj⇤...j⇤+k 1)\nNamely, a k-pattern is a function that depends only on a small pattern of consecutive bits of the input. The k-pattern problem is the problem of learning k-patterns: for some k-pattern f and some distribution D over X , given access to D labeled by f , find a hypothesis h with Lf,D(h) ✏. We note that a similar problem has been studied in Golovnev et al. (2017), providing results on PAC learnability of a related target class.\n3 CNNS EFFICIENTLY LEARN (log n)-PATTERNS\nThe main result in this section shows that gradient-descent can learn k-patterns when training convolutional networks for poly(2k, n) iterations, and when the network has poly(2k, n) neurons:\nTheorem 4. Assume we uniformly initialize W (0) ⇠ {±1/k}q⇥k, bi = 1/k 1 and u(0,j) = 0 for every j. Assume the activation satisfies | | c, | 0 | 1, for some constant c. Fix some > 0, some k-pattern f and some distribution D over X . Then, if q > 2k+3 log(2k/ ), with probability at least 1 over the initialization, when training a convolutional network hu,W,b using gradient descent with ⌘ = p np qT we have:\n1\nT\nTX\nt=1\nLf,D(hu(t),W (t),b) 2cn2k22k\nq + 2(2kk)2 p qn + c 2 n 1.5p q T\nBefore we prove the theorem, observe that the above immediately implies that when k = O(log n), gradient-descent can efficiently learn to solve the k-pattern problem, when training a CNN: Corollary 5. Let k = O(log n). Then, running GD on a CNN with q = O(✏ 2n3 log2 n) neurons for T = O(✏ 2n3 log n) iterations, using a sample S ⇠ D of size O(✏ 2nkq log(nkq/ )), learns the k-pattern problem up to accuracy ✏ w.p. 
1 .\nProof. Sample S ⇠ D, and let bD be the uniform distribution over S. Then, from Theorem 4 and the choice of q and T there exists t 2 [T ] with L\nf, bD(hu(t),W (t),b) ✏/2, i.e. GD finds a hypothesis with train loss at most ✏/2. Now, using the fact the VC dimension of depth-2 ReLU networks with W weights is O(W logW ) (see Bartlett et al. (2019)), we can bound the generalization gap by ✏/2.\nTo prove Theorem 4, we show that, for a large enough CNN, the k-pattern problem becomes linearly separable, after applying the first layer of the randomly initialized CNN: Lemma 6. Assume we uniformly initialize W ⇠ {±1/k}q⇥k and bi = 1/k 1. Fix some > 0. Then if q > 2k+3 log(2k/ ), w.p. 1 over the choice of W , for every k-pattern f there exist u⇤(1), . . . ,u⇤(n k) 2 Rq with u⇤(j⇤) 2 k+1\nkp q and u⇤(j) = 0 for j 6= j⇤, s.t. hu⇤,W,b = f(x).\nProof. Fix some z 2 {±1}k, then for every w(i) ⇠ {±1/k}k, we have: P ⇥ sign(w(i)) = z ⇤ = 2 k.\nDenote by Jz ✓ [q] the subset of indexes satisfying signw(i) = z, for every i 2 Jz, and note that EW |Jz| = q2 k. From Chernoff bound:\nP ⇥ |Jz| q2 k /2 ⇤ e q2 k/8 2 k\nby choosing q > 2k+3 log(2k/ ). So, using the union bound, w.p. at least 1 , for every z 2 {±1}k we have |Jz| q2 k 1. By the choice of bi we have ( ⌦ w(i), z ↵ + bi) = (1/k)1{signw(i) = z}.\nNow, fix some k-pattern f , where f(x) = g(xj⇤,...,j⇤+k 1). For every i 2 Jz we choose u ⇤(j⇤) i\n= k\n|Jz|g(z) and u ⇤(j) = 0 for every j 6= j⇤. Therefore, we get:\nhu⇤,W,b(x) = n kX\nj=1\nD u⇤(j), (Wxj...j+k 1 + b) E = X\nz2{±1}k i2Jz\nu⇤(j ⇤)\ni\n⇣D w(i),xj⇤...j⇤+k 1 E + bi ⌘\n= X\nz2{±1}k 1{z = xj⇤...j⇤+k 1}g(z) = g(xj⇤...j⇤+k 1) = f(x)\nNote that by definition of u⇤(j ⇤) we have u⇤(j⇤) 2 = P z2{±1}k P i2Jz k 2 |Jz|2 4 (2 k k)2 q .\nComment 7. Admittedly, the initialization assumed above is non-standard, but is favorable for the analysis. A similar result can be shown for more natural initialization (e.g., normal distribution), using known results from random features analysis (for example, Bresler & Nagaraj (2020)).\nFrom Lemma 6 and known results on learning linear classifiers with gradient-descent, solving the k-pattern problem can be achieved by optimizing the second layer of a randomly initialized CNN. However, since in gradient-descent we optimize both layers of the network, we need a more refined analysis to show that full gradient-descent learns to solve the problem. We follow the scheme introduced in Daniely (2017), adapting it our setting.\nWe start by showing that the first layer of the network does not deviate from the initialization during the training: Lemma 8. We have u(T,j) ⌘Tpq for all j 2 [n k], and W (0) W (T ) c⌘2T 2n p qk\nWe can now bound the difference in the loss when the weights of the first layer change during the training process: Lemma 9. For every u⇤ we have:\nLf,D(hu⇤,W (T ),b) Lf,D(hu⇤,W (0),b) c⌘2T 2nkpq\nn kX\nj=1\nu⇤(j)\nThe proofs of Lemma 8 and Lemma 9 are shown in the appendix.\nFinally, we use the following result on the convergence of online gradient-descent to show that gradient-descent converges to a good solution. The proof of the Theorem is given in Shalev-Shwartz et al. (2011), with an adaptation to a similar setting in Daniely & Malach (2020). Theorem 10. (Online Gradient Descent) Fix some ⌘, and let f1, . . . , fT be some sequence of convex functions. Fix some ✓1, and update ✓t+1 = ✓t ⌘rft(✓t). 
Then for every ✓⇤ the following holds:\n1\nT\nTX\nt=1\nft(✓t) 1\nT\nTX\nt=1\nft(✓ ⇤) +\n1\n2⌘T k✓\n⇤ k 2 + k✓1k\n1\nT\nTX\nt=1\nkrft(✓t)k+ ⌘ 1\nT\nTX\nt=1\nkrft(✓t)k 2\nProof of Theorem 4. From Lemma 6, with probability at least 1 over the initialization, there exist u⇤(1), . . . ,u⇤(n k) 2 Rq with u⇤(1) 2 k+1 kp\nq and\nu⇤(j) = 0 for j > 1 such that\nhu⇤,W (0),b(x) = f(x), and so Lf,D(hu⇤,W (0),b) = 0. Using Theorem 10, since Lf,D(hu,W,b) is convex with respect to u, we have:\n1\nT\nTX\nt=1\nLf,D(hu(t),W (t),b)\n 1\nT\nTX\nt=1\nLf,D(hu⇤,W (t),b) + 1\n2⌘T\nn kX\nj=1\nu⇤(j) 2 + ⌘ 1\nT\nTX\nt=1\n@\n@u Lf,D(fu(t),W (t),b)\n2\n 1\nT\nTX\nt=1\nLf,D(hu⇤,W (t),b) + 2(2kk)2\nq⌘T + c2⌘nq = (⇤)\nUsing Lemma 9 we have:\n(⇤) 1\nT\nTX\nt=1\nLf,D(hu⇤,W (0),b) + c⌘ 2 T 2 nk\np q\nn kX\nj=1\nu⇤(j) +\n2(2kk)2\nq⌘T + c2⌘nq\n 2c⌘2T 2nk22k + 2(2kk)2\nq⌘T + c2⌘nq\nNow, choosing ⌘ = p np qT we get the required." }, { "heading": "3.1 ANALYSIS OF LOCALLY-CONNECTED NETWORKS", "text": "The above result shows that polynomial-size CNNs can learn (log n)-patterns in polynomial time. As discussed in the introduction, the success of CNNs can be attributed to either the weight sharing\nor the locality-bias of the architecture. While weight sharing may contribute to the success of CNNs in some cases, we note that it gives no benefit when learning k-patterns. Indeed, we can show a similar positive result for locally-connected networks (LCN), which have no weight sharing.\nObserve the following definition of a LCN with one hidden-layer: Definition 11. A locally-connected network hu,w,b is defined as follows:\nhu,W,b(x) = n kX\nj=1\nD u(j), (W (j)xj...j+k 1 + b (j)) E\nfor some activation function , with W (1) , . . . ,W (q) 2 Rq⇥k, bias b(1), . . . ,b(q) 2 Rq and readout layer u(1), . . . ,u(n) 2 Rq .\nNote that the only difference from Definition 1 is the fact that the weights of the first layer are not shared. It is easy to verify that Theorem 4 can be modified in order to show a similar positive result for LCN architectures. Specifically, we note that in Lemma 6, which is the core of the Theorem, we do not use the fact that the weights in the first layer are shared. So, LCNs are “as good as” CNNs for solving the k-pattern problem. This of course does not resolve the question of comparing between LCN and CNN architectures, which we leave for future work.\n4 LEARNING (log n)-PATTERNS WITH FCN\nIn the previous section we showed that patterns of size log n are efficiently learnable, when using CNNs trained with gradient-descent. In this section we show that, in contrast, gradient-descent fails to learn (log n)-patterns using fully-connected networks, unless the size of the network is superpolynomial (namely, unless the network is of size n⌦(logn)). For this, we will show an instance of the k-pattern problem that is hard for fully connected networks. We take D to be the uniform distribution over X , and let f(x) = Q\ni2I xi, where I is some set of k consecutive bits. Specifically, we take I = {1, . . . , k}, although the same proof holds for any choice of I . In this case, we show that the initial gradient of the network is very small, when a fully-connected network is initialized from a permutation invariant distribution. Theorem 12. Assume | | c, | 0| 1. Let W be some permutation invariant distribution over Rn, and assume we initialize w(1), . . . ,w(q) ⇠ W and initialize u such that |ui| 1 and for all x we have hu,w(x) 2 [ 1, 1]. 
Then, the following holds:\n• Ew⇠W @ @W Lf,D(hu,w,b) 2 2 qn ·min n n 1 k 1 , n 1 k 1 1o\n• Ew⇠W @ @uLf,D(hu,w,b) 2 2 c 2 q n k 1\nFrom the above result, if k = ⌦(log n) then the average norm of initial gradient is qn ⌦(logn). Therefore, unless q = n⌦(logn), we get that with overwhelming probability over the randomness of the initialization, the gradient is extremely small. In fact, if we run GD on a finite-precision machine, the true population gradient is effectively zero. A formal argument relating such bound on the gradient norm to the failure of gradient-based algorithms has been shown in various previous works (e.g. Shamir (2018); Abbe & Sandon (2018); Malach & Shalev-Shwartz (2020)).\nThe key for proving Theorem 12 is the following observation: since the first layer of the FCN is initialized from a symmetric distribution, we observe that if learning some function that relies on k bits of the input is hard, then learning any function that relies on k bits is hard. Using Fourier analysis (e.g., Blum et al. (1994); Kearns (1998); Shalev-Shwartz et al. (2017a)), we can show that learning k-parities (functions of the form x 7! Q i2I xi) using gradient-descent is hard. Since an arbitrary k-parity is hard, then any k-parity, and specifically a parity of k consecutive bits, is also hard. That is, since the first layer is initialized symmetrically, training a FCN on the original input is equivalent to training a FCN on an input where all the input bits are randomly permuted. So, for a FCN, learning a function that depends on consecutive bits is just as hard as learning a function that depends on arbitrary bits (a task that is known to be hard).\nProof of Theorem 12. Denote I0 = Q\ni2I0 xi, so f(x) = I with I = {1, . . . , k}. We begin by calculating the gradient w.r.p. to w(i)\nj :\n@\n@w(i) j\nLf,D(hu,w,b) = E D\n\" @\n@w(i) j\n`(hu,w,b(x), f(x))\n# = E\nD\nh xjui 0 ⇣D w(i),x E + bi ⌘ I(x) i\nFix some permutation ⇡ : [n] ! [n]. For some vector x 2 Rn we denote ⇡(x) = (x⇡(1), . . . , x⇡(n)), for some subset I ✓ [n] we denote ⇡(I) = [j2I{⇡(j)}. Notice that we have for all x, z 2 Rn: I(⇡(x)) = ⇡(I) and h⇡(x), zi = ⌦ x,⇡ 1(z)\n↵ . Denote ⇡(hu,w,b)(x) =P\nk i=1 ui ( ⌦ ⇡(w(i)),x ↵ + bi). Denote ⇡(D) the distribution of ⇡(x) where x ⇠ D. Notice that since D is the uniform distribution, we have ⇡(D) = D. From all the above, for every permutation ⇡ with ⇡(j) = j we have:\n@\n@w(i) j\nL ⇡(I),D(hu,w,b) = Ex⇠D\nh xjui 0 ⇣D w(i),x E + bi ⌘ ⇡(I)(x) i\n= E x⇠⇡(D)\nh xjui 0 ⇣D w(i),⇡ 1(x) E + bi ⌘ I(x) i\n= E x⇠D\nh xjui 0 ⇣D ⇡(w(i)),x E + bi ⌘ I(x) i =\n@\n@w(i) j\nL I ,D(⇡(hu,w,b))\nFix some I ✓ [n] with |I| = k and j 2 [n]. Now, let Sj be a set of permutations satisfying:\n1. For all ⇡1,⇡2 2 Sj with ⇡1 6= ⇡2 we have ⇡1(I) 6= ⇡2(I).\n2. For all ⇡ 2 Sj we have ⇡(j) = j.\nNote that if j /2 I then the maximal size of such Sj is n 1 k , and if j 2 I then the maximal size is n 1 k 1 . Denote gj(x) = xjui 0( ⌦ w(i),x ↵ + bi). We denote the inner-product h , iD =\nEx⇠D [ (x) (x)] and the induced norm k kD = p h , iD. Since { I0}I0✓[n] is an orthonormal basis w.r.p. 
to $\langle \cdot, \cdot \rangle_{\mathcal{D}}$, from Parseval's equality we have:\n$$\sum_{\pi \in S_j} \left( \frac{\partial}{\partial w^{(i)}_j} L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b})) \right)^2 = \sum_{\pi \in S_j} \left( \frac{\partial}{\partial w^{(i)}_j} L_{\chi_{\pi(I)},\mathcal{D}}(h_{u,w,b}) \right)^2 = \sum_{\pi \in S_j} \langle g_j, \chi_{\pi(I)} \rangle^2_{\mathcal{D}} \le \sum_{I' \subseteq [n]} \langle g_j, \chi_{I'} \rangle^2_{\mathcal{D}} = \|g_j\|^2_{\mathcal{D}} \le 1.$$\nSo, from the above we get that, taking $S_j$ of maximal size:\n$$\mathbb{E}_{\pi \sim S_j} \left( \frac{\partial}{\partial w^{(i)}_j} L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b})) \right)^2 \le |S_j|^{-1} \le \min\left\{ \binom{n-1}{k}^{-1}, \binom{n-1}{k-1}^{-1} \right\}.$$\nNow, for some permutation invariant distribution of weights $\mathcal{W}$ we have:\n$$\mathbb{E}_{w \sim \mathcal{W}} \left( \frac{\partial}{\partial w^{(i)}_j} L_{\chi_I,\mathcal{D}}(h_{u,w,b}) \right)^2 = \mathbb{E}_{w \sim \mathcal{W}} \, \mathbb{E}_{\pi \sim S_j} \left( \frac{\partial}{\partial w^{(i)}_j} L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b})) \right)^2 \le |S_j|^{-1}.$$\nSumming over all neurons we get:\n$$\mathbb{E}_{w \sim \mathcal{W}} \left\| \frac{\partial}{\partial W} L_{\chi_I,\mathcal{D}}(h_{u,w,b}) \right\|^2_2 \le qn \cdot \min\left\{ \binom{n-1}{k}^{-1}, \binom{n-1}{k-1}^{-1} \right\}.$$\nWe can use a similar argument to bound the gradient of $u$. We leave the details to the appendix.\n[Figure: test accuracy as a function of training epoch for FCN, CNN and LCN, for input sequences of length n = 5, 13 and 19.]" }, { "heading": "5 NEURAL ARCHITECTURE SEARCH", "text": "So far, we showed that while the (log n)-pattern problem can be solved efficiently using a CNN, this problem is hard for a FCN to solve. Since the CNN architecture is designed for processing consecutive patterns of the inputs, it can easily find the pattern that determines the label. The FCN, however, disregards the order of the input bits, and so it cannot benefit from the fact that the bits which determine the label are consecutive. In other words, the FCN architecture needs to learn the order of the bits, while the CNN already encodes this order in the architecture.\nSo, a FCN fails to recover the k-pattern since it does not assume anything about the order of the input bits. But is it possible to recover the order of the bits prior to training the network? Can we apply some algorithm that searches for an optimal architecture to solve the k-pattern problem? Such motivation stands behind the thriving research field of Neural Architecture Search algorithms (see Elsken et al. (2018) for a survey).\nUnfortunately, we claim that if the order of the bits is not known to the learner, no architecture search algorithm can help in solving the k-pattern problem. To see this, it is enough to observe that when the order of the bits is unknown, the k-pattern problem is equivalent to the k-Junta problem: learning a function that depends on an arbitrary (not necessarily consecutive) set of k bits from the input. Learning k-Juntas is a well-studied problem in the literature of learning theory (e.g., Mossel et al. (2003)). The best algorithm for solving the (log n)-Junta problem runs in time $n^{O(\log n)}$, and no poly-time algorithm is known for solving this problem. Moreover, if we consider statistical-query algorithms (a wide family of algorithms that only have access to estimates of query functions on the distribution, e.g., Blum et al. (2003)), then existing lower bounds show that the (log n)-Junta problem cannot be solved in polynomial time (Blum et al., 1994)." }, { "heading": "6 EXPERIMENTS", "text": "In the previous sections we showed a simplistic learning problem that can be solved using CNNs and LCNs, but is hard to solve using FCNs. In this problem, the label is determined by a few consecutive bits of the input. In this section we show some experiments that validate our theoretical results. In these experiments, the input to the network is a sequence of n MNIST digits, where each digit is scaled and cropped to a size of 24 × 8. We then train three different network architectures: FCN, CNN and LCN. 
The CNN and LCN architectures have kernels of size 24 ⇥ 24, so that 3 MNIST digits fit in a single kernel. In all the architectures we use a single hidden-layer with 1024 neurons, and ReLU activation. The networks are trained with AdaDelta optimizer for 30 epochs 1.\n1In each epoch we randomly shuffle the sequence of the digits.\nIn the first experiment, the label of the example is set to be the parity of the sum of the 3 consecutive digits located in the middle of the sequence. So, as in our theoretical analysis, the label is determined by a small area of consecutive bits of the input. Figure 3 shows the results of this experiment. As can be clearly seen, the CNN and LCN architectures achieve good performance regardless of the choice of n, where the performance of the FCN architectures critically degrades for larger n, achieving only chance-level performance when n = 19. We also observe that LCN has a clear advantage over CNN in this task. As noted, our primary focus is on demonstrating the superiority of locality-based architectures, such as CNN and LCN, and we leave the comparison between the two to future work.\nOur second experiment is very similar to the first, but instead of taking the label to be the parity of 3 consecutive digits, we calculate the label based on 3 digits that are far apart. Namely, we take the parity of the first, middle and last digits of the sequence. The results of this experiment are shown in Figure 4. As can be seen, for small n, FCN performs much better than CNN and LCN. This demonstrates that when we break the local structure, the advantage of CNN and LCN disappears, and using FCN becomes a better choice. However, for large n, all architectures perform poorly.\nAcknowledgements: This research is supported by the European Research Council (TheoryDL project). We thank Tomaso Poggio for raising the main question tackled in this paper and for valuable discussion and comments" } ]
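Illustrative note for the record above (the CNN vs. FCN separation paper): the k-pattern target of Definition 3 and the depth-2 CNN of Definition 1 are only given in prose and math, so the following minimal PyTorch sketch instantiates both. It is written for this summary, not taken from the authors' code; the concrete values of n, k, q, the ReLU activation and the hidden pattern position j_star are arbitrary choices.

```python
# Sketch of the k-pattern problem (Definition 3) and a depth-2 CNN (Definition 1).
import torch
import torch.nn as nn

n, k, q = 32, 3, 64        # input length, pattern size, number of filters
j_star = 10                 # position of the pattern (unknown to the learner)

def k_pattern_parity(x):
    """Label = parity (product) of the k consecutive +/-1 bits starting at j_star."""
    return x[:, j_star:j_star + k].prod(dim=1)

class DepthTwoCNN(nn.Module):
    """h(x) = sum_j <u^(j), sigma(W x_{j..j+k-1} + b)> with kernel k, stride 1, q filters."""
    def __init__(self, n, k, q):
        super().__init__()
        self.conv = nn.Conv1d(1, q, kernel_size=k, stride=1)       # shared kernel W, bias b
        self.readout = nn.Linear(q * (n - k + 1), 1, bias=False)   # per-position readout u^(j)
    def forward(self, x):
        z = torch.relu(self.conv(x.unsqueeze(1)))   # (batch, q, n - k + 1)
        return self.readout(z.flatten(1)).squeeze(-1)

x = torch.randint(0, 2, (128, n)).float() * 2 - 1   # uniform +/-1 inputs
y = k_pattern_parity(x)
model = DepthTwoCNN(n, k, q)
loss = torch.clamp(1 - y * model(x), min=0).mean()  # hinge loss, as in the paper
loss.backward()                                     # one gradient-descent step would follow
```

A fully-connected network for the same task would replace the shared k-wide kernel with dense first-layer weights over all n bits, which is exactly the regime in which Theorem 12 above bounds the initial gradient.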
2021
null
SP:b7b4e29defc84ee37a5a4dcaf2d393363c153b52
[ "This paper studies short, chaotic time series and uses Takens' theorem to discover causal relations between two time series. The main challenge is that for short time series, the delay embedding is not possible. Thus, the authors propose to fit a latent neural ODE and theoretically argue that they can use the Neural ODE embeddings in place of the delay maps. The authors provide two sets of experiments, both on simulated data. Unfortunately, they never tested the algorithm on real data." ]
Discovering causal structures of temporal processes is a major tool of scientific inquiry because it helps us better understand and explain the mechanisms driving a phenomenon of interest, thereby facilitating analysis, reasoning, and synthesis for such systems. However, accurately inferring causal structures within a phenomenon based on observational data only is still an open problem. Indeed, this type of data usually consists in short time series with missing or noisy values for which causal inference is increasingly difficult. In this work, we propose a method to uncover causal relations in chaotic dynamical systems from short, noisy and sporadic time series (that is, incomplete observations at infrequent and irregular intervals) where the classical convergent cross mapping (CCM) fails. Our method works by learning a Neural ODE latent process modeling the state-space dynamics of the time series and by checking the existence of a continuous map between the resulting processes. We provide theoretical analysis and show empirically that Latent-CCM can reliably uncover the true causal pattern, unlike traditional methods.
[ { "affiliations": [], "name": "Edward De Brouwer" }, { "affiliations": [], "name": "Adam Arany" }, { "affiliations": [], "name": "Yves Moreau" } ]
[ { "authors": [ "Mohammad Taha Bahadori", "Yan Liu" ], "title": "Granger causality analysis in irregular time series", "venue": "In Proceedings of the 2012 SIAM International Conference on Data Mining,", "year": 2012 }, { "authors": [ "Zsigmond Benkő", "Adám Zlatniczki", "Dániel Fabó", "András Sólyom", "Loránd Erőss", "András Telcs", "Zoltán Somogyvári" ], "title": "Complete inference of causal relations in dynamical systems", "venue": "arXiv preprint arXiv:1808.10806,", "year": 2018 }, { "authors": [ "Edwin V Bonilla", "Kian M Chai", "Christopher Williams" ], "title": "Multi-task gaussian process prediction", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yonghong Chen", "Govindan Rangarajan", "Jianfeng Feng", "Mingzhou Ding" ], "title": "Analyzing multiple nonlinear time series with extended granger causality", "venue": "Physics letters A,", "year": 2004 }, { "authors": [ "Adam Thomas Clark", "Hao Ye", "Forest Isbell", "Ethan R Deyle", "Jane Cowles", "G David Tilman", "George Sugihara" ], "title": "Spatial convergent cross mapping to detect causal relationships from short time series", "venue": null, "year": 2015 }, { "authors": [ "Juan C Cuevas-Tello", "Peter Tiňo", "Somak Raychaudhury", "Xin Yao", "Markus Harva" ], "title": "Uncovering delayed patterns in noisy and irregularly sampled time series: an astronomy application", "venue": "Pattern Recognition,", "year": 2010 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "Gru-ode-bayes: Continuous modeling of sporadically-observed time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Heba Elsegai" ], "title": "Granger-causality inference in the presence of gaps: An equidistant missing-data problem for non-synchronous recorded time series data", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2019 }, { "authors": [ "Clive WJ Granger" ], "title": "Investigating causal relations by econometric models and cross-spectral methods", "venue": "Econometrica: journal of the Econometric Society,", "year": 1969 }, { "authors": [ "Yu Huang", "Zuntao Fu", "Christian LE Franzke" ], "title": "Detecting causality from time series in a machine learning framework", "venue": "Chaos: An Interdisciplinary Journal of Nonlinear Science,", "year": 2020 }, { "authors": [ "Aapo Hyvärinen", "Kun Zhang", "Shohei Shimizu", "Patrik O Hoyer" ], "title": "Estimation of a structural vector autoregression model using non-gaussianity", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Huanfei Ma", "Kazuyuki Aihara", "Luonan Chen" ], "title": "Detecting causality from nonlinear dynamics with short-term time series", "venue": "Scientific reports,", "year": 2014 }, { "authors": [ "Alexander G. de G. Matthews", "Mark van der Wilk", "Tom Nickson", "Keisuke. 
Fujii", "Alexis Boukouvalas", "Pablo León-Villagrá", "Zoubin Ghahramani", "James Hensman" ], "title": "GPflow: A Gaussian process library using TensorFlow", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Alard Roebroeck", "Elia Formisano", "Rainer Goebel" ], "title": "Mapping directed influence over the brain using granger causality and fmri", "venue": null, "year": 2005 }, { "authors": [ "Nikolai F Rulkov", "Mikhail M Sushchik", "Lev S Tsimring", "Henry DI Abarbanel" ], "title": "Generalized synchronization of chaos in directionally coupled chaotic systems", "venue": "Physical Review E,", "year": 1995 }, { "authors": [ "Jakob Runge", "Peer Nowack", "Marlene Kretschmer", "Seth Flaxman", "Dino Sejdinovic" ], "title": "Detecting and quantifying causal associations in large nonlinear time series datasets", "venue": "Science Advances,", "year": 2019 }, { "authors": [ "L Schiatti", "Giandomenico Nollo", "G Rossato", "Luca Faes" ], "title": "Extended granger causality: a new tool to identify the structure of physiological networks", "venue": "Physiological measurement,", "year": 2015 }, { "authors": [ "Karin Schiecke", "Britta Pester", "Martha Feucht", "Lutz Leistritz", "Herbert Witte" ], "title": "Convergent cross mapping: Basic concept, influence of estimation parameters and practical application", "venue": "In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),", "year": 2015 }, { "authors": [ "Shohei Shimizu", "Patrik O Hoyer", "Aapo Hyvärinen", "Antti Kerminen" ], "title": "A linear non-gaussian acyclic model for causal discovery", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "George Sugihara", "Robert May", "Hao Ye", "Chih-hao Hsieh", "Ethan Deyle", "Michael Fogarty", "Stephan Munch" ], "title": "Detecting causality in complex", "venue": "ecosystems. science,", "year": 2012 }, { "authors": [ "Floris Takens" ], "title": "Detecting strange attractors in turbulence", "venue": "Dynamical systems and turbulence,", "year": 1980 }, { "authors": [ "David J Thomson" ], "title": "Time series analysis of holocene climate data", "venue": "Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences,", "year": 1990 }, { "authors": [ "Yunqian Wang", "Jing Yang", "Yaning Chen", "Philippe De Maeyer", "Zhi Li", "Weili Duan" ], "title": "Detecting the causal effect of soil moisture on precipitation using convergent cross mapping", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Hao Ye", "Ethan R Deyle", "Luis J Gilarranz", "George Sugihara" ], "title": "Distinguishing time-delayed causal interactions using convergent cross mapping", "venue": "Scientific reports,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Inferring a right causal model of a physical phenomenon is at the heart of scientific inquiry. It is fundamental to how we understand the world around us and to predict the impact of future interventions (Pearl, 2009). Correctly inferring causal pathways helps us reason about a physical system, anticipate its behavior in previously unseen conditions, design changes to achieve some objective, or synthesize new systems with desirable behaviors. As an example, in medicine, causality inference could allow predicting whether a drug will be effective for a specific patient, or in climatology, to assess human activity as a causal factor in climate change. Causal mechanisms are best uncovered by making use of interventions because this framework leads to an intuitive and robust notion of causality. However, there is a significant need to identify causal dependencies when only observational data is available, because such data is more readily available as it is more practical and less costly to collect (e.g., relying on observational studies when interventional clinical trials are not yet available).\nHowever, real-world data arising from less controlled environment than, for instance, clinical trials poses many challenges for analysis. Confounding and selection bias come into play, which bias standard statistical estimators. If no intervention is possible, some causal configurations cannot be identified. Importantly, with real-world data comes the major issue of missing values. In particular, when collecting longitudinal data, the resulting time series are often sporadic: sampling is irregular\n⇤Both authors contributed equally †Corresponding author\nin time and across dimensions leading to varying time intervals between observations of a given variable and typically multiple missing observations at any given time. This problem is ubiquitous in various fields, such as healthcare (De Brouwer et al., 2019), climate science (Thomson, 1990), or astronomy (Cuevas-Tello et al., 2010).\nA key problem in causal inference is to assess whether one temporal variable is causing another or is merely correlated with it. From assessing causal pathways for neural activity (Roebroeck et al., 2005) to ecology (Sugihara et al., 2012) or healthcare, it is a necessary step to unravel underlying generating mechanisms. A common way to infer causal direction between two temporal variables is to use Granger causality (Granger, 1969), which defines “predictive causality” in terms of the predictability of one time series from the other. A key requirement of Granger causality is then separability (i.e., that information about causes are not contained in the caused variable itself). This assumption holds in purely stochastic linear systems, but fails in more general cases (such as weakly coupled nonlinear dynamical systems) (Sugihara et al., 2012). To address this nonseparability issue, Sugihara et al. (Sugihara et al., 2012) introduced the Convergent Cross Mapping (CCM) method, which is based on the theory of chaotic dynamical systems, particularly on Takens’ theorem. This method has been applied successfully in various fields such as ecology, climatology (Wang et al., 2018), and neuroscience (Schiecke et al., 2015). However, as the method relies on embedding the time series under study with time lags, it is highly sensitive to missing values and usually requires long uninterrupted time series. 
This method is thus not applicable in settings with repeated short sporadic time series, despite their occurrence in many practical situations.\nTo address this important limitation, we propose to learn the causal dependencies between time series by checking the existence of convergent cross mappings between latent processes of those time series. Using a joint model across all segments of sporadically observed time series and forcing the model to learn the inherent dynamic of the data, we show that our method can detect causal relationship from short and sporadic time series, without computing delay embeddings. To learn a continuous time latent representation of the system’s state-space, we leverage GRU-ODE-Bayes (De Brouwer et al., 2019), a recently introduced filtering method that extends the Neural ODE model (Chen et al., 2018). Importantly for causal inference, the filtering nature of the model makes sure no future information can leak into the past. We then check the existence of continuous maps between the learnt latent representations and infer the causal direction accordingly.\nIn a series of increasingly challenging test cases, our method accurately detects the correct causal dependencies with high confidence, even when fed very few observations, and outperforms competing methods such as multi-spatial CCM or CCM with multivariate Gaussian process interpolation." }, { "heading": "2 RELATED WORK", "text": "CCM to address failure of Granger causality. Granger causality (Granger, 1969) provided the first significant framework to infer causal dependencies from time series. Relying on predictability between dynamical systems, it was extended to account for different limitations, such as nonlinearity (Chen et al., 2004) or instantaneous relationships (Schiatti et al., 2015). However, the assumption of separability of information between causative and caused variables leads to the failure of the Granger paradigm for a significant number of time series coupling scenarios (Sugihara et al., 2012) (see Appendix D for a revealing worked out example). Convergent Cross Mapping, a technique based on nonlinear state space reconstruction was introduced to tackle this issue (Sugihara et al., 2012). Recently, several works have proposed extensions of CCM, such as the extended CCM, to address issues such as synchrony (Ye et al., 2015) or to improve the discrimination of the confounding case (Benkő et al., 2018). Synchrony occurs when one time series can be expressed as a function of the other (e.g. Y (t) = (X(t)) and attractors of both dynamical systems become homeomorphic to each other (Rulkov et al., 1995). This occurs when coupling between two chaotic system is too strong. Confounding, on the other hand, occurs when two variables are causally driven by a third one. In general we say that X confounds the relation between Y and Z if X causes both Y and Z.\nHuang et al. (2020) also proposed to predict directly the driving time series from the driven one with reservoir computing, bypassing the delay embedding step, making it more robust to noise. However, those methods still require long regularly sampled time series.\nCausality for short or sporadic time series. Short time series are very common in practice and there has been some work proposing to learn causality from short time series relying on state space reconstruction. Ma et al. (2014) proposed a method for short, fully observed, unique time series. 
Multi-spatial CCM (Clark et al., 2015) considered the problem of inferring causality from several short, fully observed snippets of the same dynamical system by computing delay embeddings compatible with the lengths of the time series and aggregating them. In comparison, on top of addressing irregular sampling, our approach computes more informative state-space representations by sharing a model across all segments. Techniques to infer causal direction from incomplete time series have also been proposed, but all rely on the Granger causality framework, which limits their applicability to separable dynamical systems. They use direct partial correlations on regularly sampled data (but with missing values) (Elsegai, 2019) or generalizations of similarity measures for sporadic time series (Bahadori & Liu, 2012). To the best of our knowledge, this is the first work investigating the identification of causal dependencies from short sporadic time series using state-space reconstruction." }, { "heading": "3 METHOD", "text": "We consider the problem of inferring a causal dependency between two temporal variables from several segments of their multivariate time series $X[t] \in \mathbb{R}^{d_X}$ and $Y[t] \in \mathbb{R}^{d_Y}$. We assume that $X[t]$ and $Y[t]$ have been generated by an unknown dynamical system. In this work, we refer to the dynamical system of a time-varying variable $X$ as the smallest dynamical system that fully describes the dynamics of $X$. As an example, consider the following system of ODEs representing the dynamics of $X$ and $Y$:\n$$\frac{dX(t)}{dt} = f(X(t)), \qquad (1)$$\n$$\frac{dY(t)}{dt} = g(X(t)) + h(Y(t)). \qquad (2)$$\nThe dynamical system of $X$ is given by Equation (1). On the other hand, the dynamical system of $Y$ is given by Equations (1) and (2) together, as Equation (1) is required to describe the dynamics of $Y$.\nTo account for the more general and most frequent case, we consider that these time series are only observed in segments of finite duration. $X[t]$ and $Y[t]$ then consist of collections of $N$ short time series $(X^1[t], \ldots, X^N[t])$ and $(Y^1[t], \ldots, Y^N[t])$, respectively. Importantly, each segment of $X$ and $Y$ is observed concomitantly. To proceed with a lighter notation, we drop the superscript when referring to a segment of a time series.\nEach of these time series is also sporadic, namely, they are not regularly sampled and not all dimensions are observed at each sampling time.\nIn this work, we define the notion of causality by considering the equations of the dynamical system as a structural causal model. In this framework, $X$ causes $Y$ if $p(Y \mid do(X)) \neq p(Y)$, where $do(X)$ is an intervention on $X$ (Pearl, 2009). Then, if $X$ causes $Y$, $X$ is part of the dynamical system of $Y$ ($X$ is required to describe the dynamics of $Y$). In the case of the example described by Equations (1) and (2), $X$ causes $Y$ if $g(\cdot)$ is not a constant function (a small simulation sketch of this example system is given below)." }, { "heading": "3.1 CONVERGENT CROSS MAPPING AND TAKENS' THEOREM", "text": "CCM aims at discovering the causal direction between temporal variables in dynamical systems by checking whether the state-space dynamics of their time series can be recovered from one another. 
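As a purely illustrative aside (not part of the paper), the example system of Equations (1)-(2) can be simulated in a few lines; the specific choices of f, g and h below are assumptions made only for this sketch.

```python
# Minimal simulation of Equations (1)-(2): X evolves autonomously, Y is driven by X.
import numpy as np
from scipy.integrate import solve_ivp

def f(x):            # dynamics of X (autonomous)
    return np.sin(x)
def g(x):            # coupling: X causes Y because g is not constant
    return 0.5 * x
def h(y):            # internal dynamics of Y
    return -0.3 * y

def rhs(t, state):
    x, y = state
    return [f(x), g(x) + h(y)]

t_eval = np.linspace(0, 20, 2000)
sol = solve_ivp(rhs, (0, 20), y0=[1.0, 0.0], t_eval=t_eval)
X, Y = sol.y   # X[t] and Y[t]: intervening on X changes Y, but intervening on Y leaves X untouched
```

Under these choices g is not constant, so X causes Y in the structural sense defined above, while the reverse does not hold.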
As shown above, if $X$ causes $Y$, then $X$ is contained in the dynamical system of $Y$ and it should be possible to recover a representation of the dynamical system of $X$ from the dynamical system of $Y$.\nA common way to obtain a representation of a dynamical system from its time series relies on Takens' embedding theorem (Takens, 1981).\nLet $X[t] \in \mathbb{R}^{d_X}$ be generated by a chaotic dynamical system that has a strange attractor $\mathcal{M}$ with box-counting dimension $d_{\mathcal{M}}$, where we define an attractor as the manifold toward which the state of a chaotic dynamical system tends to evolve. The dynamics of this system are specified by a flow on $\mathcal{M}$, $\phi_{(\cdot)}(\cdot): \mathbb{R} \times \mathcal{M} \to \mathcal{M}$, where $\phi_\tau(M_t) = M_{t+\tau}$ and $M_t$ stands for the point on the manifold at time index $t$. This flow is encoded in the ODE of the system. The observed time series $X[t]$ is then obtained through an observation function $f_{obs}(\cdot)$: $X[t] = f_{obs}(M_t)$. Takens' theorem then states that a delay embedding with delay $\tau$ and embedding dimension $k$,\n$$\Phi_{k,\tau,\alpha}(M_t) = \big(\alpha(\phi_0(M_t)), \alpha(\phi_{-\tau}(M_t)), \ldots, \alpha(\phi_{-k\tau}(M_t))\big),$$\nis an embedding of the strange attractor $\mathcal{M}$ if $k > 2 d_{\mathcal{M}}$ and $\alpha: \mathbb{R}^{d_{\mathcal{M}}} \to \mathbb{R}$ is a twice-differentiable observation function. More specifically, the embedding map is a diffeomorphism between the original strange attractor manifold $\mathcal{M}$ and a shadow attractor manifold $\mathcal{M}'$ generated by the delay embeddings. Under these assumptions, one can then theoretically reconstruct the original time series from the delay embedding.\nThe simplest observation function $\alpha$ consists in simply taking one of the dimensions of the observations of the dynamical system. In this case, writing $X_i[t]$ for the $i$-th dimension of $X[t]$, Takens' theorem ensures that there is a diffeomorphism between the original attractor manifold of the full dynamical system and the shadow manifold $\mathcal{M}'$ that would be generated by $X'[t] = (X_i[t], X_i[t-\tau], \ldots, X_i[t-k\tau])$. To see how this theorem can be used to infer the causal direction, let us consider the manifold $\mathcal{M}_Z$ of the joint dynamical system resulting from the concatenation of $X[t]$ and $Y[t]$. We then generate two shadow manifolds $\mathcal{M}'_X$ and $\mathcal{M}'_Y$ from the delay embeddings $X'[t] = (X_i[t], X_i[t-\tau], \ldots, X_i[t-k\tau])$ and $Y'[t] = (Y_j[t], Y_j[t-\tau], \ldots, Y_j[t-k\tau])$. Now, if $X$ unidirectionally causes $Y$ (i.e., $Y$ does not cause $X$), it means that $X$ is part of an autonomous dynamical system and that $Y$ is part of a larger one, containing $X$. The attractor of $Y$ is then the same as the one of the joint dynamical system $Z$. By contrast, the attractor of $X$ is only a subset of it. From Takens' theorem, it is theoretically possible to recover the original $\mathcal{M}_Z$ from $\mathcal{M}'_Y$ and hence, by extension, to recover $\mathcal{M}'_X$ from $\mathcal{M}'_Y$. However, the contrary is not true and it is in general not possible to recover $\mathcal{M}'_Y$ from $\mathcal{M}'_X$.\nThe CCM algorithm uses this property to infer causal dependency. It embeds both dynamical systems $X$ and $Y$ and uses k-nearest neighbors to predict points on $\mathcal{M}'_X$ from $\mathcal{M}'_Y$ and inversely. The result is then the correlation of the predictions with the true values. We write $Ccm(X,Y)$ for the Pearson correlation for the task of reconstructing $\mathcal{M}'_X$ from $\mathcal{M}'_Y$,\n$$Ccm(X,Y) = \mathrm{Corr}(\mathcal{M}'_X, \hat{\mathcal{M}}'_X),$$\nwhere $\hat{\mathcal{M}}'_X$ stands for the prediction of $\mathcal{M}'_X$ obtained from $\mathcal{M}'_Y$. 
Importantly, this measure is nonsymmetric, as a non-injective map between $\mathcal{M}'_X$ and $\mathcal{M}'_Y$ would lead to an accurate reconstruction being possible in one direction only.\nTo infer that there is a causal link between the predictor dynamical system and the predicted one, this correlation should be high and, importantly, increase with the length of the observed time series, as the observed manifolds become denser.\nThe potential results are then interpreted in the following way: (1) $X$ causes $Y$ if one can reconstruct with high accuracy $\mathcal{M}'_X$ from $\mathcal{M}'_Y$; (2) $X$ and $Y$ are not causally related (but not necessarily statistically independent) if neither $\mathcal{M}'_X$ nor $\mathcal{M}'_Y$ can be reconstructed from the other; (3) $X$ and $Y$ are in a circular causal relation if both $\mathcal{M}'_Y$ and $\mathcal{M}'_X$ can be reconstructed from the other. In the extreme case of strong coupling, the two systems are said to be in synchrony, and it becomes hard to distinguish between unidirectional and bidirectional coupling (Ye et al., 2015)." }, { "heading": "3.2 NEURAL ODES", "text": "Many continuous-time deterministic dynamical systems are usefully described as ODEs. But in general, not all dimensions of the dynamical system will be observed, so that the system is better described as an ODE on a continuous latent process $H(t)$, conditioned on which the observations $X[t]$ are generated. For instance, when observing only one dimension of a 2-dimensional dynamical system, we cannot find a flow $\phi_t(X)$ on that single observed variable, but we can find one on the latent process $H(t)$. We then have the following description of the dynamics:\n$$X[t] = g(H[t]) \quad \text{with} \quad \frac{dH(t)}{dt} = f_\theta(H(t), t), \qquad (3)$$\nwhere $\theta$ represents the parameters of the ODE, $f_\theta(\cdot)$ is a uniformly Lipschitz continuous function and $g(\cdot)$ is a continuous function. Learning the dynamics of the system then consists in learning those parameters $\theta$ from a finite set of (potentially noisy) observations of the process $X$. Neural ODEs (Chen et al., 2018) parametrize this function by a neural network. Learning the weights of this network can be done using the adjoint method or by simply backpropagating through the numerical integrator. Note that one usually allows $X[t]$ to be stochastic (e.g., observation noise). In that case, the mean of $X[t]$ (rather than $X[t]$ itself) follows Equation 3." }, { "heading": "3.3 CAUSAL INFERENCE WITH LATENT CCM", "text": "A key step in the CCM methodology is to compute the delay embeddings of both time series, $\Phi(X[t])$ and $\Phi(Y[t])$. However, when the data is only sporadically observed at irregular intervals, the probability of observing the delayed samples $X_i[t], X_i[t-\tau], \ldots, X_i[t-k\tau]$ is vanishing for any $t$. $X'[t]$ and $Y'[t]$ are then never fully observed (in fact, only one dimension is observed) and nearest neighbor prediction cannot be performed. What is more, short time series usually do not allow computing a delay embedding of sufficient dimension ($k$) and lag ($\tau$) (Clark et al., 2015).\nInstead of computing delay embeddings, we learn the dynamics of the process with a continuous-time hidden process parametrized by a Neural ODE (as in Eq. 3) and use this hidden representation as a complete representation of the state-space, thereby eliminating the need for the delay embeddings that limited the applicability of CCM to long, regularly sampled time series (a sketch of the resulting cross-mapping reconstruction score is given below). 
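As a rough illustration of the cross-mapping score $Ccm(X,Y)$ defined above, applied either to delay embeddings (classical CCM) or to the learnt latent states (Latent-CCM), the sketch below reconstructs one point cloud from nearest neighbours in the other and reports the Pearson correlation. It is a simplified reading of the procedure, not the authors' implementation; the neighbour count and the simplex-style weighting are assumptions.

```python
# Cross-mapping reconstruction score: predict points of emb_x from neighbours in emb_y
# and report the Pearson correlation between true and reconstructed coordinates.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cross_map_score(emb_y, emb_x, n_neighbors=5):
    """Score for 'X causes Y': reconstruct emb_x (shape [T, d_x]) from emb_y ([T, d_y])."""
    nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(emb_y)
    dist, idx = nbrs.kneighbors(emb_y)
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop each query point itself
    w = np.exp(-dist / (dist[:, :1] + 1e-12))    # closer neighbours in emb_y count more
    w /= w.sum(axis=1, keepdims=True)
    emb_x_hat = (w[..., None] * emb_x[idx]).sum(axis=1)   # weighted neighbour average
    corr = [np.corrcoef(emb_x[:, j], emb_x_hat[:, j])[0, 1] for j in range(emb_x.shape[1])]
    return float(np.mean(corr))
```

Convergence would then be checked by comparing this score computed on a small subsample of points against the score computed on all available points, as done with the score $Sc_{X \to Y}$ used later in the experiments.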
A graphical representation of the method is shown on Figure 1.\nTo infer causality between temporal variables from their time series X[t] and Y [t], the first step is to train two GRU-ODE-Bayes models (De Brouwer et al., 2019), a filtering technique that extends Neural-ODEs. Being a filtering approach, GRU-ODE-Bayes ensures no leakage of future information backward in time, an important requirement for our notion of causality. The continuity of the latent process is also important as it provides more coverage of the attractor of the dynamical system. Indeed, a constant latent process in between observations (such as obtained with a classical recurrent neural network such as GRU) would lead to fewer unique latent process observations.\nThe same model is used for all segments of each time series and is trained to minimize forecasting error. We learn the observation function g, the ODE f✓ and the continuous-time latent process H(t). We write the resulting space of latent vectors from time series X on all segments as HX . Causality is then inferred by checking the existence of a continuous map between HX and HY . Analogously to CCM, we consider X causes Y if there exists a continuous map between HY and\nHX . This is consistent because HX , just as the delay embedding X , is a embedding of the strange attractor of the dynamical system as stated in Lemma 3.1 for which we give the proof in the Appendix E.\nLemma 3.1. For a sporadic time series X[t] 2 X satisfying the following dynamics,\nX[t] = g(H(t)) with dH(t)\ndt = f✓(H(t), t)\nwith g(·) and f✓ continuous functions. If there exists one observation function ↵H 2 C2 : X ! R along with a valid couple (k, ⌧ ) (in the Takens’ embedding theorem sense) such that the map k,⌧\ng( H),↵H (H(t)) is injective, the latent process H(t) is an embedding of the strange attractor of\nthe full dynamical system containing X .\nThe requirement of k,⌧ g( H),↵H being injective is not enforced in our architecture. However, with sufficient regularization of the network, it is satisfied in practice as shown by our results in Section 4.5.\nThe same reasoning as in CCM then applies to the latent process and causal direction can be inferred. The existence of a continuous map between the latent spaces of both time series is quantitatively assessed with the correlation between the true latents of the driven time series and the reconstructions obtained with a k-nearest neighbors model on the latents of the driven time series. For instance, for a direction X ! Y , we report the correlation between predictions of HX obtained from HY and the actual ones (HX ). A strong positive correlation suggests an accurate reconstruction and thus a causal link in the studied direction between the variables (e.g., X ! Y ). By contrast, a weak correlation suggests no causal link in that direction." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the performance of our approach on data sets from physical and neurophysiology models, namely a double pendulum and neurons activity data. We show that our method detects the right causal topology in all cases, outperforming multi-spatial CCM, as well as baselines designed to address the sporadicity of the time series. The code is available at https: //github.com/edebrouwer/latentCCM." }, { "heading": "4.1 BASELINE METHODS", "text": "To the best of our knowledge, this is the first time CCM is applied to short sporadic time series. Indeed, because of missing variables, many standard approaches are simply not applicable. 
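Returning to the latent model of Section 3.3 above (two GRU-ODE-Bayes filters trained jointly over all segments), the sketch below is a heavily simplified stand-in for such a continuous-time latent filter: explicit Euler steps of a learned ODE between observations and a GRU update whenever a partially observed, masked sample arrives. The hidden size, step size and the concatenated value/mask input are illustrative assumptions, not the paper's architecture.

```python
# Simplified continuous-time latent filter: ODE on the hidden state between observations,
# GRU update at sporadic, masked observation times.
import torch
import torch.nn as nn

class LatentODEFilter(nn.Module):
    def __init__(self, obs_dim, hidden_dim=32, dt=0.05):
        super().__init__()
        self.ode_func = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
                                      nn.Linear(hidden_dim, hidden_dim))
        self.jump = nn.GRUCell(2 * obs_dim, hidden_dim)   # input: masked observation + mask
        self.readout = nn.Linear(hidden_dim, obs_dim)     # g(H[t]) in Eq. (3), used for forecasting
        self.dt = dt

    def forward(self, times, values, masks, t_max):
        """times: sorted 1D tensor; values, masks: [len(times), obs_dim], mask=1 if observed."""
        h = torch.zeros(1, self.jump.hidden_size)
        t, i, latents = 0.0, 0, []
        while t < t_max:
            h = h + self.dt * self.ode_func(h)                 # Euler step of the latent ODE
            t += self.dt
            while i < len(times) and times[i] <= t:            # update at observation times
                inp = torch.cat([values[i] * masks[i], masks[i]]).unsqueeze(0)
                h = self.jump(inp, h)
                i += 1
            latents.append(h.squeeze(0))
        return torch.stack(latents)                            # H(t) on a regular grid
```

Training would minimise the error of self.readout applied to these latents against future observations, and the latent trajectories of X and Y would then be compared with the cross-mapping score sketched earlier.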
The main baseline consists in multi-spatial CCM (Clark et al., 2015) applied to regularly sampled data with a sampling rate similar to the one of the sporadic data. We also compare our approach to variants where multi-spatial CCM is applied to an interpolation of the sporadic time series using (1) linear interpolation and (2) univariate and multivariate Gaussian Processes (GP and MVGP). For the Gaussian Process, we chose a mixture of RBF and identity kernel and learn the parameters from the data. To model multivariate GPs (MVGP), we used the combination of a Matern and a periodic Matern kernel for the time dimension and used co-regionalization (Bonilla et al., 2008) with a full-rank interaction matrix. We then use the mean of the posterior process as the reconstruction subsequentially fed to the classical CCM method. Implementation was done with GPflow (Matthews et al., 2017).\nWe also compared our approach to non-CCM causal discovery methods such as PCMCI (Runge et al., 2019) and VARLinGAM (Hyvärinen et al., 2010). PCMCI uses conditional independence testing between time series at differrent lags to infer causal dependencies. VARLinGAM learns a graphical model of the longitudinal variables and their time lags, using the LinGAM method (Shimizu et al., 2006). These methods do not allow for short sporadic time series as input but a comparison with a less challenging non-sporadic variant of our datasets is presented in Appendix F." }, { "heading": "4.2 PERFORMANCE METRICS", "text": "Our method assesses causality by detecting convergent reconstruction accuracy between the latent processes of different time series. To account for both aspects in a single score, we use the difference between the correlation of the reconstruction and the target latent vector using the whole data (Ccmfull) and the correlation using only 100 sample points (Ccm0), as shown on Figure 2 and suggested in Clark et al. (2015). The score for the causal coupling from X[t] to Y [t] is then defined as\nScX!Y = Ccmfull(X,Y ) Ccm0(X,Y ).\nwith a higher score implying more confidence in a causal relationship. Additionally, to quantify the certainty about the presence of a causal edge in the data generation graph, we compare the obtained scores with the ones that would be obtained with CCM on fully observed but independent time series. We compute the Mann-Whitney U -statistics (Ma et al., 2014) and provide the corresponding p-value.\nNevertheless, in practice, one might not have access to the score of independent time series, making it difficult to assess from the score only if a causal relationship is present. To address this issue, we visualize the results graphically as shown in Figure 2. Causal directions should then stand out clearly and have the characteristic convergent pattern (Sugihara et al., 2012)." }, { "heading": "4.3 DOUBLE PENDULUM", "text": "Description. The double pendulum is a simple physical system that is chaotic and exhibits rich dynamical behavior. It consists of two point masses m1 and m2 connected to a pivot point and to each other by weightless rods of length l1 and l2, as shown on Figure 4 in Appendix A. The trajectories of the double pendulum are described by the time series of the angles of the rods with respect to the vertical (✓1 and ✓2), as well as the angular momenta p1 and p2 conjugate to these angles. 
Each trajectory is then a collection of 4-dimensional vector observations.\nTo introduce causal dependencies from pendulum $X$ to $Y$, we include a non-physical asymmetrical coupling term in the update of the momentum conjugate to the first angle:\n$$\dot{p}^Y_1 = -\frac{\partial H^Y}{\partial \theta^Y_1} - 2 \cdot c_{X,Y}\,(\theta^Y_1 - \theta^X_1),$$\nwhere $c_{X,Y}$ is a coupling parameter. The term, corresponding to a quadratic potential incorporated into the Hamiltonian of system $Y$, results in an attraction of system $Y$ toward system $X$. Depending on the values of $c_{X,Y}$ and $c_{Y,X}$, we have different causal relationships between $X$ and $Y$. Namely, (1) $X$ causes $Y$ iff $c_{X,Y} \neq 0$, (2) $Y$ causes $X$ iff $c_{Y,X} \neq 0$, and (3) $X$ is not causally related to $Y$ if $c_{X,Y} = c_{Y,X} = 0$.\nData generation. We consider two cases of generating models. The first one consists of two double pendulums ($X[t]$ and $Y[t]$) with high observation noise, with $Y$ causing $X$. In this case, we set $c_{X,Y} = 0$ and $c_{Y,X} = 0.2$. The second consists of 3 double pendulums ($X[t]$, $Y[t]$ and $Z[t]$), with one of them causing the other two ($c_{Z,X} = 0.5$, $c_{Z,Y} = 1$). We then infer the causal relations between those 3 variables in a pairwise fashion (i.e., we infer the causal direction between all pairs of variables in the system). Remarkably, $X$ and $Y$ are here correlated but not causally related. Graphical representations of both considered cases are presented in Figure 3. Parameters of the pendulums (lengths and masses) are presented in Appendix A. We generate 5 trajectories with different initial conditions ($\theta_1 \sim \mathcal{N}(-1, 0.05)$ and $\theta_2 \sim \mathcal{N}(0.5, 0.05)$). We simulate observation noise by adding random Gaussian noise $n$ to the samples, with $n \sim \mathcal{N}(\mu = 0, \sigma = 0.1)$ for the first case and $n \sim \mathcal{N}(\mu = 0, \sigma = 0.01)$ for the second. To account for the short length of time series usually encountered in the real world, we randomly split the trajectories into windows of 10 seconds. To simulate sporadicity, we sample observations uniformly at random with an average rate of 4 samples per second. Furthermore, for each of those samples, we apply an observation mask that keeps each individual dimension with probability 0.3. This whole procedure leads to a sporadic pattern as shown in Figure 5 of Appendix A. We used 80% of the available windows for training and the remaining 20% for hyperparameter tuning, with the MSE of the prediction on future samples used as the model selection criterion. More details on this procedure are given in Appendix G." }, { "heading": "4.4 NEURAL ACTIVITY DATA", "text": "We also evaluate our approach on neural activity data. We generate time series of the average membrane potential of two populations of leaky integrate-and-fire neurons with alpha-function shaped synaptic currents (iaf_psc_alpha) simulated by NEST-2.20.0 (Fardet et al., 2020). Each neuron population contains 100 units with sparse random excitatory synapses within the population. We consider two cases, one where population A unidirectionally excites population B and another where both populations fire independently. To account for the short and sporadic nature of real-world data, we generate 5,000 windows of 20 seconds from which we sample 1 observation every second on average. This leads to 20 samples being available on average per time window." }, { "heading": "4.5 RESULTS", "text": "Results over 5 repeats for the double pendulums and the neural activity data are presented in Table 1 and in Figure 2. 
Because this method uses different metrics for inferring the underlying causal graph, the results for PCMCI are presented in Appendix F, where we show that the method cannot reliably infer the generative causal dependencies in our data.\nDouble Pendulum. Our approach is the only one to recover the right causal direction from the sporadic data. The other baselines do not detect any significant correlation and thus no causal link between double pendulums. Despite having access to constant sampling data, multi-spatial CCM is also not able to detect the right data structure. We argue this is caused by the short length of time series window, and thus the low number and quality of delay embeddings that can be computed. In contrast, as our method shares the same model across all time windows, it represents more reliably the (hidden) state-space at any point in time. Importantly, the perfect reconstruction for Case 2 shows that we can distinguish confounding from correlation between time series. Indeed, when inferring causal directions between X and Y , variable Z is not used and thus hidden. Yet, our methods detects no causal relation between X and Y . Figure 2a graphically presents the results of our method for the second case, where it is obvious that the only two convergent mappings are the ones corresponding to the true directions (solid blue and green lines), providing a strong signal for the right underlying causal mechanism.\nNeural activity For the neuron activity data, we observe that our method delivers the largest effect size towards the true data generating model (Sc = 0.295). Baselines methods relying on imputation do not provide any clear signal for a causal coupling (score 10 times lower). Multi-spatial CCM with the regularly sampled original data provides similar signal than our approach but dampened. Interestingly, we observe a small but significant correlation in the wrong direction (A B) suggesting a small coupling in this direction. An inspection of Figure 2b, however, will convince the reader that\nthe main causal effect is indeed from A to B. This small correlation in the direction A B is also observed in the fully observed data as shown in Figure 6 in Appendix B." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this work, we propose a novel way to detect causal structure linking chaotic dynamical systems that are sporadically observed using reconstruction of underlying latent processes learnt with Neural-ODE models. We show that our method correctly detects the causal directions between temporal variables in a low and irregular sampling regime, when time series are observed in only short noncontiguous time windows and even in the case of hidden confounders, which are characteristics of real-world data. Despite the apparent limitation of our method to chaotic systems, it has been shown that CCM is broadly applicable in practice as many real dynamical systems are either chaotic or empirically allow Takens’-like embeddings. As our work builds upon CCM theoretically, we expect the range of application to be at least as large and leave the application to other real-world data for future work." }, { "heading": "ACKNOWLEDGEMENTS", "text": "YM is funded by the Research Council of KU Leuven through projects SymBioSys3 (C14/18/092); Federated cloud-based Artificial Intelligence-driven platform for liquid biopsy analyses (C3/20/100) and CELSA-HIDUCTION (CELSA/17/032). YM also acknowledges the FWO Elixir Belgium (I002819N) and Elixir Infrastructure (I002919N. 
This research received funding from the Flemish Government (AI Research Program). YM is affiliated to Leuven.AI - KU Leuven institute for AI, B-3000, Leuven, Belgium. YM received funding from VLAIO PM: Augmanting Therapeutic Effectiveness through Novel Analytics (HBC.2019.2528) and Industrial Project MaDeSMart (HBC.2018.2287) EU. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 956832. We also thank Nvidia for supporting this research by donating GPUs. EDB is funded by a SB grant from FWO." } ]
2021
LATENT CONVERGENT CROSS MAPPING
SP:474e2b9be8a3ec69a48c4ccd04a7e390ebb96347
[ "There have been multiple attempts to use self-attention in computer vision backbones for image classification and object detection. Most of these approaches either tried to combine convolution with global self-attention, or to replace it completely with a local self-attention operation. The proposed approach naturally combines the two by employing a query-key-value switching trick with axial positional attention." ]
Recently, a series of works in computer vision have shown promising results on various image and video understanding tasks using self-attention. However, due to the quadratic computational and memory complexities of self-attention, these works either apply attention only to low-resolution feature maps in later stages of a deep network or restrict the receptive field of attention in each layer to a small local region. To overcome these limitations, this work introduces a new global self-attention module, referred to as the GSA module, which is efficient enough to serve as the backbone component of a deep network. This module consists of two parallel layers: a content attention layer that attends to pixels based only on their content and a positional attention layer that attends to pixels based on their spatial locations. The output of this module is the sum of the outputs of the two layers. Based on the proposed GSA module, we introduce new standalone global attention-based deep networks that use GSA modules instead of convolutions to model pixel interactions. Due to the global extent of the proposed GSA module, a GSA network has the ability to model long-range pixel interactions throughout the network. Our experimental results show that GSA networks outperform the corresponding convolution-based networks significantly on the CIFAR-100 and ImageNet datasets while using less parameters and computations. The proposed GSA networks also outperform various existing attention-based networks on the ImageNet dataset.
[]
[ { "authors": [ "cent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Irwan Bello", "Barret Zoph", "Ashish Vaswani", "Jonathon Shlens", "Quoc V Le" ], "title": "Attention augmented convolutional networks", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Nicolas Carion", "Francisco Massa", "Gabriel Synnaeve", "Nicolas Usunier", "Alexander Kirillov", "Sergey Zagoruyko" ], "title": "End-to-end object detection with transformers", "venue": "NeurIPS,", "year": 2005 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Jonathan Ho", "Nal Kalchbrenner", "Dirk Weissenborn", "Tim Salimans" ], "title": "Axial attention in multidimensional transformers", "venue": "arXiv preprint arXiv:1912.12180,", "year": 2019 }, { "authors": [ "Han Hu", "Zheng Zhang", "Zhenda Xie", "Stephen Lin" ], "title": "Local relation networks for image recognition", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Zilong Huang", "Xinggang Wang", "Lichao Huang", "Chang Huang", "Yunchao Wei", "Wenyu Liu" ], "title": "CCNet: Criss-cross attention for semantic segmentation", "venue": null, "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical Report,", "year": 2009 }, { "authors": [ "Guanbin Li", "Xiang He", "Wei Zhang", "Huiyou Chang", "Le Dong", "Liang Lin" ], "title": "Non-locally enhanced encoder-decoder network for single image de-raining", "venue": "In ACMMM,", "year": 2018 }, { "authors": [ "Xingyu Liao", "Lingxiao He", "Zhouwang Yang" ], "title": "Video-based person re-identification via 3d convolutional networks and non-local attention", "venue": "arXiv preprint arXiv:1807.05073,", "year": 2018 }, { "authors": [ "Prajit Ramachandran", "Niki Parmar", "Ashish Vaswani", "Irwan Bello", "Anselm Levskaya", "Jonathon Shlens" ], "title": "Stand-alone self-attention in vision models", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. 
Berg", "Fei-Fei Li" ], "title": "ImageNet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": "In NAACL-HLT,", "year": 2018 }, { "authors": [ "Zhuoran Shen", "Mingyuan Zhang", "Shuai Yi", "Junjie Yan", "Haiyu Zhao" ], "title": "Efficient attention: Selfattention with linear complexities", "venue": "arXiv preprint arXiv:1812.01243,", "year": 2018 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "H. Wang", "Y. Zhu", "B. Green", "H. Adam", "A. Yuille", "L.-C. Chen" ], "title": "Axial-DeepLab: Stand-alone axial-attention for panoptic segmentation", "venue": "arXiv preprint arXiv:2003.07853,", "year": 2020 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Kaiyu Yue", "Ming Sun", "Yuchen Yuan", "Feng Zhou", "Errui Ding", "Fuxin Xu" ], "title": "Compact generalized non-local network", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Han Zhang", "Ian J. Goodfellow", "Dimitris N. Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Hengshuang Zhao", "Jiaya Jia", "Vladlen Koltun" ], "title": "Exploring self-attention for image recognition", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "TensorFlow Abadi" ], "title": "2019) provide direct support for the Einstein notation, through tf.einsum() and torch.einsum(), respectively. Therefore, there are direct TensorFlow/PyTorch transcriptions for all equations in this section. Assume the input X is a rank-3 tensor of shape h×", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-attention is a mechanism in neural networks that focuses on modeling long-range dependencies. Its advantage in terms of establishing global dependencies over other mechanisms, e.g., convolution and recurrence, has made it prevalent in modern deep learning. In computer vision, several recent works have augmented Convolutional Neural Networks (CNNs) with global self-attention modules and showed promising results for various image and video understanding tasks (Bello et al., 2019; Chen et al., 2018; Huang et al., 2019; Shen et al., 2018; Wang et al., 2018; Yue et al., 2018). For brevity, in the rest of the paper, we refer to self-attention simply as attention.\nThe main challenge in using the global attention mechanism for computer vision tasks is the large spatial dimensions of the input. An input image in a computer vision task typically contains tens of thousands of pixels, and the quadratic computational and memory complexities of the attention mechanism make global attention prohibitively expensive for such large inputs. Because of this, earlier works such as Bello et al. (2019); Wang et al. (2018) restricted the use of global attention mechanism to low-resolution feature maps in later stages of a deep network. Alternatively, other recent works such as Hu et al. (2019); Ramachandran et al. (2019); Zhao et al. (2020) restricted the receptive field of the attention operation to small local regions. While both these strategies are effective at capping the resource consumption of attention modules, they deprive the network of the ability to model long-range pixel interactions in its early and middle stages, preventing the attention mechanism from reaching its full potential.\nDifferent from the above works, Chen et al. (2018); Huang et al. (2019); Shen et al. (2018); Yue et al. (2018) made the global attention mechanism efficient by either removing the softmax normalization on the product of queries and keys and changing the order of matrix multiplications involved in the attention computation (Chen et al., 2018; Shen et al., 2018; Yue et al., 2018) or decomposing\none global attention layer into a sequence of multiple axial attention layers (Huang et al., 2019). However, all these works use content-only attention which does not take the spatial arrangement of pixels into account. Since images are spatially-structured inputs, an attention mechanism that ignores spatial information is not best-suited for image understanding tasks on its own. Hence, these works incorporate attention modules as auxiliary modules into standard CNNs.\nTo address the above issues, we introduce a new global self-attention module, referred to as the GSA module, that performs attention taking both the content and spatial positions of the pixels into account. This module consists of two parallel layers: a content attention layer and a positional attention layer, whose outputs are summed at the end. The content attention layer attends to all the pixels at once based only on their content. It uses an efficient global attention mechanism similar to Chen et al. (2018); Shen et al. (2018) whose computational and memory complexities are linear in the number of pixels. The positional attention layer computes the attention map for each pixel based on its own content and its relative spatial positions with respect to other pixels. 
Following the axial formulation (Ho et al., 2019; Huang et al., 2019), the positional attention layer is implemented as a column-only attention layer followed by a row-only attention layer. The computational and memory complexities of this axial positional attention layer are O(N √ N) in the number of pixels.\nThe proposed GSA module is efficient enough to act as the backbone component of a deep network. Based on this module, we introduce new standalone global attention-based deep networks, referred to as global self-attention networks. A GSA network uses GSA modules instead of convolutions to model pixel interactions. By virtue of the global extent of the GSA module, a GSA network has the ability to model long-range pixel interactions throughout the network. Recently, Wang et al. (2020) also introduced standalone global attention-based deep networks that use axial attention mechanism for both content and positional attentions. Different from Wang et al. (2020), the proposed GSA module uses a non-axial global content attention mechanism that attends to the entire image at once rather than just a row or column. Our experimental results show that GSA-ResNet, a GSA network that adopts ResNet (He et al., 2016) structure, outperforms the original convolution-based ResNet and various recent global or local attention-based ResNets on the widely-used ImageNet dataset.\nMAJOR CONTRIBUTIONS\n• GSA module: We introduce a new global attention module that is efficient enough to act as the backbone component of a deep network. Different from Wang et al. (2018); Yue et al. (2018); Chen et al. (2018); Shen et al. (2018); Huang et al. (2019), the proposed module attends to pixels based on both content and spatial positions. Different from Zhao et al. (2020); Hu et al. (2019); Ramachandran et al. (2019), the proposed module attends to the entire input rather than a small local neighborhood. Different from Wang et al. (2020), the proposed GSA module uses a non-axial global content attention mechanism that attends to the entire image at once rather than just a row or column.\n• GSA network: We introduce new standalone global attention-based networks that use GSA modules instead of spatial convolutions to model pixel interactions. This is one of the first works (Wang et al. (2020) being the only other work) to explore standalone global attention-based networks for image understanding tasks. Existing global attention-based works insert their attention modules into CNNs as auxiliary blocks at later stages of the network, and existing standalone attention-based networks use local attention modules.\n• Experiments: We show that the proposed GSA networks outperform the corresponding CNNs significantly on the CIFAR-100 and ImageNet datasets while using less parameters and computations. We also show that the GSA networks outperform various existing attention-based networks including the latest standalone global attention-based network of Wang et al. (2020) on the ImageNet dataset." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 AUXILIARY VISUAL ATTENTION", "text": "Wang et al. (2018) proposed the non-local block, which is the first adaptation of the dot-product attention mechanism for long-range dependency modeling in computer vision. They empirically verified its effectiveness on video classification and object detection. 
Follow-up works extended it to\ndifferent tasks such as generative adversarial image modeling (Zhang et al., 2019; Brock et al., 2019), video person re-identification (Liao et al., 2018), image de-raining (Li et al., 2018) etc. Several recent works focused on mitigating the high computational cost of Wang et al. (2018). Chen et al. (2018); Shen et al. (2018) utilized the associative property of matrix multiplication to reduce the complexity from quadratic to linear. Huang et al. (2019) proposed to decompose global attention into row attention and column attention to save resources.\nRecently, a series of works (Sun et al., 2019; Carion et al., 2020) have used Transformers (Vaswani et al., 2017) for various computer vision applications. These works first use a deep CNN to extract semantic features, and then use a Transformer to model interactions among the high-level semantic features. For example, Carion et al. (2020) used a Transformer to model object-level interactions for object detection, and Sun et al. (2019) used a Transformer to model inter-frame dependencies for video representation learning.\nAll these methods use attention modules as auxiliary modules to enhance long-range dependency modeling of a CNN, and relegate most of the feature extraction work to the convolution operation. In contrast, a GSA network uses attention as the primitive operation instead of spatial convolution." }, { "heading": "2.2 BACKBONE VISUAL ATTENTION", "text": "Bello et al. (2019) were the first to test attention as a primitive operation for computer vision tasks. However, they used the costly non-local block (Wang et al., 2018) which prevented them from fully replacing convolutional layers. Ramachandran et al. (2019), Hu et al. (2019) and Zhao et al. (2020) solved this problem by limiting the receptive field of attention to a local neighborhood. In contrast to these works, the proposed GSA network uses global attention throughout the network and is still efficient. Recently, Wang et al. (2020) used axial decomposition to make global attention efficient. Different from them, the proposed GSA network uses a non-axial global content attention mechanism which is better than axial mechanism as later shown in the experiments." }, { "heading": "3 GLOBAL SELF-ATTENTION NETWORK", "text": "" }, { "heading": "3.1 GLOBAL SELF-ATTENTION MODULE", "text": "Let F i ∈ RWH×din and F o ∈ RWH×dout , respectively, denote the (spatially) flattened input and output feature maps of the proposed GSA module. Here, W,H represent the spatial dimensions, and din, dout represent the channel dimensions. Each pixel in the output feature map is generated by aggregating information from every pixel in the input feature map based on their content and spatial positions. Let K = [kij ] ∈ RWH×dk , Q = [qij ] ∈ RWH×dk , and V = [vij ] ∈ RWH×dout respectively denote the matrices of keys, queries, and values generated using three 1×1 convolutions on the input feature map F i. Here, dk denotes the number of channels used for keys and queries. Each row in these matrices corresponds to one input pixel. The proposed GSA module (see Fig. 1) consists of two parallel layers: a content attention layer and a positional attention layer." 
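As a concrete reference for the notation just introduced, the following minimal PyTorch sketch (ours, with assumed channel sizes, not the authors' implementation; the multi-head split described later is omitted) produces K, Q, and V from an input feature map with three 1×1 convolutions and flattens the spatial dimensions so that each row corresponds to one pixel:

```python
import torch
import torch.nn as nn

class KQVProjection(nn.Module):
    """Produce keys K, queries Q, and values V from an input feature map
    with three 1x1 convolutions (a sketch of the layer described above)."""

    def __init__(self, d_in, d_k, d_out):
        super().__init__()
        self.to_k = nn.Conv2d(d_in, d_k, kernel_size=1)
        self.to_q = nn.Conv2d(d_in, d_k, kernel_size=1)
        self.to_v = nn.Conv2d(d_in, d_out, kernel_size=1)

    def forward(self, x):                      # x: (B, d_in, H, W)
        B, _, H, W = x.shape
        # Flatten the spatial dimensions so each row corresponds to one pixel.
        k = self.to_k(x).reshape(B, -1, H * W).transpose(1, 2)   # (B, HW, d_k)
        q = self.to_q(x).reshape(B, -1, H * W).transpose(1, 2)   # (B, HW, d_k)
        v = self.to_v(x).reshape(B, -1, H * W).transpose(1, 2)   # (B, HW, d_out)
        return k, q, v

k, q, v = KQVProjection(d_in=64, d_k=32, d_out=64)(torch.randn(2, 64, 16, 16))
```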
}, { "heading": "3.1.1 CONTENT ATTENTION LAYER", "text": "This layer uses the keys, queries, and values to generate new features F c = [f cij ] ∈ RWH×dout using the following content-based global attention operation:\nF c = Q ( ρ ( K> ) V ) , (1)\nwhere K> denotes the matrix transpose of K, and ρ denotes the operation of applying softmax normalization for each row separately. This attention operation can be interpreted as first aggregating the pixel features in V into dk global context vectors using the weights in ρ ( K> ) , and then redistributing the global context vectors back to individual pixels using the weights in Q. The computational and memory complexities of this operation are O(N) in the number of pixels.\nThis attention operation is similar to the attention operation used in Chen et al. (2018); Shen et al. (2018) except that it does not use softmax normalization on queries. Normalizing the queries constrains the output features to be convex combinations of the global context vectors. As these constraints could restrict the expressive power of the attention mechanism, we remove the softmax\nnormalization on queries. This allows the output features to span the entire subspace of the dk global context vectors. When we experimented with softmax normalization on the queries, the top-1 accuracy on the ImageNet validation dataset decreased significantly (1%)." }, { "heading": "3.1.2 POSITIONAL ATTENTION LAYER", "text": "The content attention layer does not take the spatial positions of pixels into account, and hence, is equivariant to pixel shuffling. So, on its own, it is not best-suited for tasks that deal with spatiallystructured data such as images. Inspired by Bello et al. (2019); Ramachandran et al. (2019); Shaw et al. (2018), we address this issue by using a positional attention layer that computes the attention map for a pixel based on its own content and its relative spatial positions with respect to its neighbors. For each pixel, our positional attention layer attends to its L × L spatial neighbors. Inspired by the axial formulation (Ho et al., 2019; Huang et al., 2019), we implement this attention layer as a column-only attention layer followed by a row-only attention layer. In a column-only attention layer, an output pixel only attends to the input pixels along its column, and in a row-only attention layer, an output pixel only attends to the input pixels along its row. Note that a column-only attention layer followed by a row-only attention layer effectively results in information propagation over the entire L× L neighborhood. Let ∆ = {−L−12 , .., 0, .., L−1 2 } be a set of L offsets, and R\nc = [rcδ] ∈ RL×dk denote the matrix of L learnable relative position embeddings corresponding to L spatial offsets δ ∈ ∆ along a column. Let V cab = [va+δ,b] ∈ RL×dout be the matrix consisting of the values at the L column neighbors of pixel (a, b). Let f cab denote the output of the column-only positional attention layer at pixel (a, b). Then, our column-only positional attention mechanism, which uses the relative position embeddings Rc as keys, can be described using\nf cab = ( qabR c>)V cab, (2) where qab is the query at pixel (a, b). Since each pixel only attends to L column neighbors, the computational and memory complexities of this column-only positional attention layer are O(NL), where N is the number of pixels. 
Similarly, a row-only positional attention layer with O(NL) computational and memory complexities can be defined using L learnable relative position embeddings Rr = [rrδ ] ∈ RL×dk corresponding to the L row neighbors. In the case of global axial attention, the neighborhood spans the entire column or row resulting inO(N √ N) computational and memory complexities.\nThe final output feature map of the GSA module is the sum of the outputs of the content and positional attention layers." }, { "heading": "3.2 GSA NETWORKS", "text": "A GSA network is a deep network that uses GSA modules instead of spatial convolutions to model pixel interactions. Table 1 shows how a GSA network differs from various recent attention-based\nnetworks. All existing works except Wang et al. (2020) either insert their attention modules into CNNs as auxiliary blocks (Bello et al., 2019; Chen et al., 2018; Huang et al., 2019; Shen et al., 2018; Wang et al., 2018; Yue et al., 2018; Carion et al., 2020; Sun et al., 2019) at later stages of the network or constrain their attention mechanism to small local regions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020). In contrast, a GSA network replaces spatial convolution layers in a deep network with a global attention module and has the ability to model long-range pixel interactions throughout the network. While Wang et al. (2020) also introduces a global attention module as an alternative for spatial convolution, their module uses axial attention mechanism for both content and positional attention. In contrast, the proposed GSA module uses a non-axial global content attention mechanism that attends to the entire image at once rather than just a row or column." }, { "heading": "3.3 JUSTIFICATIONS", "text": "The proposed GSA module uses a direct global attention operation for content attention and an axial attention mechanism for positional attention.\nWhy not axial content attention? Axial attention is a mechanism that approximates direct global attention with column-only attention followed by row-only attention. In the proposed global content attention layer, two pixels (i, j) and (p, q) interact directly based only on their content. In contrast, in a column-followed-by-row axial content attention layer, pixels (i, j) and (p, q) would interact through pixel (p, j), and hence, their interaction would be undesirably controlled by the content at (p, j). Therefore, the proposed direct global attention is better than axial mechanism for content attention. This is also verified by the experimental results in Table 2 which show that the proposed GSA module that uses direct global content attention is significantly better than axial attention.\nWhy not direct global positional attention? It is important to attend to pixels based on relative positions (instead of absolute positions) to maintain translation equivariance. In the case of content attention, each pixel has a unique key, and hence, we can multiply keys and values first to make the attention mechanism efficient. This is not possible in the case of positional attention since the key at a pixel varies based on its relative position with respect to the query pixel. Hence, we use axial mechanism to make positional attention efficient. While axial attention is not good for content attention (as explained above), it is suitable for positional attention. 
The relative position between pixels (i, j) and (p, q) is strongly correlated to the relative positions between pixels (i, j) and (p, j), and between pixels (p, q) and (p, j). So, routing position-based interaction between (i, j) and (p, q) through (p, j) works fine." }, { "heading": "4 EXPERIMENTS", "text": "Model Unless specified otherwise, we use GSA-ResNet-50, a network obtained by replacing all 3 × 3 convolution layers in ResNet-50 (He et al., 2016) with the proposed GSA module. We use an input size of 224 × 224, and for reducing the spatial dimensions, we use 2 × 2 average pooling layers (with stride 2) immediately after the first GSA module in the second, third and fourth residual groups. The number of channels for K,Q,V in each GSA module are set to be the same as the corresponding input features. We use a multi-head attention mechanism (Ramachandran et al., 2019; Vaswani et al., 2017) with 8 heads in each GSA module. The relative position embeddings are shared across all heads within a module, but not across modules. All 1× 1 convolutions and GSA modules are followed by batch normalization (Ioffe & Szegedy, 2015).\nTraining and evaluation All models are trained and evaluated on the training and validation sets of the ImageNet dataset (Russakovsky et al., 2015), respectively. They are trained from scratch for 90 epochs using stochastic gradient descent with momentum of 0.9, cosine learning rate schedule with base learning rate of 0.1, weight decay of 10−4, and mini-batch size of 2048. We use standard data augmentations such as random cropping and horizontal flipping. Following recent attentionbased works (Ramachandran et al., 2019; Zhao et al., 2020; Wang et al., 2020), we also use label smoothing regularization with coefficient 0.1. For evaluation, we use a single 224 × 224 center crop. While computing FLOPs, multiplications and additions are counted separately. For reporting runtime, we measure inference time for a single image on a TPUv3 accelerator." }, { "heading": "4.1 COMPARISON WITH THE CONVOLUTION OPERATION", "text": "Figure 2 compares ResNet-{38,50,101} structure-based CNNs and GSA networks. The GSA networks outperform CNNs significantly while using less parameters, computations and runtime. These results clearly shows the superiority of the proposed global attention module over the widely-used convolution operation. With increasing popularity of attention-based models, we hope that hardware accelerators will be further optimized for attention-based operations and GSA networks will become much more faster than CNNs in the near future." }, { "heading": "4.2 COMPARISON WITH AXIAL ATTENTION", "text": "The GSA module uses a global content attention mechanism that attends to the entire image at once. To validate the superiority of this attention mechanism over axial attention, in Table 2, we compare the proposed GSA module with a global attention module that attends based on both content and positions similar to Ramachandran et al. (2019) but in an axial fashion. The GSA module clearly outperforms the axial alternative. Also, the performance of our axial positional attention alone is comparable to the axial attention that uses both content and positions suggesting that axial mechanism is not able to take advantage of content-only interactions (see Section 3.3 for justification)." }, { "heading": "4.3 COMPARISON WITH EXISTING ATTENTION-BASED APPROACHES", "text": "Table 3 compares GSA networks with recent attention-based networks. 
The GSA networks achieve better performance than existing global and local attention-based networks while using similar or less number of parameters and FLOPs, except when compared to Zhao et al. (2020); Wang et al. (2020) which use slightly fewer FLOPs. Compared to local attention-based works (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020), the proposed GSA network takes advantage of global attention throughout the network and produces better results. Compared to Shen et al. (2018); Yue et al. (2018); Bello et al. (2019); Chen et al. (2018) which insert a few attention modules as auxiliary blocks into a CNN, the proposed GSA network uses global attention through out the network. Compared to Wang et al. (2020), the proposed GSA network uses a non-axial global content attention which is better than axial mechanism. To report runtime for other methods, we measure single image inference time on a TPUv3 accelerator using the code provided by the corresponding authors." }, { "heading": "4.4 ABLATION STUDIES", "text": "" }, { "heading": "4.4.1 IMPORTANCE OF INDIVIDUAL COMPONENTS", "text": "As described in Section 3, a GSA module consists of three components: a content attention layer, a column-only positional attention layer, and a row-only positional attention layer. Table 4 shows the results for different variants of the proposed GSA module obtained by removing one or more of its components. As expected, the module with all three components performs the best and the content-only attention performs poorly (7.7% drop in the top-1 accuracy) since it treats the entire image as a bag of pixels. This clearly shows the need for positional attention that is missing in many existing global attention-based works (Chen et al., 2018; Wang et al., 2018; Yue et al., 2018). Interestingly, for positional attention, column-only attention performs better than row-only attention\n(row3 vs row4 and row5 vs row6) suggesting that modeling pixel interactions along the vertical dimension is more important than the horizontal dimension for categories in the ImageNet dataset." }, { "heading": "4.4.2 WHERE IS GLOBAL ATTENTION MOST HELPFUL?", "text": "Our default GSA-ResNet-50 replaces spatial convolution with the proposed global attention module in all residual groups of ResNet-50. Table 5 shows how the performance varies when global attention replaces spatial convolution only in certain residual groups. Starting from the last residual group, as we move towards the earlier stages of the network, replacing convolution with attention improves the performance consistently until the second residual group. Replacing convolutions in the first residual group results in a slight drop in the performance. These results show that the global attention mechanism is helpful throughout the network except in the first few layers. This is an expected behavior since the first few layers of a deep network typically focus on learning low-level features. It is worth noting that by replacing convolutions with the proposed GSA modules in the second, third and fourth residual blocks of ResNet-50, we are able to achieve same top-1 accuracy as convolutionbased ResNet-101 while being significantly faster." }, { "heading": "4.5 RESULTS ON CIFAR-100 (KRIZHEVSKY & HINTON, 2009)", "text": "Similar to the ImageNet dataset, the proposed GSA networks outperform the corresponding CNNs significantly on the CIFAR-100 dataset while using less parameters, computations, and runtime. 
Improvements in the top-1 accuracy with ResNet-{38, 50, 101} structures are 2.5%, 2.7% and 1.6%, respectively. Please refer to Fig. 3 and Table 6 in the Appendix for further details." }, { "heading": "5 CONCLUSIONS", "text": "In this work, we introduced a new global self-attention module that takes both the content and spatial locations of the pixels into account. This module consists of parallel content and positional attention branches, whose outputs are summed at the end. While the content branch attends to all the pixels jointly using an efficient global attention mechanism, the positional attention branch follows axial formulation and performs column-only attention followed by row-only attention. Overall, the proposed GSA module is efficient enough to be the backbone component of a deep network. Based on the proposed GSA module, we introduced GSA networks that use GSA modules instead of spatial convolutions. Due to the global extent of the proposed GSA module, these networks have the ability to model long-range pixel interactions throughout the network. We conducted experiments on the CIFAR-100 and ImageNet datasets, and showed that GSA networks clearly outperform their convolution-based counterparts while using less parameters and computations. We also showed that GSA networks outperform various recent local and global attention-based networks. In the near future, we plan to extend this work to other computer vision tasks." }, { "heading": "A CIFAR-100 EXPERIMENTS", "text": "All the models are trained and evaluated on the training and test splits of CIFAR-100, respectively. They are trained for 10K steps starting from ImageNet pretrained weights using stochastic gradient descent with momentum of 0.9, weight decay of 10−4, and mini-batch size of 128. We use an initial learning rate of 5× 10−3 and reduce it by a factor of 10 after every 3K steps. For both training and evaluation, we use 224× 224 input images. Fig. 3 compares ResNet-{38,50,101} structure-based CNNs and GSA networks on the CIFAR-100 dataset. Similar to ImageNet results, GSA networks outperform CNNs significantly on the CIFAR100 dataset while using less parameters, computations, and runtime. Table 6 reports all the numbers corresponding to the plots in Fig. 2 and Fig. 3." }, { "heading": "B MATHEMATICAL IMPLEMENTATION DETAILS", "text": "This section presents mathematical implementation details of the Global Self-Attention (GSA) module to supplement the high-level description in Section 3 of the paper.\nFor conciseness and better resemblance of the actual implementation, this section uses the Einstein notation1. Note that both TensorFlow Abadi et al. (2015) and PyTorch Paszke et al. (2019) provide direct support for the Einstein notation, through tf.einsum() and torch.einsum(), respectively. Therefore, there are direct TensorFlow/PyTorch transcriptions for all equations in this section.\nAssume the input X is a rank-3 tensor of shape h× w × d, for h the height, w the width, and d the number of channels.\nKQV layer The first step is to compute the keys K, queries Q, and values V from X using 3 separate 1 × 1 (i.e. point-wise) convolution layers. Then, the module splits K,Q, V each into n equal-size slices along the channel dimension for the n attention heads. 
An efficient implementation fuses the two steps into\nKxynk = W (K) dnkXxyd,\nQxynk = W (Q) dnkXxyd,\nVxynv = W (V ) dnvXxyd,\n(3)\nwhere W (K),W (Q),W (V ) are the corresponding weights, x, y are the spatial dimensions, n is the head dimension, and d, k, v are the channels dimensions for the input, the keys and queries, and the values, respectively.\nContent attention As Section 3 of the paper describes, within each head the module uses matrix multiplication to implement content attention. The actual implementation parallelizes the process across all heads by computing\nK̂ = σ(K),\nCnkv = K̂xynkVxynv,\nY Cxynv = QxynkCnkv,\n(4)\nwhere σ represents softmax along the spatial dimensions (x, y).\nPositional attention The positional attention layer consists of a column-only attention sub-layer, a batch normalization layer, and a row-only attention sub-layer. Since the column-only and rowonly sub-layers are symmetric, this section only presents the implementation for the column-only sub-layer.\n1The Einstein notation is a compact convention for linear algebra operations Albert Einstein developed. https://en.wikipedia.org/wiki/Einstein_notation provides a reference. https: //ajcr.net/Basic-guide-to-einsum/ gives an intuitive tutorial.\nThe layer maintains a relative position embedding matrix R ∈ R(2h−1)×k, for h the image height and k the number of channels. Each of the 2h − 1 rows corresponds to a possible vertical relative shift, from −(h − 1) to h − 1. The first step is to re-index this matrix from using relative shifts to absolute shifts. To achieve this goal, the module creates a re-indexing tensor I where\nIx,i,r = 1, if i− x = r & |i− x| ≤ L, Ix,i,r = 0, otherwise,\n(5)\nwhereL is the maximum relative shift to attend to. The default version of GSA setsL = max{h,w} so that the positional attention is global.\nThen, the module computes the position embedding tensor whose indices are the absolute shifts as\nPxik = IxirRrk. (6)\nNow, the output of the column-only attention sub-layer is\nSxyin = QxynkPxik, Y Hxynv = SxyinViynv. (7)\nAfter obtaining Y H , the module applies batch normalization to it and uses it as the input to the row-only sub-layer to generate YW as the final output of the positional attention layer.\nFinal fusion After computing the outputs of the content and positional attention layers, the final output is simply\nY = Y C + YW . (8)\nComparison to competing approaches The implementation of the GSA module only consists of 8 Einstein-notation equations and 5 other equations, each of which corresponds to one line of code in TensorFlow or PyTorch. The implementation is substantially simpler in comparison to competing approaches Ramachandran et al. (2019); Zhao et al. (2020) using local attention which requires custom kernels." } ]
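As the appendix notes, each of the equations above maps to a single einsum call. The following sketch is our transcription with assumed shapes, not the authors' released code; the batch dimension and batch normalization are omitted, and the symmetric row-only sub-layer is collapsed for brevity. It strings Eqs. (3)–(8) together for a single image:

```python
import torch

def gsa_module_sketch(X, Wk, Wq, Wv, R_col):
    """Einsum sketch of the GSA module equations (Appendix B).
    Assumed shapes: X (h, w, d); Wk, Wq (d, n, k); Wv (d, n, v);
    R_col (2h-1, k) column relative-position embeddings."""
    h, w, d = X.shape
    # Eq. (3): keys, queries, values split across n heads.
    K = torch.einsum('dnk,xyd->xynk', Wk, X)
    Q = torch.einsum('dnk,xyd->xynk', Wq, X)
    V = torch.einsum('dnv,xyd->xynv', Wv, X)

    # Eq. (4): content attention -- softmax over the spatial dimensions of the
    # keys, then two matrix products (linear in the number of pixels).
    K_hat = K.reshape(h * w, *K.shape[2:]).softmax(dim=0).reshape(K.shape)
    C = torch.einsum('xynk,xynv->nkv', K_hat, V)
    Y_content = torch.einsum('xynk,nkv->xynv', Q, C)

    # Eqs. (5)-(6): re-index relative vertical shifts to absolute row positions.
    I = torch.zeros(h, h, 2 * h - 1)
    for x in range(h):
        for i in range(h):
            I[x, i, i - x + h - 1] = 1.0          # shift (i - x) <-> index r
    P = torch.einsum('xir,rk->xik', I, R_col)

    # Eq. (7): column-only positional attention.
    S = torch.einsum('xynk,xik->xyin', Q, P)
    Y_col = torch.einsum('xyin,iynv->xynv', S, V)

    # In the full module, Y_col is batch-normalized and fed to a symmetric
    # row-only sub-layer to produce the positional output; we reuse Y_col here.
    Y_pos = Y_col

    # Eq. (8): final fusion, then merge the heads.
    return (Y_content + Y_pos).reshape(h, w, -1)

out = gsa_module_sketch(torch.randn(8, 8, 32),
                        torch.randn(32, 4, 8), torch.randn(32, 4, 8),
                        torch.randn(32, 4, 16), torch.randn(15, 8))
```

As in the appendix, every step is a one-line einsum, which is the main reason the module needs no custom kernels.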
2,020
null
SP:bf70c9e16933774746d621a5b8475843e723ac24
[ "In the context of deep learning, back-propagation is stochastic in the sample level to attain bette efficiency than full-dataset gradient descent. The authors asked that, can we further randomize the gradient compute within each single minibatch / sample with the goal to achieve strong model accuracy. In modern deep learning, training memory consumption is high due to activation caching. Thus this randomized approach can help attain strong model accuracy under memory constraints." ]
The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives. The AD techniques underlying these tools were designed to compute exact gradients to numerical precision, but modern machine learning models are almost always trained with stochastic gradient descent. Why spend computation and memory on exact (minibatch) gradients only to use them for stochastic optimization? We develop a general framework and approach for randomized automatic differentiation (RAD), which can allow unbiased gradient estimates to be computed with reduced memory in return for variance. We examine limitations of the general approach, and argue that we must leverage problem specific structure to realize benefits. We develop RAD techniques for a variety of simple neural network architectures, and show that for a fixed memory budget, RAD converges in fewer iterations than using a small batch size for feedforward networks, and in a similar number for recurrent networks. We also show that RAD can be applied to scientific computing, and use it to develop a low-memory stochastic gradient method for optimizing the control parameters of a linear reaction-diffusion PDE representing a fission reactor.
[ { "affiliations": [], "name": "Deniz Oktay" }, { "affiliations": [], "name": "Nick McGreivy" }, { "affiliations": [], "name": "Joshua Aduol" }, { "affiliations": [], "name": "Alex Beatson" }, { "affiliations": [], "name": "Ryan P. Adams" } ]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th USENIX Symposium on Operating Systems Design and Implementation", "year": 2016 }, { "authors": [ "Hany S Abdel-Khalik", "Paul D Hovland", "Andrew Lyons", "Tracy E Stover", "Jean Utke" ], "title": "A low rank approach to automatic differentiation", "venue": "In Advances in Automatic Differentiation,", "year": 2008 }, { "authors": [ "Menachem Adelman", "Mark Silberstein" ], "title": "Faster neural network training with approximate tensor operations", "venue": "arXiv preprint arXiv:1805.08079,", "year": 2018 }, { "authors": [ "Friedrich L Bauer" ], "title": "Computational graphs and rounding error", "venue": "SIAM Journal on Numerical Analysis,", "year": 1974 }, { "authors": [ "Atilim Gunes Baydin", "Barak A Pearlmutter", "Alexey Andreyevich Radul", "Jeffrey Mark Siskind" ], "title": "Automatic differentiation in machine learning: a survey", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Alex Beatson", "Ryan P Adams" ], "title": "Efficient optimization of loops and limits with randomized telescoping sums", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "James Bergstra", "Olivier Breuleux", "Frédéric Bastien", "Pascal Lamblin", "Razvan Pascanu", "Guillaume Desjardins", "Joseph Turian", "David Warde-Farley", "Yoshua Bengio" ], "title": "Theano: a CPU and GPU math expression compiler", "venue": "In Proceedings of the Python for Scientific Computing Conference (SciPy),", "year": 2010 }, { "authors": [ "Christian Bischof", "Alan Carle", "George Corliss", "Andreas Griewank", "Paul Hovland" ], "title": "ADIFOR– generating derivative codes from Fortran programs", "venue": "Scientific Programming,", "year": 1992 }, { "authors": [ "Ricky T.Q. 
Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Jens Behrmann", "David K Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tianqi Chen", "Bing Xu", "Chiyuan Zhang", "Carlos Guestrin" ], "title": "Training deep nets with sublinear memory cost", "venue": "arXiv preprint arXiv:1604.06174,", "year": 2016 }, { "authors": [ "Krzysztof M Choromanski", "Vikas Sindhwani" ], "title": "On blackbox backpropagation and Jacobian sensing", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Conal Elliott" ], "title": "The simple essence of automatic differentiation", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2018 }, { "authors": [ "Aidan N Gomez", "Mengye Ren", "Raquel Urtasun", "Roger B Grosse" ], "title": "The reversible residual network: Backpropagation without storing activations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A Griewank", "U Naumann" ], "title": "Accumulating Jacobians by vertex, edge, or face elimination", "venue": "cari", "year": 2002 }, { "authors": [ "Andreas Griewank", "Andrea Walther" ], "title": "Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation", "venue": "ACM Transactions on Mathematical Software (TOMS),", "year": 2000 }, { "authors": [ "Andreas Griewank", "Andrea Walther" ], "title": "Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, volume 105", "venue": null, "year": 2008 }, { "authors": [ "Laurent Hascoet", "Valérie Pascual" ], "title": "The Tapenade automatic differentiation tool: Principles, model, and specification", "venue": "ACM Transactions on Mathematical Software (TOMS),", "year": 2013 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Quoc V Le", "Navdeep Jaitly", "Geoffrey E Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint 
arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Yucen Luo", "Alex Beatson", "Mohammad Norouzi", "Jun Zhu", "David Duvenaud", "Ryan P Adams", "Ricky TQ Chen" ], "title": "Sumo: Unbiased estimation of log marginal probability for latent variable models", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Ryan G. McClarren" ], "title": "Computational Nuclear Engineering and Radiological Science Using Python: Chapter 18 - One-Group Diffusion Equation", "venue": null, "year": 2018 }, { "authors": [ "Uwe Naumann" ], "title": "Optimal accumulation of Jacobian matrices by elimination methods on the dual computational graph", "venue": "Mathematical Programming,", "year": 2004 }, { "authors": [ "Uwe Naumann" ], "title": "Optimal Jacobian accumulation is NP-complete", "venue": "Mathematical Programming,", "year": 2008 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A Bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xu Sun", "Xuancheng Ren", "Shuming Ma", "Houfeng Wang" ], "title": "meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Corentin Tallec", "Yann Ollivier" ], "title": "Unbiasing truncated backpropagation through time", "venue": "arXiv preprint arXiv:1705.08209,", "year": 2017 }, { "authors": [ "Bart van Merrienboer", "Dan Moldovan", "Alexander Wiltschko" ], "title": "Tangent: Automatic differentiation using source-code transformation for dynamically typed array programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Andrea Walther", "Andreas Griewank" ], "title": "Getting started with ADOL-C", "venue": "Combinatorial Scientific Computing,", "year": 2009 }, { "authors": [ "Bingzhen Wei", "Xu Sun", "Xuancheng Ren", "Jingjing Xu" ], "title": "Minimal effort back propagation for convolutional neural networks", "venue": "arXiv preprint arXiv:1709.05804,", "year": 2017 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have taken center stage as a powerful way to construct and train massivelyparametric machine learning (ML) models for supervised, unsupervised, and reinforcement learning tasks. There are many reasons for the resurgence of neural networks—large data sets, GPU numerical computing, technical insights into overparameterization, and more—but one major factor has been the development of tools for automatic differentiation (AD) of deep architectures. Tools like PyTorch and TensorFlow provide a computational substrate for rapidly exploring a wide variety of differentiable architectures without performing tedious and error-prone gradient derivations. The flexibility of these tools has enabled a revolution in AI research, but the underlying ideas for reverse-mode AD go back decades. While tools like PyTorch and TensorFlow have received huge dividends from a half-century of AD research, they are also burdened by the baggage of design decisions made in a different computational landscape. The research on AD that led to these ubiquitous deep learning frameworks is focused on the computation of Jacobians that are exact up to numerical precision. However, in modern workflows these Jacobians are used for stochastic optimization. We ask:\nWhy spend resources on exact gradients when we’re going to use stochastic optimization?\nThis question is motivated by the surprising realization over the past decade that deep neural network training can be performed almost entirely with first-order stochastic optimization. In fact, empirical evidence supports the hypothesis that the regularizing effect of gradient noise assists model generalization (Keskar et al., 2017; Smith & Le, 2018; Hochreiter & Schmidhuber, 1997). Stochastic gradient descent variants such as AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2015) form the core of almost all successful optimization techniques for these models, using small subsets of the data to form the noisy gradient estimates.\n1Department of Computer Science 2Department of Astrophysical Sciences\nfrom math import sin , exp\ndef f (x1, x2): a = exp(x1) b = sin (x2) c = b ∗ x2 d = a ∗ c return a ∗ d\na exp(x1)\nb sin(x2)\nc b * x2\nd a * c\nf a * d\na\nb\nc\nd\nf\nexp(x1)\ncos(x2)\nx2\nb\nc a\nd\na\na\nb\nc\nd\nf\nexp(x1)\ncos(x2)\nx2\nb\nc a\nd\na\nThe goals and assumptions of automatic differentiation as performed in classical and modern systems are mismatched with those required by stochastic optimization. Traditional AD computes the derivative or Jacobian of a function accurately to numerical precision. This accuracy is required for many problems in applied mathematics which AD has served, e.g., solving systems of differential equations. But in stochastic optimization we can make do with inaccurate gradients, as long as our estimator is unbiased and has reasonable variance. We ask the same question that motivates mini-batch SGD: why compute an exact gradient if we can get noisy estimates cheaply? By thinking of this question in the context of AD, we can go beyond mini-batch SGD to more general schemes for developing cheap gradient estimators: in this paper, we focus on developing gradient estimators with low memory cost. 
Although previous research has investigated approximations in the forward or reverse pass of neural networks to reduce computational requirements, here we replace deterministic AD with randomized automatic differentiation (RAD), trading off computation for variance inside AD routines when imprecise gradient estimates are tolerable, while retaining unbiasedness." }, { "heading": "2 AUTOMATIC DIFFERENTIATION", "text": "Automatic (or algorithmic) differentiation is a family of techniques for taking a program that computes a differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$, and producing another program that computes the associated derivatives; most often the Jacobian: $J[f] = f' : \mathbb{R}^n \to \mathbb{R}^{m \times n}$. (For a comprehensive treatment of AD, see Griewank & Walther (2008); for an ML-focused review see Baydin et al. (2018).) In most machine learning applications, f is a loss function that produces a scalar output, i.e., m = 1, for which the gradient with respect to parameters is desired. AD techniques are contrasted with the method of finite differences, which approximates derivatives numerically using a small but non-zero step size, and also distinguished from symbolic differentiation in which a mathematical expression is processed using standard rules to produce another mathematical expression, although Elliott (2018) argues that the distinction is simply whether or not it is the compiler that manipulates the symbols.\nThere are a variety of approaches to AD: source-code transformation (e.g., Bischof et al. (1992); Hascoet & Pascual (2013); van Merrienboer et al. (2018)), execution tracing (e.g., Walther & Griewank (2009); Maclaurin et al.), manipulation of explicit computational graphs (e.g., Abadi et al. (2016); Bergstra et al. (2010)), and category-theoretic transformations (Elliott, 2018). AD implementations exist for many different host languages, although they vary in the extent to which they take advantage of native programming patterns, control flow, and language features. Regardless of whether it is constructed at compile-time, run-time, or via an embedded domain-specific language, all AD approaches can be understood as manipulating the linearized computational graph (LCG) to collapse out intermediate variables. Figure 1 shows the LCG for a simple example. These computational graphs are always directed acyclic graphs (DAGs) with vertices as variables.\nLet the outputs of f be $y_j$, the inputs $\theta_i$, and the intermediates $z_l$. AD can be framed as the computation of a partial derivative as a sum over all paths through the LCG DAG (Bauer, 1974):\n$$\frac{\partial y_j}{\partial \theta_i} = J_\theta[f]_{j,i} = \sum_{[i \to j]} \; \prod_{(k,l) \in [i \to j]} \frac{\partial z_l}{\partial z_k} \qquad (1)$$\nwhere $[i \to j]$ indexes paths from vertex $i$ to vertex $j$ and $(k,l) \in [i \to j]$ denotes the set of edges in that path. See Figure 1d for an illustration. Although general, this naïve sum over paths does not take advantage of the structure of the problem and so, as in other kinds of graph computations, dynamic programming (DP) provides a better approach. DP collapses substructures of the graph until it becomes bipartite and the remaining edges from inputs to outputs represent exactly the entries of the Jacobian matrix. This is referred to as the Jacobian accumulation problem (Naumann, 2004) and there are a variety of ways to manipulate the graph, including vertex, edge, and face elimination (Griewank & Naumann, 2002).
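Before turning to those strategies, Equation 1 can be checked numerically on the function of Figure 1. The sketch below (our illustration, not code from the paper) enumerates every path of the LCG, multiplies the local partial derivatives along each path, and compares the sum to the closed-form derivative:

```python
from math import sin, cos, exp

def lcg_edges(x1, x2):
    """Linearized computational graph of f from Figure 1: each edge carries
    the local partial derivative of the child with respect to the parent."""
    a = exp(x1); b = sin(x2); c = b * x2; d = a * c
    return {('x1', 'a'): exp(x1), ('x2', 'b'): cos(x2),
            ('x2', 'c'): b,       ('b', 'c'): x2,
            ('a', 'd'): c,        ('c', 'd'): a,
            ('a', 'f'): d,        ('d', 'f'): a}

def paths(edges, src, dst):
    """All directed paths src -> dst in the acyclic graph."""
    if src == dst:
        return [[]]
    return [[(src, v)] + rest
            for (u, v) in edges if u == src
            for rest in paths(edges, v, dst)]

def bauer_derivative(x1, x2, wrt):
    """Equation 1: sum over paths of the product of edge partials."""
    edges = lcg_edges(x1, x2)
    total = 0.0
    for path in paths(edges, wrt, 'f'):
        prod = 1.0
        for e in path:
            prod *= edges[e]
        total += prod
    return total

x1, x2 = 0.3, 1.1
print(bauer_derivative(x1, x2, 'x1'))   # sum over the two x1 -> f paths
print(2 * exp(2 * x1) * sin(x2) * x2)   # matches the closed-form df/dx1
```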
Forward-mode AD and reverse-mode AD (backpropagation) are special cases of more general dynamic programming strategies to perform this summation; determination of the optimal accumulation schedule is unfortunately NP-complete (Naumann, 2008).\nWhile the above formulation in which each variable is a scalar can represent any computational graph, it can lead to structures that are difficult to reason about. Often we prefer to manipulate vectors and matrices, and we can instead let each intermediate zl represent a dl dimensional vector. In this case, ∂zl/∂zk ∈ Rdl×dk represents the intermediate Jacobian of the operation zk → zl. Note that Equation 1 now expresses the Jacobian of f as a sum over chained matrix products." }, { "heading": "3 RANDOMIZING AUTOMATIC DIFFERENTIATION", "text": "We introduce techniques that could be used to decrease the resource requirements of AD when used for stochastic optimization. We focus on functions with a scalar output where we are interested in the gradient of the output with respect to some parameters, Jθ[f ]. Reverse-mode AD efficiently calculates Jθ[f ], but requires the full linearized computational graph to either be stored during the forward pass, or to be recomputed during the backward pass using intermediate variables recorded during the forward pass. For large computational graphs this could provide a large memory burden.\nThe most common technique for reducing the memory requirements of AD is gradient checkpointing (Griewank & Walther, 2000; Chen et al., 2016), which saves memory by adding extra forward pass computations. Checkpointing is effective when the number of \"layers\" in a computation graph is much larger than the memory required at each layer. We take a different approach; we instead aim to save memory by increasing gradient variance, without extra forward computation.\nOur main idea is to consider an unbiased estimator Ĵθ[f ] such that EĴθ[f ] = Jθ[f ] which allows us to save memory required for reverse-mode AD. Our approach is to determine a sparse (but random) linearized computational graph during the forward pass such that reverse-mode AD applied on the sparse graph yields an unbiased estimate of the true gradient. Note that the original computational graph is used for the forward pass, and randomization is used to determine a LCG to use for the backward pass in place of the original computation graph. We may then decrease memory costs by storing the sparse LCG directly or storing intermediate variables required to compute the sparse LCG.\nIn this section we provide general recipes for randomizing AD by sparsifying the LCG. In sections 4 and 5 we apply these recipes to develop specific algorithms for neural networks and linear PDEs which achieve concrete memory savings." }, { "heading": "3.1 PATH SAMPLING", "text": "Observe that in Bauer’s formula each Jacobian entry is expressed as a sum over paths in the LCG. A simple strategy is to sample paths uniformly at random from the computation graph, and form a Monte Carlo estimate of Equation 1. Naïvely this could take multiple passes through the graph. However, multiple paths can be sampled without significant computation overhead by performing a topological sort of the vertices and iterating through vertices, sampling multiple outgoing edges for each. We provide a proof and detailed algorithm in the appendix. Dynamic programming methods such as reverse-mode automatic differentiation can then be applied to the sparsified LCG." 
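As an illustration of this recipe (ours, not the paper's reference algorithm), the sketch below sparsifies the toy LCG of Figure 1 by keeping one reweighted outgoing edge per vertex and verifies that the resulting path-sum estimator is unbiased:

```python
import random
from math import sin, cos, exp

def lcg(x1, x2):
    """Edge weights (local partials) of the Figure 1 LCG."""
    a, b = exp(x1), sin(x2)
    c = b * x2
    d = a * c
    return {('x1', 'a'): exp(x1), ('x2', 'b'): cos(x2),
            ('x2', 'c'): b,       ('b', 'c'): x2,
            ('a', 'd'): c,        ('c', 'd'): a,
            ('a', 'f'): d,        ('d', 'f'): a}

def path_sum(weights, src, dst):
    """Equation 1: sum over paths of the product of edge weights."""
    if src == dst:
        return 1.0
    return sum(w * path_sum(weights, v, dst)
               for (u, v), w in weights.items() if u == src)

def sparsify(weights, k=1):
    """Keep k outgoing edges per vertex (sampled with replacement) and
    reweight them so the sparsified edge weights remain unbiased."""
    outgoing = {}
    for (u, v) in weights:
        outgoing.setdefault(u, []).append((u, v))
    sparse = {}
    for u, edges in outgoing.items():
        for _ in range(k):
            e = random.choice(edges)
            sparse[e] = sparse.get(e, 0.0) + weights[e] * len(edges) / k
    return sparse

x1, x2 = 0.3, 1.1
exact = path_sum(lcg(x1, x2), 'x2', 'f')
estimate = sum(path_sum(sparsify(lcg(x1, x2)), 'x2', 'f')
               for _ in range(20000)) / 20000
print(exact, estimate)   # the Monte Carlo average converges to the exact df/dx2
```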
}, { "heading": "3.2 RANDOM MATRIX INJECTION", "text": "In computation graphs consisting of vector operations, the vectorized computation graph is a more compact representation. We introduce an alternative view on sampling paths in this case. A single path in the vectorized computation graph represents many paths in the underlying scalar computation graph. As an example, Figure 2c is a vector representation for Figure 2b. For this example,\n∂y ∂θ = ∂y ∂C ∂C ∂B ∂B ∂A ∂A ∂θ (2)\nwhere A,B,C are vectors with entries ai, bi, ci, ∂C/∂B, ∂B/∂A are 3× 3 Jacobian matrices for the intermediate operations, ∂y/∂C is 1× 3, and ∂A/∂θ is 3× 1.\nWe now note that the contribution of the path p = θ → a1 → b2 → c2 → y to the gradient is,\n∂y ∂C P2 ∂C ∂B P2 ∂B ∂A P1 ∂A ∂θ (3)\nwhere Pi = eieTi (outer product of standard basis vectors). Sampling from {P1, P2, P3} and right multiplying a Jacobian is equivalent to sampling the paths passing through a vertex in the scalar graph.\nIn general, if we have transition B → C in a vectorized computational graph, where B ∈ Rd, C ∈ Rm, we can insert a random matrix P = d/k ∑k s=1 Ps where each Ps is sampled uniformly from {P1, P2, . . . , Pd}. With this construction, EP = Id, so\nE [ ∂C\n∂B P\n] = ∂C\n∂B . (4)\nIf we have a matrix chain product, we can use the fact that the expectation of a product of independent random variables is equal to the product of their expectations, so drawing independent random matrices PB , PC would give\nE [ ∂y\n∂C PC\n∂C ∂B PB\n] = ∂y\n∂C E [PC ]\n∂C ∂B E [PB ] = ∂y ∂C ∂C ∂B (5)\nRight multiplication by P may be achieved by sampling the intermediate Jacobian: one does not need to actually assemble and multiply the two matrices. For clarity we adopt the notation SP [∂C/∂B] = ∂C/∂BP . This is sampling (with replacement) k out of the d vertices represented by B, and only considering paths that pass from those vertices.\nThe important properties of P that enable memory savings with an unbiased approximation are\nEP = Id and P = RRT , R ∈ Rd×k, k < d . (6) We could therefore consider other matrices with the same properties. In our additional experiments in the appendix, we also let R be a random projection matrix of independent Rademacher random variables, a construction common in compressed sensing and randomized dimensionality reduction.\nIn vectorized computational graphs, we can imagine a two-level sampling scheme. We can both sample paths from the computational graph where each vertex on the path corresponds to a vector. We can also sample within each vector path, with sampling performed via matrix injection as above.\nIn many situations the full intermediate Jacobian for a vector operation is unreasonable to store. Consider the operation B → C where B,C ∈ Rd. The Jacobian is d × d. Thankfully many common operations are element-wise, leading to a diagonal Jacobian that can be stored as a d-vector. Another common operation is matrix-vector products. Consider Ab = c, ∂c/∂b = A. Although A has many more entries than c or b, in many applications A is either a parameter to be optimized or is easily recomputed. Therefore in our implementations, we do not directly construct and sparsify the Jacobians. We instead sparsify the input vectors or the compact version of the Jacobian in a way that has the same effect. Unfortunately, there are some practical operations such as softmax that do not have a compactly-representable Jacobian and for which this is not possible." 
}, { "heading": "3.3 VARIANCE", "text": "The variance incurred by path sampling and random matrix injection will depend on the structure of the LCG. We present two extremes in Figure 2. In Figure 2a, each path is independent and there are a small number of paths. If we sample a fixed fraction of all paths, variance will be constant in the depth of the graph. In contrast, in Figure 2b, the paths overlap, and the number of paths increases exponentially with depth. Sampling a fixed fraction of all paths would require almost all edges in the graph, and sampling a fixed fraction of vertices at each layer (using random matrix injection, as an example) would lead to exponentially increasing variance with depth.\nIt is thus difficult to apply sampling schemes without knowledge of the underlying graph. Indeed, our initial efforts to apply random matrix injection schemes to neural network graphs resulted in variance exponential with depth of the network, which prevented stochastic optimization from converging. We develop tailored sampling strategies for computation graphs corresponding to problems of common interest, exploiting properties of these graphs to avoid the exploding variance problem." }, { "heading": "4 CASE STUDY: NEURAL NETWORKS", "text": "We consider neural networks composed of fully connected layers, convolution layers, ReLU nonlinearities, and pooling layers. We take advantage of the important property that many of the intermediate Jacobians can be compactly stored, and the memory required during reverse-mode is often bottlenecked by a few operations. We draw a vectorized computational graph for a typical simple neural network in figure 3. Although the diagram depicts a dataset of size of 3, mini-batch size of size 1, and 2 hidden layers, we assume the dataset size is N . Our analysis is valid for any number of hidden layers, and also recurrent networks. We are interested in the gradients ∂y/∂W1 and ∂y/∂W2." }, { "heading": "4.1 MINIBATCH SGD AS RANDOMIZED AD", "text": "At first look, the diagram has a very similar pattern to that of 2a, so that path sampling would be a good fit. Indeed, we could sample B < N paths from W1 to y, and also B paths from W2 to y. Each path corresponds to processing a different mini-batch element, and the computations are independent.\nIn empirical risk minimization, the final loss function is an average of the loss over data points. Therefore, the intermediate partials ∂y/∂h2,x for each data point x will be independent of the other data points. As a result, if the same paths are chosen in path sampling for W1 and W2, and if we are only interested in the stochastic gradient (and not the full function evaluation), the computation graph only needs to be evaluated for the data points corresponding to the sampled paths. This exactly corresponds to mini-batching. The paths are visually depicted in Figure 3b." }, { "heading": "4.2 ALTERNATIVE SGD SCHEMES WITH RANDOMIZED AD", "text": "We wish to use our principles to derive a randomization scheme that can be used on top of mini-batch SGD. We ensure our estimator is unbiased as we randomize by applying random matrix injection independently to various intermediate Jacobians. Consider a path corresponding to data point 1. The contribution to the gradient ∂y/∂W1 is\n∂y\n∂h2,1 ∂h2,1 ∂a1,1 ∂a1,1 ∂h1,1 ∂h1,1 ∂W1\n(7)\nUsing random matrix injection to sample every Jacobian would lead to exploding variance. Instead, we analyze each term to see which are memory bottlenecks. 
∂y/∂h2,1 is the Jacobian with respect to (typically) the loss. Memory requirements for this Jacobian are independent of depth of the network. The dimension of the classifier is usually smaller (10− 1000) than the other layers (which can have dimension 10, 000 or more in convolutional networks). Therefore, the Jacobian at the output layer is not a memory bottleneck.\nX\nW1\nH1 A1\nW2\nH2 L\nFigure 4: Convnet activation sampling for one minibatch element. X is the image, H is the pre-activation, and A is the activation. A is the output of a ReLU, so we can store the Jacobian ∂A1/∂H1 with 1 bit per entry. For X and H we sample spatial elements and compute the Jacobians ∂H1/∂W1 and ∂H2/∂W2 with the sparse tensors.\n∂h2,1/∂a1,1 is the Jacobian of the hidden layer with respect to the previous layer activation. This can be constructed from W2, which must be stored in memory, with memory cost independent of mini-batch size. In convnets, due to weight sharing, the effective dimensionality is much smaller than H1 ×H2. In recurrent networks, it is shared across timesteps. Therefore, these are not a memory bottleneck. ∂a1,1/∂h1,1 contains the Jacobian of the ReLU activation function. This can be compactly stored using 1-bit per entry, as the gradient can only be 1 or 0. Note that this is true for ReLU activations in particular, and not true for general activation functions, although ReLU is widely used in deep learning. For ReLU activations, these partials are not a memory bottleneck. ∂h1,1/∂W1 contains the memory bottleneck for typical ReLU neural networks. This is the Jacobian of the hidden layer output with respect to W1, which, in a multi-layer perceptron, is equal to x1. For B data points, this is a B ×D dimensional matrix. Accordingly, we choose to sample ∂h1,1/∂W1, replacing the matrix chain with ∂y∂h2,1 ∂h2,1 ∂a1,1 ∂a1,1 ∂h1,1 SPW1 [ ∂h1,1 ∂W1 ] . For an arbitrarily deep NN, this can be generalized:\n∂y\n∂hd,1 ∂hd,1 ∂ad−1,1 ∂ad−1,1 ∂hd−1,1 . . . ∂a1,1 ∂h1,1 SPW1 [ ∂h1,1 ∂W1 ] ,\n∂y\n∂hd,1 ∂hd,1 ∂ad−1,1 ∂ad−1,1 ∂hd−1,1 . . . ∂a2,1 ∂h2,1 SPW2 [ ∂h2,1 ∂W2 ] This can be interpreted as sampling activations on the backward pass. This is our proposed alternative SGD scheme for neural networks: along with sampling data points, we can also sample activations, while maintaining an unbiased approximation to the gradient. This does not lead to exploding variance, as along any path from a given neural network parameter to the loss, the sampling operation is only applied to a single Jacobian. Sampling for convolutional networks is visualized in Figure 4." }, { "heading": "4.3 NEURAL NETWORK EXPERIMENTS", "text": "We evaluate our proposed RAD method on two feedforward architectures: a small fully connected network trained on MNIST, and a small convolutional network trained on CIFAR-10. We also evaluate our method on an RNN trained on Sequential-MNIST. The exact architectures and the calculations for the associated memory savings from our method are available in the appendix. In Figure 5 we include empirical analysis of gradient noise caused by RAD vs mini-batching.\nWe are mainly interested in the following question:\nFor a fixed memory budget and fixed number of gradient descent iterations, how quickly does our proposed method optimize the training loss compared to standard SGD with a smaller mini-batch?\nReducing the mini-batch size will also reduce computational costs, while RAD will only reduce memory costs. Theoretically our method could reduce computational costs slightly, but this is not our focus. 
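Before turning to the experiments, here is a rough sketch of what sampling ∂h/∂W on the backward pass can look like for a single bias-free linear layer. This is our own PyTorch mock-up, not the authors' implementation: the class name RandLinear and the frac argument are ours, and bookkeeping such as bit-packed ReLU masks is omitted. The layer stores only a sampled subset of its input features, scaled by d/k, and scatters the resulting gradient columns back into place.

```python
import torch

class RandLinear(torch.autograd.Function):
    """y = x @ W^T, but only k of the d input features are saved for backward."""

    @staticmethod
    def forward(ctx, x, weight, frac=0.1):
        d = x.shape[1]
        k = max(1, int(frac * d))
        idx = torch.randint(0, d, (k,), device=x.device)   # sample with replacement
        ctx.save_for_backward(x[:, idx], weight, idx)       # store (B, k) instead of (B, d)
        ctx.scale = d / k
        return x @ weight.t()                               # the forward pass is exact

    @staticmethod
    def backward(ctx, grad_out):
        x_sub, weight, idx = ctx.saved_tensors
        grad_x = grad_out @ weight                           # exact: only needs the weight
        grad_w = torch.zeros_like(weight)
        # unbiased estimate of grad_out^T x: scatter the sampled columns, scaled by d/k
        grad_w.index_add_(1, idx, (grad_out.t() @ x_sub) * ctx.scale)
        return grad_x, grad_w, None

# usage: the weight gradient is an unbiased (but noisier) estimate of exact backprop
x = torch.randn(32, 784)
w = torch.randn(300, 784, requires_grad=True)
RandLinear.apply(x, w, 0.1).relu().sum().backward()
print(w.grad.shape)   # torch.Size([300, 784])
```

The gradient passed to earlier layers remains exact because it only requires the weight matrix; only the estimate of the weight gradient is randomized.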
We only consider the memory/gradient variance tradeoff while avoiding adding significant overhead on top of vanilla reverse-mode (as is the case for checkpointing).
Results are shown in Figure 6. Our feedforward network full-memory baseline is trained with a mini-batch size of 150. For RAD we keep a mini-batch size of 150 and try 2 different configurations. For \"same sample\", we sample with replacement a 0.1 fraction of activations, and the same activations are sampled for each mini-batch element. For “different sample”, we sample a 0.1 fraction of activations, independently for each mini-batch element. Our \"reduced batch\" experiment is trained without RAD with a mini-batch size of 20 for CIFAR-10 and 22 for MNIST. This achieves a similar memory budget to RAD with mini-batch size 150. Details of this calculation and of hyperparameters are in the appendix.
[Figure 6: Test loss, test accuracy, and training time vs. iterations for SmallConvNet on CIFAR-10 and SmallFCNet on MNIST, comparing the Reduced batch, Baseline, Same Sample, and Different Sample estimators.]
Per-mini-batch memory (MB) as a function of the fraction of activations kept:
Fraction of activations   Baseline (1.0)   0.8     0.5     0.3     0.1     0.05
ConvNet Mem               23.08            19.19   12.37   7.82    3.28    2.14
Fully Connected Mem       2.69             2.51    2.21    2.00    1.80    1.75
RNN Mem                   47.93            39.98   25.85   16.43   7.01    4.66
[Figure 5 plot: MSE of stochastic gradients from the full gradient (averaged over 1000 batches) per layer (conv1-conv4, fc5); legend: Batch 150 Baseline (23.08 MB), Batch 300 Kept Activation Fraction 0.1 (6.21 MB), Batch 150 Kept Activation Fraction 0.1 (3.28 MB), Batch 20 Baseline (3.37 MB).]
Figure 5: We visualize the gradient noise for each stochastic gradient method by computing the full gradient (over all mini-batches in the training set) and computing the mean squared error deviation for the gradient estimated by each method for each layer in the convolutional net. RAD has significantly less variance vs memory than reducing mini-batch size. Furthermore, combining RAD with an increased mini-batch size achieves similar variance to the baseline with 150 mini-batch elements while saving memory.
For the feedforward networks we tune the learning rate and ℓ2 regularization parameter separately for each gradient estimator on a randomly held out validation set. We train with the best performing hyperparameters on bootstrapped versions of the full training set to measure variability in training. Details are in the appendix, including plots for train/test accuracy/loss and a wider range of fractions of activations sampled. 
All feedforward models are trained with Adam.\nIn the RNN case, we also run baseline, “same sample”, “different sample” and “reduced batch” experiments. The “reduced batch” experiment used a mini-batch size of 21, while the others used a mini-batch size of 150. The learning rate was fixed at 10−4 for all gradient estimators, found via a coarse grid search for the largest learning rate for which optimization did not diverge. Although we did not tune the learning rate separately for each estimator, we still expect that with a fixed learning rate, the lower variance estimators should perform better. When sampling, we sample different activations at each time-step. All recurrent models are trained with SGD without momentum." }, { "heading": "5 CASE STUDY: REACTION-DIFFUSION PDE-CONSTRAINED OPTIMIZATION", "text": "Our second application is motivated by the observation that many scientific computing problems involve a repeated or iterative computation resulting in a layered computational graph. We may apply RAD to get a stochastic estimate of the gradient by subsampling paths through the computational graph. For certain problems, we can leverage problem structure to develop a low-memory stochastic gradient estimator without exploding variance. To illustrate this possibility we consider the optimization of a linear reaction-diffusion PDE on a square domain with Dirichlet boundary conditions, representing the production and diffusion of neutrons in a fission reactor (McClarren,\n2018). Simulating this process involves solving for a potential φ(x, y, t) varying in two spatial coordinates and in time. The solution obeys the partial differential equation:\n∂φ(x, y, t)\n∂t = D∇2φ(x, y, t) + C(x, y, t,θ)φ(x, y, t)\nWe discretize the PDE in time and space and solve on a spatial grid using an explicit update rule φt+1 = Mφt + ∆tCt φt, where M summarizes the discretization of the PDE in space. The exact form is available in the appendix. The initial condition is φ0 = sin (πx) sin (πy), with φ = 0 on the boundary of the domain. The loss function is the time-averaged squared error between φ and a time-dependent target, L = 1/T ∑ t ||φt(θ)− φ target t ||22. The target is φtargett = φ0 + 1/4 sin (πt) sin (2πx) sin (πy). The source C is given by a seven-term Fourier series in x and t, with coefficients given by θ ∈ R7, where θ is the control parameter to be optimized. Full simulation details are provided in the appendix.\nThe gradient is ∂L∂θ = ∑T t=1 ∂L ∂φt ∑t i=1 (∏t−1 j=i ∂φj+1 ∂φj ) ∂φi ∂Ci−1 ∂Ci−1 ∂θ . As the reaction-diffusion PDE is linear and explicit, ∂φj+1/∂φj ∈ RN2x×N2x is known and independent of φ. We avoid storingC at each timestep by recomputing C from θ and t. This permits a low-memory stochastic gradient estimate without exploding variance by sampling from ∂L/∂φt ∈ RN2x and the diagonal matrix ∂φi/∂Ci−1, replacing ∂L∂θ with the unbiased estimator\nT∑ t=1 SPφt [ ∂L ∂φt ] t∑ i=1 ( t−1∏ j=i ∂φj+1 ∂φj ) SPφi−1 [ ∂φi ∂Ci−1 ] ∂Ci−1 ∂θ . (8)\nThis estimator can reduce memory by as much as 99% without harming optimization; see Figure 7b.\n6 RELATED WORK\nApproximating gradients and matrix operations Much thought has been given to the approximation of general gradients and Jacobians. We draw inspiration from this literature, although our main objective is designing an unbiased gradient estimator, rather than an approximation with bounded accuracy. Abdel-Khalik et al. 
(2008) accelerate Jacobian accumulation via random projections, in a similar manner to randomized methods for SVD and matrix multiplication. Choromanski & Sindhwani (2017) recover Jacobians in cases where AD is not available by performing a small number of function evaluations with random input perturbations and leveraging known structure of the Jacobian (such as sparsity and symmetry) via compressed sensing.\nOther work aims to accelerate neural network training by approximating operations from the forward and/or backward pass. Sun et al. (2017) and Wei et al. (2017) backpropagate sparse gradients, keeping only the top k elements of the adjoint vector. Adelman & Silberstein (2018) approximate matrix multiplications and convolutions in the forward pass of neural nets nets using a columnrow sampling scheme similar to our subsampling scheme. Their method also reduces the computational cost of the backwards pass but changes the objective landscape.\nRelated are invertible and reversible transformations, which remove the need to save intermediate variables on\nthe forward pass, as these can be recomputed on the backward pass. Maclaurin et al. (2015) use this idea for hyperparameter optimization, reversing the dynamics of SGD with momentum to avoid the expense of saving model parameters at each training iteration. Gomez et al. (2017) introduce a reversible ResNet (He et al., 2016) to avoid storing activations. Chen et al. (2018) introduce Neural ODEs, which also have constant memory cost as a function of depth.\nLimited-memory learning and optimization Memory is a major bottleneck for reverse-mode AD, and much work aims to reduce its footprint. Gradient checkpointing is perhaps the most well known, and has been used for both reverse-mode AD (Griewank & Walther, 2000) with general layerwise computation graphs, and for neural networks (Chen et al., 2016). In gradient checkpointing, some subset of intermediate variables are saved during function evaluation, and these are used to re-compute downstream variables when required. Gradient checkpointing achieves sublinear memory cost with the number of layers in the computation graph, at the cost of a constant-factor increase in runtime.\nStochastic Computation Graphs Our work is connected to the literature on stochastic estimation of gradients of expected values, or of the expected outcome of a stochastic computation graph. The distinguishing feature of this literature (vs. the proposed RAD approach) is that it uses stochastic estimators of an objective value to derive a stochastic gradient estimator, i.e., the forward pass is randomized. Methods such as REINFORCE (Williams, 1992) optimize an expected return while avoiding enumerating the intractably large space of possible outcomes by providing an unbiased stochastic gradient estimator, i.e., by trading computation for variance. This is also true of mini-batch SGD, and methods for training generative models such as contrastive divergence (Hinton, 2002), and stochastic optimization of evidence lower bounds (Kingma & Welling, 2013). Recent approaches have taken intractable deterministic computation graphs with special structure, i.e. involving loops or the limits of a series of terms, and developed tractable, unbiased, randomized telescoping series-based estimators for the graph’s output, which naturally permit tractable unbiased gradient estimation (Tallec & Ollivier, 2017; Beatson & Adams, 2019; Chen et al., 2019; Luo et al., 2020)." 
}, { "heading": "7 CONCLUSION", "text": "We present a framework for randomized automatic differentiation. Using this framework, we construct reduced-memory unbiased estimators for optimization of neural networks and a linear PDE. Future work could develop RAD formulas for new computation graphs, e.g., using randomized rounding to handle arbitrary activation functions and nonlinear transformations, integrating RAD with the adjoint method for PDEs, or exploiting problem-specific sparsity in the Jacobians of physical simulators. The randomized view on AD we introduce may be useful beyond memory savings: we hope it could be a useful tool in developing reduced-computation stochastic gradient methods or achieving tractable optimization of intractable computation graphs." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors would like to thank Haochen Li for early work on this project. We would also like to thank Greg Gundersen, Ari Seff, Daniel Greenidge, and Alan Chung for helpful comments on the manuscript. This work is partially supported by NSF IIS-2007278." }, { "heading": "APPENDIX A: NEURAL NETWORK EXPERIMENTS", "text": "" }, { "heading": "RANDOM PROJECTIONS FOR RAD", "text": "As mentioned in Section 3.2 (around Equation 5) of the main paper, we could also use different matrices P that have the properties\nEP = Id and P = RRT , R ∈ Rd×k, k < d . In the appendix we report experiments of letting R be a matrix of iid Rademacher random variables, scaled by √ k. P = RRT defined in this way satisfies the properties above. Note that this would lead to additional computation: The Jacobian or input vector would have to be fully computed, and then multiplied by R and stored. In the backward pass, it would have to be multiplied by RT . We report results as the “project” experiment in the full training/test curves in the following sections. We see that it performs competitively with reducing the mini-batch size." }, { "heading": "ARCHITECTURES USED", "text": "We use three different neural network architectures for our experiments: one fully connected feedforward, one convolutional feedforward, and one recurrent.\nOur fully-connected architecture consists of:\n1. Input: 784-dimensional flattened MNIST Image 2. Linear layer with 300 neurons (+ bias) (+ ReLU) 3. Linear layer with 300 neurons (+ bias) (+ ReLU) 4. Linear layer with 300 neurons (+ bias) (+ ReLU) 5. Linear layer with 10 neurons (+ bias) (+ softmax)\nOur convolutional architecture consists of:\n1. Input: 3× 32× 32-dimensional CIFAR-10 Image 2. 5× 5 convolutional layer with 16 feature maps (+ 2 zero-padding) (+ bias) (+ ReLU) 3. 5× 5 convolutional layer with 32 feature maps (+ 2 zero-padding) (+ bias) (+ ReLU) 4. 2× 2 average pool 2-d 5. 5× 5 convolutional layer with 32 feature maps (+ 2 zero-padding) (+ bias) (+ ReLU) 6. 5× 5 convolutional layer with 32 feature maps (+ 2 zero-padding) (+ bias) (+ ReLU) 7. 2× 2 average pool 2-d (+ flatten) 8. Linear layer with 10 neurons (+ bias) (+ softmax)\nOur recurrent architecture was taken from Le et al. (2015) and consists of:\n1. Input: A sequence of length 784 of 1-dimensional pixels values of a flattened MNIST image. 2. A single RNN cell of the form\nht = ReLU(Wihxt + bih +Whhht−1 + bhh)\nwhere the hidden state (ht) dimension is 100 and xt is the 1-dimensional input. 3. An output linear layer with 10 neurons (+ bias) (+ softmax) that has as input the last hidden\nstate." 
}, { "heading": "CALCULATION OF MEMORY SAVED FROM RAD", "text": "For the baseline models, we assume inputs to the linear layers and convolutional layers are stored in 32-bits per dimensions. The ReLU derivatives are then recalculated on the backward pass.\nFor the RAD models, we assume inputs are sampled or projected to 0.1 of their size (rounded up) and stored in 32-bits per dimension. Since ReLU derivatives can not exactly be calculated now, we\nassume they take 1-bit per dimension (non-reduced dimension) to store. The input to the softmax layer is not sampled or projected.\nIn both cases, the average pool and bias gradients does not require saving since the gradient is constant.\nFor MNIST fully connected, this gives (per mini-batch element memory):\nBaseline: (784 + 300 + 300 + 300 + 10) · 32 bits = 6.776 kBytes RAD 0.1: (79 + 30 + 30 + 30 + 10) · 32 bits + (300 + 300 + 300) · 1 bits = 828.5 bytes which leads to approximately 8x savings per mini-batch element.\nFor CIFAR-10 convolutional, this gives (per mini-batch element memory):\nBaseline: (3 ·32 ·32+16 ·32 ·32+32 ·16 ·16+32 ·16 ·16+32 ·8 ·8+10) ·32 bits = 151.59 kBytes RAD 0.1: (308+ 1639+ 820+ 820+ 205+ 10) ·32 bits +(16384 +8192 +8192 +2048) ·1 bits = 19.56 kBytes\nwhich leads to approximately 7.5x savings per mini-batch element.\nFor Sequential-MNIST RNN, this gives (per mini-batch element memory):\nBaseline: (784 · (1 + 100) + 100 + 10) · 32 bits = 317.176 kBytes RAD 0.1: (784 · (1 + 10) + 10 + 10) · 32 bits + (784 · 100) · 1 bits = 44.376 kBytes which leads to approximately 7.15x savings per mini-batch element." }, { "heading": "FEEDFORWARD NETWORK TRAINING DETAILS", "text": "We trained the CIFAR-10 models for 100, 000 gradient descent iterations with a fixed mini-batch size, sampled with replacement from the training set. We lower the learning rate by 0.6 every 10, 000 iterations. We train with the Adam optimizer. We center the images but do not use data augmentation. The MNIST models were trained similarly, but for 20, 000 iterations, with the learning rate lowered by 0.6 every 2, 000 iterations. We fixed these hyperparameters in the beginning and did not modify them.\nWe tune the initial learning rate and `2 weight decay parameters for each experiment reported in the main text for the feedforward networks. For each experiment (project, same sample, different sample, baseline, reduced batch), for both architectures, we generate 20 (weight decay, learning rate) pairs, where each weight decay is from the loguniform distribution over 0.0000001− 0.001 and learning rate from loguniform distribution over 0.00001− 0.01. We then randomly hold out a validation dataset of size 5000 from the CIFAR-10 and MNIST training sets and train each pair on the reduced training dataset and evaluate on the validation set. For each experiment, we select the hyperparameters that give the highest test accuracy.\nFor each experiment, we train each experiment with the best hyperparameters 5 times on separate bootstrapped resamplings of the full training dataset (50, 000 for CIFAR-10 and 60, 000 for MNIST), and evaluate on the test dataset (10, 000 for both). This is to make sure the differences we observe across experiments are not due to variability in training. In the main text we show 3 randomly selected training curves for each experiment. Below we show all 5.\nAll experiments were run on a single NVIDIA K80 or V100 GPU. Training times were reported on a V100." 
}, { "heading": "RNN TRAINING DETAILS", "text": "All RNN experiments were trained for 200,000 iterations (mini-batch updates) with a fixed mini-batch size, sampled with replacement from the training set. We used the full MNIST training set of 60,000 images whereby the images were centered. Three repetitions of the same experiment were performed with different seeds. Hyperparameter tuning was not performed due to time constraints.\nThe hidden-to-hidden matrix (Whh) is initialised with the identity matrix, the input-to-hidden matrix (Wih) and hidden-to-output (last hidden layer to softmax input) are initialised with a random matrix where each element is drawn independently from a N (0, 0.001) distribution and the biases (bih, bhh) are initialised with zero.\nThe model was evaluated on the test set of 10,000 images every 400 iterations and on the entire training set every 4000 iterations.\nFor the \"sample\", \"different sample\", \"project\" and \"different project\" experiments different activations/random matrices were sampled at every time-step of the unrolled RNN.\nAll experiments were run on a single NVIDIA K80 or V100 GPU.\nThe average running times for each experiment are given in Table 2. Note that we did not optimise our implementation for speed and so these running times can be reduced significantly." }, { "heading": "IMPLEMENTATION", "text": "The code is provided on GitHub1. Note that we did not optimize the code for computational efficiency; we only implemented our method as to demonstrate the effect it has on the number of gradient steps to train. Similarly, we did not implement all of the memory optimizations that we account for in our memory calculations; in particular in our implementation we did not take advantage of storing ReLU derivatives with 1-bit or the fact that average pooling has a constant derivative. Although these would have to be implemented in a practical use-case, they are not necessary in this proof of concept.\n1https://github.com/PrincetonLIPS/RandomizedAutomaticDifferentiation" }, { "heading": "FULL TRAINING/TEST CURVES FOR MNIST AND CIFAR-10", "text": "" }, { "heading": "FULL TRAINING/TEST CURVES FOR RNN ON SEQUENTIAL-MNIST", "text": "" }, { "heading": "SWEEP OF FRACTIONS SAMPLED FOR MNIST AND CIFAR-10", "text": "" }, { "heading": "APPENDIX B: REACTION-DIFFUSION PDE", "text": "The reaction-diffusion equation is a linear parabolic partial differential equation. In fission reactor analysis, it is called the one-group diffusion equation or one-speed diffusion equation, shown below.\n∂φ ∂t = D∇2φ+ Cφ+ S\nHere φ represents the neutron flux, D is a diffusion coefficient, and Cφ and S are source terms related to the local production or removal of neutron flux. In this paper, we solve the one-speed diffusion equation in two spatial dimensions on the unit square with the condition that φ = 0 on the boundary. We assume that D is constant equal to 1/4, C(x, y, t,θ) is a function of control parameters θ described below, and S is zero. We discretize φ on a regular grid in space and time, which motivates the notation φ→ φt. The grid spacing is ∆x = 1/32 and the timestep is ∆t = 1/4096. We simulate from t = 0 to t = 10. We use the explicit forward-time, centered-space (FTCS) method to timestep φ. The timestep is chosen to satisfy the stability criterion, D∆t/(∆x)2 ≤ 14 . 
In matrix notation, the FTCS update rule can be written φt+1 = Mφt + ∆tCt φt, in index notation it can be written as follows:\nφi,jt+1 = φ i,j t +\nD∆t\n(∆x)2\n( φi+1,jt + φ i−1,j t + φ i,j+1 t + φ i,j−1 t − 4φi,jt ) + ∆tCi,jt φ i,j t\nThe termCφ in the one-speed diffusion equation relates to the local production or removal of neutrons due to nuclear interactions. In a real fission reactor, C is a complicated function of the material properties of the reactor and the heights of the control rods. We make the simplifying assumption that C can be described by a 7-term Fourier series in x and t, written below. Physically, this is equivalent to the assumption that the material properties of the reactor are constant in space and time, and the heights of the control rods are sinusoidally varied in x and t. φ0 is initialized so that the reactor begins in a stable state, the other parameters are initialized from a uniform distribution.\nC(x, y, t,θ) = θ0 + θ1 sin(πt) + θ2 cos(πt) + θ3 sin(2πx) sin(πt)+\nθ4 sin(2πx) cos(πt) + θ5 cos(2πx) sin(πt) + θ6 cos(2πx) cos(πt)\nThe details of the stochastic gradient estimate and optimization are described in the main text. The Adam optimizer is used. Each experiment of 800 optimization iterations runs in about 4 hours on a GPU." }, { "heading": "APPENDIX C: PATH SAMPLING ALGORITHM AND ANALYSIS", "text": "Here we present an algorithm for path sampling and provide a proof that it leads to an unbiased estimate for the gradient. The main idea is to sample edges from the set of outgoing edges for each vertex in topological order, and scale appropriately. Vertices that have no incoming edges sampled can be skipped.\nAlgorithm 1 RMAD with path sampling 1: Inputs: 2: G = (V,E) - Computational Graph. dv denotes outdegree, v.succ successor set of vertex v. 3: y - Output vertex 4: Θ = (θ1, θ2, . . . , θm) ⊂ V - Input vertices 5: k > 0 - Number of samples per vertex 6: Initialization: 7: Q(e) = 0,∀e ∈ E 8: for v in topological order; synchronous with forward computation do 9: if No incoming edge of v has been sampled then 10: Continue 11: for k times do 12: Sample i from [dv] uniformly. 13: Q(v, v.succ[i])← Q(v, v.succ[i]) + dvk ∂v.succ[i] ∂v 14: Run backpropagation from y to Θ using Q as intermediate partials. 15: Output: ∇Θy\nThe main storage savings from Algorithm 1 will come from Line 9, where we only consider a vertex if it has an incoming edge that has been sampled. In computational graphs with a large number of independent paths, this will significantly reduce memory required, whether we record intermediate variables and recompute the LCG, or store entries of the LCG directly.\nTo see that path sampling gives an unbiased estimate, we use induction on the vertices in reverse topological order. For every vertex z, we denote z̄ = ∂y∂z and ẑ as our approximation for z̄. For our base case, we let ŷ = dydy = 1, so Eŷ = ȳ. For all other vertices z, we define\nẑ = dz ∑\n(z,v)∈E\nIv=vi ∂v\n∂z v̂ (9)\nwhere dz is the out-degree of z, vi is sampled uniformly from the set of successors of z, and Iv=vi is an indicator random variable denoting if v = vi. We then have\nEẑ = ∑\n(z,v)∈E\ndzE[Iv=vi ] ∂v\n∂z Ev̂ = ∑ (z,v)∈E ∂v ∂z Ev̂ (10)\nassuming that the randomness over the sampling of outgoing edges is independent of v̂, which must be true because our induction is in reverse topological order. Since by induction we assumed Ev̂ = v̄, we have\nEẑ = ∑\n(z,v)∈E\n∂v ∂z v̄ = z̄ (11)\nwhich completes the proof." } ]
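To illustrate Algorithm 1 and the unbiasedness argument just given, the following self-contained sketch (ours) runs path sampling with k = 1 on a toy scalar graph with independent paths, θ → a_i → b_i → y with a_i = w_i θ, b_i = a_i², y = Σ_i b_i, and compares the Monte Carlo average of the sampled gradient with the exact one. The graph, weights, and sample count are arbitrary choices for illustration.

```python
import random

theta, w = 1.5, [0.3, -1.2, 2.0]
a = [wi * theta for wi in w]

# successors of each vertex and the local partial derivative on that edge
succ = {"theta": [(f"a{i}", w[i]) for i in range(3)]}
succ.update({f"a{i}": [(f"b{i}", 2 * a[i])] for i in range(3)})
succ.update({f"b{i}": [("y", 1.0)] for i in range(3)})
topo = ["theta"] + [f"a{i}" for i in range(3)] + [f"b{i}" for i in range(3)]

def sampled_grad(k=1):
    Q, active = {}, {"theta"}                       # input vertices are always expanded
    for v in topo:
        if v not in active:                         # line 9: skip unreached vertices
            continue
        out = succ[v]
        for _ in range(k):                          # lines 11-13: sample outgoing edges
            child, partial = random.choice(out)
            Q[(v, child)] = Q.get((v, child), 0.0) + len(out) / k * partial
            active.add(child)
    adjoint = {"y": 1.0}                            # line 14: backpropagate through Q
    for v in reversed(topo):
        adjoint[v] = sum(Q.get((v, c), 0.0) * adjoint[c] for c, _ in succ[v])
    return adjoint["theta"]

exact = sum(2 * wi * wi * theta for wi in w)        # dy/dtheta = sum_i 2 w_i^2 theta
approx = sum(sampled_grad() for _ in range(200000)) / 200000
print(exact, approx)                                # agree up to Monte Carlo error
```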
2021
RANDOMIZED AUTOMATIC DIFFERENTIATION
SP:5b707bffe506d9556ffedbe49425c57d0e21c9fa
[ "This paper studies the multi-source domain adaptation problem. The authors examine the existing MDA solutions, i.e. using a domain discriminator for each source-target pair, and argue that the existing ones are likely to distribute the domain-discriminative information across multiple discriminators. By theoretically analyzing from the information regularization point, the authors present a simple yet powerful architecture called multi-source information-regularized adaptation network, MIAN." ]
Adversarial learning strategy has demonstrated remarkable performance in dealing with single-source unsupervised Domain Adaptation (DA) problems, and it has recently been applied to multi-source DA problems. Although most existing DA methods use multiple domain discriminators, the effect of using multiple discriminators on the quality of latent space representations has been poorly understood. Here we provide theoretical insights into potential pitfalls of using multiple domain discriminators: First, domain-discriminative information is inevitably distributed across multiple discriminators. Second, it is not scalable in terms of computational resources. Third, the variance of stochastic gradients from multiple discriminators may increase, which significantly undermines training stability. To fully address these issues, we situate adversarial DA in the context of information regularization. First, we present a unified information regularization framework for multi-source DA. It provides a theoretical justification for using a single and unified domain discriminator to encourage the synergistic integration of the information gleaned from each domain. Second, this motivates us to implement a novel neural architecture called a Multi-source Information-regularized Adaptation Networks (MIAN). The proposed model significantly reduces the variance of stochastic gradients and increases computational-efficiency. Large-scale simulations on various multi-source DA scenarios demonstrate that MIAN, despite its structural simplicity, reliably outperforms other state-of-the-art methods by a large margin especially for difficult target domains.
[]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "Orly Alter", "Patrick O Brown", "David Botstein" ], "title": "Singular value decomposition for genome-wide expression data processing and modeling", "venue": "Proceedings of the National Academy of Sciences,", "year": 2000 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman" ], "title": "Learning bounds for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Rita Chattopadhyay", "Qian Sun", "Wei Fan", "Ian Davidson", "Sethuraman Panchanathan", "Jieping Ye" ], "title": "Multisource domain adaptation and its application to early detection of fatigue", "venue": "ACM Transactions on Knowledge Discovery from Data (TKDD),", "year": 2012 }, { "authors": [ "Xinyang Chen", "Sinan Wang", "Mingsheng Long", "Jianmin Wang" ], "title": "Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Nicolas Courty", "Rémi Flamary", "Amaury Habrard", "Alain Rakotomamonjy" ], "title": "Joint distribution optimal transportation for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lixin Duan", "Dong Xu", "Shih-Fu Chang" ], "title": "Exploiting web images for event recognition in consumer videos: A multiple source domain adaptation approach", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Lixin Duan", "Dong Xu", "Ivor Wai-Hung Tsang" ], "title": "Domain adaptation from multiple sources: A domain-dependent regularization approach", "venue": "IEEE Transactions on neural networks and learning systems,", "year": 2012 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "arXiv preprint arXiv:1409.7495,", "year": 2014 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Boqing Gong", "Kristen Grauman", "Fei Sha" ], "title": "Reshaping visual datasets for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Rui Gong", "Wen Li", "Yuhua Chen", "Luc Van Gool" ], "title": "Dlow: Domain flow for adaptation and generalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua 
Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Arthur Gretton", "Alex Smola", "Jiayuan Huang", "Marcel Schmittfull", "Karsten Borgwardt", "Bernhard Schölkopf" ], "title": "Covariate shift by kernel mean matching", "venue": "Dataset shift in machine learning,", "year": 2009 }, { "authors": [ "Judy Hoffman", "Brian Kulis", "Trevor Darrell", "Kate Saenko" ], "title": "Discovering latent domains for multisource domain adaptation", "venue": "In European Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Judy Hoffman", "Eric Tzeng", "Taesung Park", "Jun-Yan Zhu", "Phillip Isola", "Kate Saenko", "Alexei A Efros", "Trevor Darrell" ], "title": "Cycada: Cycle-consistent adversarial domain adaptation", "venue": "arXiv preprint arXiv:1711.03213,", "year": 2017 }, { "authors": [ "Judy Hoffman", "Mehryar Mohri", "Ningshan Zhang" ], "title": "Algorithms and theory for multiple-source adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yitong Li", "David E Carlson" ], "title": "Extracting relationships by multi-domain matching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hong Liu", "Mingsheng Long", "Jianmin Wang", "Michael Jordan" ], "title": "Transferable adversarial training: A general approach to adapting deep classifiers", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Mingsheng Long", "Jianmin Wang", "Guiguang Ding", "Jiaguang Sun", "Philip S Yu" ], "title": "Transfer joint matching for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael I Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "arXiv preprint arXiv:1502.02791,", "year": 2015 }, { "authors": [ "Mingsheng Long", "Han Zhu", "Jianmin Wang", "Michael I Jordan" ], "title": "Deep transfer learning with joint adaptation networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yawei Luo", "Ping Liu", "Tao Guan", "Junqing Yu", "Yi Yang" ], "title": "Significance-aware information bottleneck for domain adaptive semantic segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Massimiliano Mancini", "Lorenzo Porzi", "Samuel Rota Bulò", "Barbara Caputo", "Elisa Ricci" ], "title": "Boosting domain adaptation by discovering latent domains", 
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation with multiple sources", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Zak Murez", "Soheil Kolouri", "David Kriegman", "Ravi Ramamoorthi", "Kyungnam Kim" ], "title": "Image to image translation for domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Paul K Newton", "Stephen A DeSalvo" ], "title": "The shannon entropy of sudoku matrices", "venue": "Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences,", "year": 1957 }, { "authors": [ "Xingchao Peng", "Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yuji Roh", "Kangwook Lee", "Steven Euijong Whang", "Changho Suh" ], "title": "Fr-train: A mutual informationbased approach to fair and robust training", "venue": "arXiv preprint arXiv:2002.10234,", "year": 2020 }, { "authors": [ "Kate Saenko", "Brian Kulis", "Mario Fritz", "Trevor Darrell" ], "title": "Adapting visual category models to new domains", "venue": "In European conference on computer vision,", "year": 2010 }, { "authors": [ "Kuniaki Saito", "Yoshitaka Ushiku", "Tatsuya Harada", "Kate Saenko" ], "title": "Adversarial dropout regularization", "venue": "arXiv preprint arXiv:1711.01575,", "year": 2017 }, { "authors": [ "Kuniaki Saito", "Kohei Watanabe", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Maximum classifier discrepancy for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Swami Sankaranarayanan", "Yogesh Balaji", "Carlos D Castillo", "Rama Chellappa" ], "title": "Generate to adapt: Aligning domains using generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Swami Sankaranarayanan", "Yogesh Balaji", "Arpit Jain", "Ser Nam Lim", "Rama Chellappa" ], "title": "Learning from synthetic data: Addressing domain shift for semantic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yuxuan Song", "Lantao Yu", "Zhangjie Cao", "Zhiming Zhou", "Jian Shen", "Shuo Shao", "Weinan Zhang", "Yong Yu" ], "title": "Improving unsupervised domain adaptation with variational information", "venue": null, "year": 1911 }, { "authors": [ "Baochen Sun", "Kate Saenko" ], "title": "Deep coral: Correlation alignment for deep domain adaptation", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Baochen Sun", "Jiashi Feng", "Kate Saenko" ], "title": "Return of frustratingly easy domain adaptation", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], 
"title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "Yi-Hsuan Tsai", "Wei-Chih Hung", "Samuel Schulter", "Kihyuk Sohn", "Ming-Hsuan Yang", "Manmohan Chandraker" ], "title": "Learning to adapt structured output space for semantic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Hemanth Venkateswara", "Jose Eusebio", "Shayok Chakraborty", "Sethuraman Panchanathan" ], "title": "Deep hashing network for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Haotian Wang", "Wenjing Yang", "Zhipeng Lin", "Yue Yu" ], "title": "Tmda: Task-specific multi-source domain adaptation via clustering embedded adversarial training", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2019 }, { "authors": [ "Jindong Wang", "Wenjie Feng", "Yiqiang Chen", "Han Yu", "Meiyu Huang", "Philip S Yu" ], "title": "Visual domain adaptation with manifold embedded distribution alignment", "venue": "In Proceedings of the 26th ACM international conference on Multimedia,", "year": 2018 }, { "authors": [ "Ruijia Xu", "Ziliang Chen", "Wangmeng Zuo", "Junjie Yan", "Liang Lin" ], "title": "Deep cocktail network: Multi-source unsupervised domain adaptation with category shift", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Han Zhao", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Joao P Costeira", "Geoffrey J Gordon" ], "title": "Adversarial multiple source domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Han Zhao", "Remi Tachet des Combes", "Kun Zhang", "Geoffrey J Gordon" ], "title": "On learning invariant representation for domain adaptation", "venue": "arXiv preprint arXiv:1901.09453,", "year": 2019 }, { "authors": [ "Sicheng Zhao", "Bo Li", "Xiangyu Yue", "Yang Gu", "Pengfei Xu", "Runbo Hu", "Hua Chai", "Kurt Keutzer" ], "title": "Multi-source domain adaptation for semantic segmentation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sicheng Zhao", "Guangzhi Wang", "Shanghang Zhang", "Yang Gu", "Yaxian Li", "Zhichao Song", "Pengfei Xu", "Runbo Hu", "Hua Chai", "Kurt Keutzer" ], "title": "Multi-source distilling domain adaptation", "venue": "arXiv preprint arXiv:1911.11554,", "year": 2019 }, { "authors": [ "Song" ], "title": "The existing DA work on semantic segmentation tasks (Luo et al", "venue": null, "year": 2019 }, { "authors": [ "Luo" ], "title": "2019)) is that (Luo et al. 
(2019)) employed the shared encoding PZ|x(z) instead of PZ|x,v(z), whereas some adversarial DA approaches use the unshared one (Tzeng", "venue": null, "year": 2017 }, { "authors": [ "MNIST (LeCun" ], "title": "Synthetic Digits (Ganin", "venue": "For USPS,", "year": 1998 }, { "authors": [ "Saenko" ], "title": "2010)) is a popular benchmark dataset including 31 categories of objects in an office environment. Note that it is a more difficult problem than Digits-Five, which includes 4652 images in total from the three domains: Amazon, DSLR, and Webcam. All the images are interpolated to 224× 224 using bicubic filters", "venue": null, "year": 2010 }, { "authors": [ "Office-Home (Venkateswara" ], "title": "2017)) is a challenging dataset that includes 65 categories of objects in office and home environments. It includes 15,500 images in total from the four domains: Artistic images (Art), Clip Art(Clipart), Product images (Product), and Real-World images (Realworld). All the images are interpolated to 224× 224 using bicubic filters", "venue": null, "year": 2017 }, { "authors": [ "Adaptation Network (DAN", "Long" ], "title": "Joint Adaptation Network (JAN", "venue": "Long et al", "year": 2015 }, { "authors": [ "Batch Spectral Penalization (BSP", "Chen" ], "title": "Adversarial Discriminative Domain Adaptation (ADDA", "venue": "Tzeng et al", "year": 2019 }, { "authors": [ "Liu" ], "title": "leading to a increase in optimal joint risk λ∗", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Although a large number of studies have demonstrated the ability of deep neural networks to solve challenging tasks, the tasks solved by networks are mostly confined to a similar type or a single domain. One remaining challenge is the problem known as domain shift (Gretton et al. (2009)), where a direct transfer of information gleaned from a single source domain to unseen target domains may lead to significant performance impairment. Domain adaptation (DA) approaches aim to mitigate this problem by learning to map data of both domains onto a common feature space. Whereas several theoretical results (Ben-David et al. (2007); Blitzer et al. (2008); Zhao et al. (2019a)) and algorithms for DA (Long et al. (2015; 2017); Ganin et al. (2016)) have focused on the case in which only a single-source domain dataset is given, we consider a more challenging and generalized problem of knowledge transfer, referred to as Multi-source unsupervised DA (MDA). Following a seminal theoretical result on MDA (Blitzer et al. (2008); Ben-David et al. (2010)), technical advances have been made, mainly on the adversarial methods. (Xu et al. (2018); Zhao et al. (2019c)).\nWhile most of adversarial MDA methods use multiple independent domain discriminators (Xu et al. (2018); Zhao et al. (2018); Li et al. (2018); Zhao et al. (2019c;b)), the potential pitfalls of this setting have not been fully explored. The existing works do not provide a theoretical guarantee that the unnecessary domain-specific information is fully filtered out, because the domain-discriminative information is inevitably distributed across multiple discriminators. For example, the multiple domain discriminators focus only on estimating the domain shift between source domains and the target, while the discrepancies between the source domains are neglected, making it hard to align all the given domains. This necessitates garnering the domain-discriminative information with a\nunified discriminator. Moreover, the multiple domain discriminator setting is not scalable in terms of computational resources especially when large number of source domains are given, e.g., medical reports from multiple patients. Finally, it may undermine the stability of training, as earlier works solve multiple independent adversarial minimax problems.\nTo overcome such limitations, we propose a novel MDA method, called Multi-source Informationregularized Adaptation Networks (MIAN), that constrains the mutual information between latent representations and domain labels. First, we show that such mutual information regularization is closely related to the explicit optimization of theH-divergence between the source and target domains. This affords the theoretical insight that the conventional adversarial DA can be translated into an information-theoretic-regularization problem. Second, based on our findings, we propose a new optimization problem for MDA: minimizing adversarial loss over multiple domains with a single domain discriminator. We show that the domain shift between each source domain can be indirectly penalized, which is known to be beneficial in MDA (Li et al. (2018); Peng et al. (2019)), with a single domain discriminator. 
Moreover, by analyzing existing studies in terms of information regularization, we found that the variance of the stochastic gradients increases when using multiple discriminators.\nDespite its structural simplicity, we found that MIAN works efficiently across a wide variety of MDA scenarios, including the DIGITS-Five (Peng et al. (2019)), Office-31 (Saenko et al. (2010)), and Office-Home datasets (Venkateswara et al. (2017)). Intriguingly, MIAN reliably and significantly outperformed several state-of-the-art methods that either employ a domain discriminator separately for each source domain (Xu et al. (2018)) or align the moments of deep feature distribution for every pairwise domain (Peng et al. (2019))." }, { "heading": "2 RELATED WORKS", "text": "Several DA methods have been used in attempt to learn domain-invariant representations. Along with the increasing use of deep neural networks, contemporary work focuses on matching deep latent representations from the source domain with those from the target domain. Several measures have been introduced to handle domain shift, such as maximum mean discrepancy (MMD) (Long et al. (2014; 2015)), correlation distance (Sun et al. (2016); Sun & Saenko (2016)), and Wasserstein distance (Courty et al. (2017)). Recently, adversarial DA methods (Ganin et al. (2016); Tzeng et al. (2017); Hoffman et al. (2017); Saito et al. (2018; 2017)) have become mainstream approaches owing to the development of generative adversarial networks (Goodfellow et al. (2014)). However, the abovementioned single-source DA approaches inevitably sacrifice performance for the sake of multi-source DA.\nSome MDA studies (Blitzer et al. (2008); Ben-David et al. (2010); Mansour et al. (2009); Hoffman et al. (2018)) have provided the theoretical background for algorithm-level solutions. (Blitzer et al. (2008); Ben-David et al. (2010)) explore the extended upper bound of true risk on unlabeled samples from the target domain with respect to a weighted combination of multiple source domains. Following these theoretical studies, MDA studies with shallow models (Duan et al. (2012b;a); Chattopadhyay et al. (2012)) as well as with deep neural networks (Mancini et al. (2018); Peng et al. (2019); Li et al. (2018)) have been proposed. Recently, some adversarial MDA methods have also been proposed. Xu et al. (2018) implemented a k-way domain discriminator and classifier to battle both domain and category shifts. Zhao et al. (2018) also used multiple discriminators to optimize the average case generalization bounds. Zhao et al. (2019c) chose relevant source training samples for the DA by minimizing the empirical Wasserstein distance between the source and target domains. Instead of using separate encoders, domain discriminators or classifiers for each source domain as in earlier works, our approach uses unified networks, thereby improving resource-efficiency and scalability.\nSeveral existing MDA works have proposed methods to estimate the source domain weights following (Blitzer et al. (2008); Ben-David et al. (2010)). Mansour et al. (2009) assumed that the target hypothesis can be approximated by a convex combination of the source hypotheses. (Peng et al. (2019); Zhao et al. (2018)) suggested ad-hoc schemes for domain weights based on the empirical risk of each source domain. Li et al. (2018) computed a softmax-transformed weight vector using the empirical Wasserstein-like measure instead of the empirical risks. 
Compared to the proposed methods without robust theoretical justifications, our analysis does not require any assumption or estimation for the domain coefficients. In our framework, the representations are distilled to be independent of the domain, thereby rendering the performance relatively insensitive to explicit weighting strategies." }, { "heading": "3 THEORETICAL INSIGHTS", "text": "We first introduce the notations for the MDA problem in classification. A set of source domains\nand the target domain are denoted by {DSi} N i=1 and DT , respectively. Let XSi = { xjSi }m j=1 and\nYSi = { yjSi }m j=1 be a set of m i.i.d. samples from DSi . Let XT = { xT j }m j=1 ∼ (DXT )m be the set of m i.i.d. samples generated from the marginal distribution DXT . The domain label and its probability distribution are denoted by V and PV (v), where v ∈ V and V is the set of domain labels. In line with prior works (Hoffman et al. (2012); Gong et al. (2013); Mancini et al. (2018); Gong et al. (2019)), domain label can be generally treated as a stochastic latent random variable in our framework. However, for simplicity, we take the empirical version of the true distributions with given samples assuming that the domain labels for all samples are known. The latent representation of the sample is given by Z, and the encoder is defined as F : X → Z , with X and Z representing data space and latent space, respectively. Accordingly, ZSi and ZT refer to the outputs of the encoder F (XSi) and F (XT ), respectively. For notational simplicity, we will omit the index i from DSi , XSi and ZSi when N = 1. A classifier is defined as C : Z → Y where Y is the class label space." }, { "heading": "3.1 PROBLEM FORMULATION", "text": "For comparison with our formulation, we recast single-source DA as a constrained optimization problem. The true risk T (h) on unlabeled samples from the target domain is bounded above the sum of three terms (Ben-David et al. (2010)): (1) true risk S(h) of hypothesis h on the source domain; (2) H-divergence dH(DS , DT ) between a source and a target domain distribution; and (3) the optimal joint risk λ∗. Theorem 1 (Ben-David et al. (2010)). Let hypothesis classH be a set of binary classifiers h : X → {0, 1}. Then for the given domain distributions DS and DT ,\n∀h ∈ H, T (h) ≤ S(h) + dH(DS , DT ) + λ∗, (1)\nwhere dH(DS , DT ) = 2sup h∈H ∣∣∣ E x∼DXS [ I(h(x) = 1) ] − E x∼DXT [ I(h(x) = 1) ]∣∣∣ and I(a) is an indicator function whose value is 1 if a is true, and 0 otherwise.\nThe empiricalH-divergence d̂H(XS , XT ) can be computed as follows (Ben-David et al. (2010)): Lemma 1.\nd̂H(XS , XT ) = 2 ( 1−min\nh∈H [ 1 m ∑ x∈XS I[h(x) = 1] + 1 m ∑ x∈XT I[h(x) = 0] ])\n(2)\nFollowing Lemma 1, a domain classifier h : Z → V can be used to compute the empirical Hdivergence. Suppose the optimal joint risk λ∗ is sufficiently small as assumed in most adversarial DA studies (Saito et al. (2017); Chen et al. (2019)). Thus, one can obtain the ideal encoder and classifier minimizing the upper bound of T (h) by solving the following min-max problem:\nF ∗, C∗ = argmin F,C L(F,C) + βd̂H(ZS , ZT )\n= argmin F,C max h∈H\nL(F,C) + β 1\nm ( ∑ i:zi∈ZS I[h(zi) = 1] + ∑ j:zj∈ZT I[h(zj) = 0] ) , (3)\nwhere L(F,C) is the loss function on samples from the source domain, β is a Lagrangian multiplier, V = {0, 1} such that each source instance and target instance are labeled as 1 and 0, respectively, and h is the binary domain classifier. 
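To ground Lemma 1, the following sketch (ours; scikit-learn is used only as a convenient stand-in for the minimizing hypothesis, which in practice is a trained domain classifier) computes the empirical H-divergence estimate on synthetic features: it is near 0 when the two samples come from the same distribution and near its maximum of 2 when they are easily separable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def empirical_h_divergence(zs, zt):
    """Proxy for Lemma 1: a trained domain classifier stands in for the minimizing h."""
    X = np.vstack([zs, zt])
    y = np.concatenate([np.zeros(len(zs)), np.ones(len(zt))])   # source=0, target=1
    h = LogisticRegression().fit(X, y).predict
    term = np.mean(h(zs) == 1) + np.mean(h(zt) == 0)            # the bracketed term in (2)
    return 2 * (1 - term)

rng = np.random.default_rng(0)
zs = rng.normal(0.0, 1.0, size=(500, 2))
print(empirical_h_divergence(zs, rng.normal(0.0, 1.0, size=(500, 2))))  # near 0: same domain
print(empirical_h_divergence(zs, rng.normal(4.0, 1.0, size=(500, 2))))  # near 2: separable
```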
Note that the latter min–max problem is obtained by converting −min into max and removing the constant term from Lemma 1." }, { "heading": "3.2 INFORMATION-REGULARIZED MIN–MAX PROBLEM FOR MDA", "text": "Intuitively, it is not highly desirable to adapt the learned representation in the given domain to the other domains, particularly when the representation itself is not sufficiently domain-independent.\nThis motivates us to explore ways to learn representations independent of domains. Inspired by a contemporary fair model training study (Roh et al. (2020)), the mutual information between the latent representation and the domain label I(Z;V ) can be expressed as follows: Theorem 2. Let PZ(z) be the distribution ofZ where z ∈ Z . Let h be a domain classifier h : Z → V , where Z is the feature space and V is the set of domain labels. Let hv(z) be a conditional probability of V where v ∈ V given Z = z, defined by h. Then the following holds:\nI(Z;V ) = max hv(z): ∑ v∈V hv(z)=1,∀z ∑ v∈V PV (v)Ez∼PZ|v [ log hv(z) ] +H(V ) (4)\nThe detailed proof is provided in the Roh et al. (2020) and Supplementary Material. As done in Roh et al. (2020), we can derive the empirical version of Theorem 2 as follows:\nÎ(Z;V ) = max hv(z): ∑ v∈V hv(z)=1,∀z 1 M ∑ v∈V ∑ i:vi=v log hvi(zi) +H(V ), (5)\nwhereM is the number of total representation samples, i is the sample index, and vi is the corresponding domain label of the ith sample. Using this equation, we combine our information-constrained objective function and the results of Lemma 1. For binary classification V = {0, 1} with ZS and ZT of equal size M/2, we propose the following information-regularized minimax problem:\nF ∗, C∗ = argmin F,C L(F,C) + βÎ(Z;V )\n= argmin F,C max h∈H\nL(F,C) + β 1\nM [ ∑ i:zi∈ZS log h(zi) + ∑ j:zj∈ZT log(1− h(zj)) ] ,\n(6)\nwhere β is a Lagrangian multiplier, h(zi) , hvi=1(zi) and 1 − h(zi) , hvi=0(zi), with h(zi) representing the probability that zi belongs to the source domain. This setting automatically dismisses the condition ∑ v∈V hv(z) = 1,∀z. Note that we have accommodated a simple situation in which the entropy H(V ) remains constant." }, { "heading": "3.3 ADVANTAGES OVER OTHER MDA METHODS", "text": "The relationship between (3) and (6) provides us a theoretical insights that the problem of minimizing mutual information between the latent representation and the domain label is closely related to minimizing the H-divergence using the adversarial learning scheme. This relationship clearly underlines the significance of information regularization for MDA. Compared to the existing MDA approaches (Xu et al. (2018); Zhao et al. (2018)), which inevitably distribute domain-discriminative knowledge over N different domain classifiers, the above objective function (6) enables us to seamlessly integrate such information with the single-domain classifier h.\nUsing a single domain discriminator also helps reduce the variance of gradient. Large variances in the stochastic gradients slow down the convergence, which leads to poor performance (Johnson & Zhang (2013)). Herein, we analyze the variances of the stochastic gradients of existing optimization constraints. 
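Before turning to the variance analysis, a minimal PyTorch-style sketch (ours; the function and variable names, the gradient handling, and the plain binary cross-entropy are our choices rather than the paper's exact implementation) of how (5)-(6) are typically realized with a single, unified domain discriminator h: h is trained to classify the domain of Z, and the encoder is trained on the task loss minus the β-scaled discriminator loss.

```python
import torch
import torch.nn.functional as F

def domain_bce(h, z_s, z_t):
    """Negative of the empirical I(Z;V) term in (5)-(6), binary case, constant H(V) dropped."""
    logits = h(torch.cat([z_s, z_t])).squeeze(1)   # h outputs one logit per sample
    v = torch.cat([torch.ones(len(z_s)), torch.zeros(len(z_t))])
    return F.binary_cross_entropy_with_logits(logits, v)

# One adversarial round (sketch): h minimizes the domain loss, while the encoder F_enc
# minimizes L(F, C) - beta * domain_bce, i.e. Eq. (6) with the sign of the max folded in.
def adaptation_step(F_enc, C, h, opt_fc, opt_h, x_s, y_s, x_t, beta):
    z_s, z_t = F_enc(x_s), F_enc(x_t)

    opt_h.zero_grad()
    domain_bce(h, z_s.detach(), z_t.detach()).backward()      # train the discriminator
    opt_h.step()

    opt_fc.zero_grad()
    task_loss = F.cross_entropy(C(z_s), y_s)                  # L(F, C) on labeled sources
    (task_loss - beta * domain_bce(h, z_s, z_t)).backward()   # adversarial encoder update
    opt_fc.step()
```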
By excluding the weighted source combination strategy, we can simplify the optimization constraint of existing adversarial MDA methods as sum of the information constraints:\nN∑ k=1 I(Zk;Uk) = N∑ k=1 max hku(z): ∑ u∈U h k u(z)=1,∀z ∑ u∈U PUk(u)Ezk∼PZk|u [ log hku(zk) ] + N∑ k=1 H(Uk),\n(7) where Uk is the kth domain label with U = {0, 1}, PZk|u=0(·) = PZ|v=N+1(·) corresponding to the target domain, PZk|u=1(·) = PZ|v=k(·) corresponding to the kth source domain, and hku(zk) being the conditional probability of u ∈ U given zk defined by the kth discriminator indicating that the sample is generated from the kth source domain. Again, we treat the entropy H(Uk) as a constant. Note that the interaction information cannot be measured with (7).\nGiven M = m(N +1) samples with m representing the number of samples per domain, an empirical version of (7) is:\nN∑ k=1 Î(Zk;Uk) = 1 M N∑ k=1 max hku(z): ∑ u∈U h k u(z)=1,∀z ∑ u∈U ∑ i:ui=u log hku(z i k) + N∑ k=1 H(Uk). (8)\nLet Ik be a shorthand for the kth term inside the first summation. Without loss of generality we make simplifying assumptions that all V ar[Ik] are the same for all k and so are Cov[Ik, Ij ] for all pairs. Then the variance of (8) is given by:\nV ar [ N∑ k=1 Î(Zk;Uk) ] = 1 M2 ( N∑ k=1 V ar[Ik] + 2 N∑ k=1 N∑ j=k Cov[Ik, Ij ] )\n= 1\nm2 ( N (N + 1)2 V ar[Ik] + N(N − 1) (N + 1)2 Cov[Ik, Ij ] ) .\n(9)\nAs earlier works solve N adversarial minimax problems, the covariance term is additionally included and its contribution to the variance does not decrease with increasing N . In other words, the covariance term may dominate the variance of the gradients as the number of domain increases. In contrast, the variance of our constraint (5) is inversely proportional to (N + 1)2. Let Im be a shorthand for the maximization term except 1M in (5). Then the variance of (5) is given by:\nV ar [ Î(Z;V ) ] =\n1\nm2(N + 1)2\n( V ar[Im] ) . (10)\nIt implies that our framework can significantly improve the stability of stochastic gradient optimization compared to existing approaches, especially when the model is deemed to learn from many domains." }, { "heading": "3.4 SITUATING DOMAIN ADAPTATION IN CONTEXT OF INFORMATION BOTTLENECK THEORY", "text": "In this Section, we bridge the gap between the existing adversarial DA method and the information bottleneck (IB) theory (Tishby et al. (2000); Tishby & Zaslavsky (2015); Alemi et al. (2016)). Tishby et al. (2000) examined the problem of learning an encoding Z such that it is maximally informative about the class Y while being minimally informative about the sample X:\nmin Penc(z|x)\nβI(Z;X)− I(Z;Y ), (11)\nwhere β is a Lagrangian multiplier. Indeed, the role of the bottleneck term I(Z;X) matches our mutual information I(Z;V ) between the latent representation and the domain label. We foster close collaboration between two information bottleneck terms by incorporating those into I(Z;X,V ). Theorem 3. Let PZ|x,v(z) be a conditional probabilistic distribution of Z where z ∈ Z , defined by the encoder F , given a sample x ∈ X and the domain label v ∈ V . Let RZ(z) denotes a prior marginal distribution of Z. Then the following inequality holds:\nI(Z;X,V ) ≤ max hv(z): ∑ v∈V hv(z)=1,∀z ∑ v∈V PV (v)EPz∼Z|v [ log hv(z) ] +H(V )\n+ Ex,v∼PX,V [ DKL[PZ|x,v ‖ RZ ] ] (12) The proof of Theorem 3 uses the chain rule: I(Z;X,V ) = I(Z;V ) + I(Z;X | V ). The detailed proof is provided in the Supplementary Material. 
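For completeness, we note that the bottleneck term Ex,v[DKL(PZ|x,v ‖ RZ)] appearing in (12) admits a simple closed form under the common variational-information-bottleneck assumption of a diagonal-Gaussian encoder and a standard normal prior RZ; this assumption is made only for the sketch below and is not required by Theorem 3.

import torch

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
    # mu, logvar: (batch, latent_dim) outputs of a stochastic encoder; the
    # result can be added to the training loss as the bottleneck term of (12).
    kl_per_dim = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar)
    return kl_per_dim.sum(dim=1).mean()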
Whereas the role of I(Z;X | V ) is to purify the latent representation generated from the given domain, I(Z;V ) serves as a proxy for regularization that aligns the purified representations across different domains. Thus, the existing DA approaches (Luo et al. (2019); Song et al. (2019)) using variational information bottleneck (Alemi et al. (2016)) can be reviewed as special cases for Theorem 3 with a single-source domain." }, { "heading": "4 MULTI-SOURCE INFORMATION-REGULARIZED ADAPTATION NETWORKS", "text": "In this Section, we provide the details of our proposed architecture, referred to as a multi-source information-regularized adaptation network (MIAN). MIAN addresses the information-constrained min–max problem for MDA (Section 3.2) using the three subcomponents depicted in Figure 1: information regularization, source classification, and Decaying Batch Spectral Penalization (DBSP).\nInformation regularization. To estimate the empirical mutual information Î(Z;V ) in (5), the domain classifier h should be trained to minimize softmax cross enropy. Let V = {1, 2, ..., N + 1}\nand denote h(z) as N + 1 dimensional vector of the conditional probability for each domain given the sample z. Let 1 be a N + 1 dimensional vector of all ones, and 1[k=v] be a N + 1 dimensional vector whose vth value is 1 and 0 otherwise. Given M = m(N + 1) samples, the objective is:\nmin h − 1 M ∑ v∈V ∑ i:vi=v [ 1 T [k=vi] log h(zi) ] . (13)\nIn this study, we slightly modify the objective (13). Specifically, we explicitly minimized the conditional probability of the remaining domains excepting the vth domain. Let 1[k 6=v] be the flipped version of 1[k=v]. Then the objective function for the domain discriminator is:\nmin h − 1 M ∑ v∈V ∑ i:vi=v [ 1 T [k=vi] log h(zi) + 1 T [k 6=vi] log(1− h(zi)) ] , (14)\nwhere the objective function for encoder training is to maximize (14). Our objective function is also closely related to that of GAN (Goodfellow et al. (2014)), and we experimentally found that using the variant objective function of GAN (Mao et al. (2017)) works slightly better.\nThe above objective is closely related to optimizing every pairwise domain discrepancy between the given domain and the mixture of the others. Let each Dv and Dvc represent the vth domain and the mixture of the remaining N domains with the same mixture weight 1N , relatively. Then we can defineH-divergence as dH(Dv, Dvc), and an average of suchH-divergence for every v as dH(V). Assume that the samples of size m, Zv and Zvc , are generated from each Dv and Dvc , where Zvc = ⋃ v′ 6=v Zv′ with |Zv′ | = m/N for all v′ ∈ V . Thus the domain label vj 6= v for every jth sample in Zvc . Then the average of empiricalH-divergence d̂H(V) is defined as follows:\nd̂H(V) = 1\nN + 1 ∑ v∈V d̂H(Zv, Zvc)\n= 1\nN + 1 ∑ v∈V 2 ( 1−min h∈H [ 1 m ∑ i:vi=v I[hv(zi) = 1] + 1 m ∑ j:vj 6=v I[hv(zj) = 0] ]) , (15)\nwhere hv(z) represents the vth value of h(z). Note that h(z) corresponds to N + 1 dimensional one-hot classification vector in (15), unlike in (14). Then, let I[h(z)] := [ I(hv(z) = 1) ] v∈V be the N + 1 dimensional one-hot indicator vector. 
Given the unified domain discriminator h in the inner minimization for every v in (15), we train h to approximate the lower bound of d̂H(V) as follows:\nh∗ = argmax h∈H\n1\nM ∑ v∈V ( ∑ i:vi=v I[hv(zi) = 1] + ∑ j:vj 6=v I[hv(zj) = 0] )\n= argmin h∈H − 1 M ∑ v∈V ∑ i:vi=v 1 T [k=vi] I[h(zi)] + 1 T [k 6=vi] ( 1− I[h(zi)] ) ,\n(16)\nwhere the latter equality is obtained by rearranging the summation terms in the first equality.\nBased on the close relationship between (14) and (16), we can make the link between information regularization and H-divergence optimization given multi-source domain; minimizing d̂H(V) is closely related to implicit regularization of the mutual information between latent representations and domain labels. Because the output vector h(z) in (15) often comes from the argmax operation, (15) is not differentiable w.r.t. z. However, our framework has a differentiable objective as in (14).\nThere are two benefits of minimizing dH(V). First, it includesH-divergence between the target and a mixture of sources, which directly affects the upper bound of the empirical risk on target samples (Theorem 5 in Ben-David et al. (2010)). Second, dH(V) lower-bounds the average of every pairwise H-divergence between domains. The detailed proof is provided in the appendix (Lemma 2). Note that unlike our single domain classifier setting, existing methods (Li et al. (2018)) require a number of about O(N2) domain classifiers to approximate all pairwise combinations of domain discrepancy. Source classification. Along with learning domain-independent latent representations illustrated in the above, we train the classifier with the labeled source domain datasets that can be directly applied to the target domain representations in practice. To minimize the empirical risk on source domain, we use a generic softmax cross-entropy loss function with labeled source domain samples as L(F,C).\nDecaying batch spectral penalization. Applying above information-theoretic insights, we further describe a potential side effect of existing adversarial DA methods. Information regularization may lead to overriding implicit entropy minimization, particularly in the early stages of the training, impairing the richness of latent feature representations. To prevent such a pathological phenomenon, we introduce a new technique called Decaying Batch Spectral Penalization (DBSP), which is intended to control the SVD entropy of the feature space. Our version improves training efficiency compared to original Batch Spectral Penalization (Chen et al. (2019)). We refer to this version of our model as MIAN-γ. As vanila MIAN is sufficient to outperform other state-of-the-art methods (Section 5), MIAN-γ is further discussed in the Supplementary Material." }, { "heading": "5 EXPERIMENTS", "text": "To assess the performance of MIAN, we ran a large-scale simulation using the following benchmark datasets: Digits-Five, Office-31 and Office-Home. For a fair comparison, we reproduced all the other baseline results using the same backbone architecture and optimizer settings as the proposed method. For the source-only and single-source DA standards, we introduce two MDA approaches (Xu et al. (2018); Peng et al. (2019)): (1) source-combined, i.e., all source-domains are incorporated into a\nsingle source domain; (2) single-best, i.e., the best adaptation performance on the target domain is reported. Owing to limited space, details about simulation settings, used baseline models and datasets are presented in the Supplementary Material." 
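As a concrete reference for the objective described in Section 4, the following is a minimal PyTorch-style sketch of the unified (N+1)-way domain discriminator trained with (14) and of the corresponding encoder/classifier update. It is an illustrative simplification of our training loop: it assumes each batch stacks samples from all N+1 domains with the target assigned the label target_label and the class labels y_src ordered to match the source samples, and it omits the least-squares variant of Mao et al. (2017) and the beta annealing schedule.

import torch
import torch.nn.functional as nnf

def discriminator_objective(logits, v):
    # Objective (14): for each sample, raise the probability of its own domain
    # and lower the probabilities of every other domain.
    # logits: (M, N+1) scores from the unified discriminator h; v: (M,) labels.
    p = torch.softmax(logits, dim=1).clamp(1e-6, 1 - 1e-6)
    one_hot = nnf.one_hot(v, num_classes=p.size(1)).float()
    return -(one_hot * p.log() + (1 - one_hot) * (1 - p).log()).sum(dim=1).mean()

def mian_step(enc, cls, disc, opt_fc, opt_d, x, v, y_src, beta_t, target_label):
    z = enc(x)
    # Train the single discriminator on representations from all N+1 domains.
    d_loss = discriminator_objective(disc(z.detach()), v)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Train encoder and classifier: source classification loss minus beta_t
    # times (14), so that domain information is removed from the representation.
    src_mask = v != target_label
    f_loss = nnf.cross_entropy(cls(z[src_mask]), y_src) \
             - beta_t * discriminator_objective(disc(z), v)
    opt_fc.zero_grad(); f_loss.backward(); opt_fc.step()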
}, { "heading": "5.1 SIMULATION RESULTS", "text": "The classification accuracy for Digits-Five, Office-31, and Office-Home are summarized in Tables 1, 2, and 3, respectively. We found that MIAN outperforms most of other state-of-the-art single-source and multi-source DA methods by a large margin. Note that our method demonstrated a significant improvement in challenging domains, such as MNIST-M, Amazon or Clipart." }, { "heading": "5.2 QUALITATIVE AND QUANTITATIVE ANALYSES", "text": "Design of domain discriminator. To quantify the extent to which performance improvement is achieved by unifying the domain discriminators, we compared the performances of the four different versions of MIAN (Figure 2a, 2b). No S-S align is the same as MIAN with the exception that only the target and each source domains are aligned. No LS uses the objective function as in (14), and unlike (Mao et al. (2017)). Multi D employs as many discriminators as the number of source domains which is analogous to the existing approaches. For a fair comparison, all the other experimental settings are fixed. The results illustrate that all the versions with the unified discriminator reliably outperform Multi D in terms of both accuracy and reliability. This suggests that unification of the domain discriminators can substantially improves the task performance.\nVariance of stochastic gradients. With respect to the above analysis, we compared the variance of the stochastic gradients computed with different available domain discriminators. We trained MIAN and Multi D using mini-batches of samples. The number of samples in a batch was fixed as 128 per\ndomain. After the early stages of training, we computed the gradients for the weights and biases of both the top and bottom layers of the encoder on the full training set. Figures 2c, 2d show that MIAN with the unified discriminator yields exponentially lower variance of the gradients compared to Multi D. Thus it is more feasible to use the unified discriminator when a large number of domains are given.\nProxy A-distance. To analyze the performance improvement in depth, we measured Proxy ADistance (PAD) as an empirical approximation of domain discrepancy (Ganin et al. (2016)). Given the generalization error on discriminating between the target and source samples, PAD is defined as d̂A = 2(1−2 ). Figure 3a shows that MIAN yields lower PAD between the source and target domain on average, potentially associated with optimizing d̂H(V). To test this conjecture, we conducted an ablation study on the objective of domain discriminator (Figure 3b, 3c). All the other experimental settings were fixed except for using the objective of the unified domain discriminator as (13), or (14). While both cases help the adaptation, using (14) yields lower d̂H(V) and higher test accuracy.\nEstimation of mutual information. We measure the empirical mutual information Î(Z;V ) with assumingH(V ) as a constant. For the measurement, we trained the domain discriminator to minimize the softmax cross entropy (13) with sufficient iterations. Figure 3d shows that MIAN yields the lowest Î(Z;V ), guaranteeing that the obtained representation achieves low-level of domain dependence." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have presented a unified information-regularization framework for MDA. The proposed framework allows us to examine the existing adversarial DA methods and also motivates us to implement a novel neural architecture for MDA. 
We provided both theoretical arguments and empirical evidence to fully justify three potential pitfalls of using multiple discriminators: dispersed domain-discriminative knowledge, lack of scalability and high variance in the objective. Our framework also establishes a bridge between adversarial DA and Information Bottleneck theory. The proposed model does not require complicated settings such as image generation, pretraining, multiple discriminators, multiple encoders or classifiers, which are often adopted in the existing MDA methods (Zhao et al. (2019b;c); Wang et al. (2019))." }, { "heading": "A PROOFS", "text": "In this Section, we present the detailed proofs for Theorems 2 and 3, explained in the main paper. We also present Lemma 2, as mentioned in Section 4. Following (Roh et al. (2020)), we provide a proof of Theorem 2 below for the sake of completeness.\nA.1 PROOF OF THEOREM 2\nTheorem 2. Let PZ(z) be the distribution ofZ where z ∈ Z . Let h be a domain classifier h : Z → V , where Z is the feature space and V is the set of domain labels. Let hv(Z) be a conditional probability of V where v ∈ V given Z = z, defined by h. Then the following holds:\nI(Z;V ) = max hv(z): ∑ v∈V hv(z)=1,∀z ∑ v∈V PV (v)Ez∼PZ|v [ log hv(z) ] +H(V ) (17)\nProof. By definition, I(Z;V ) = DKL ( P (Z, V ) ‖ P (Z)P (V ) ) = ∑ v∈V PV (v)Ez∼PZ|v [ log PZ,V (z,v) PZ(z) ] +H(V )\n(18)\nLet us constrain the term inside the log by hv(z) = PZ,V (z,v) PZ(z) where hv(z) represents the conditional probability of V = v for any v ∈ V given Z = z. Then we have: ∑ v∈V hv(z) = 1 for all possible values of z according to the law of total probability. Let h denote the collection of hv(z) for all possible values of v and z, and λ be the collection of λz for all values of z. Then, we can construct the Lagrangian function by incorporating the constraint ∑ v∈V hv(z) = 1 as follows:\nL(h,λ) = ∑ v∈V PV (v)Ez∼PZ|v [ log ( hv(z) )] +H(V ) + ∑ z∈Z λz ( 1− ∑ v∈V hv(z) )\n(19)\nWe can use the following KKT conditions:\n∂L(h,λ)\n∂hv(z) = PV (v)\nPZ|v(z)\nh∗v(z) − λ∗z = 0, ∀(z,v) ∈ Z × V (20)\n1− ∑ v∈V h∗v(z) = 0, ∀z ∈ Z (21)\nSolving the two equations, we have 1− ∑ v∈V PV (v)PZ|v(z) λ∗z = 0 such that λ∗z = PZ(z) for all z. Then for all the possible values of z,\nh∗v(z) = PZ,V (z,v)\nPZ(z)\n= PV |z(v),\n(22)\nwhere the given h∗v(z) is same as the term inside log in (18). Thus, the optimal solution of concave Lagrangian function (19) obtained by h∗v(z) is equal to the mutual information in (18). The substitution of h∗v(z) into (18) completes the proof.\nOur framework can further be applied to segmentation problems because it provides a new perspective on pixel space (Sankaranarayanan et al. (2018a;b); Murez et al. (2018)) and segmentation space (Tsai et al. (2018)) adaptation. The generator in pixel space and segmentation space adaptation learns to transform images or segmentation results from one domain to another. In the context of information regularization, we can view these approaches as limiting information I(X̂;V ) between the generated output X̂ and the domain label V , which is accomplished by involving the encoder for pixel-level generation. This alleviates the domain shift in a raw pixel level. Note that one can choose between limiting the feature-level or pixel-level mutual information. These different regularization terms may be complementary to each other depending on the given task.\nA.2 PROOF OF THEOREM 3\nTheorem 3. 
Let PZ|x,v(z) be a conditional probabilistic distribution of Z where z ∈ Z , defined by the encoder F , given a sample x ∈ X and the domain label v ∈ V . Let RZ(z) denotes a prior marginal distribution of Z. Then the following inequality holds:\nI(Z;X,V ) ≤ max hv(z): ∑ v∈V hv(z)=1,∀z ∑ v∈V PV (v)EPz∼Z|v [ log hv(z) ] +H(V )\n+ Ex,v∼PX,V [ DKL[PZ|x,v ‖ RZ ] ] (23)\nProof. Based on the chain rule for mutual information,\nI(Z;X,V ) = I(Z;V ) + I(Z;X | V )\n= max hv(z): ∑ v∈V hv(z)=1,∀z ∑ v∈V PV (v)Ez∼PZ|v [ log hv(z) ] +H(V ) + I(Z;X | V ),\n(24)\nwhere the latter equality is given by Theorem 2. Considering I(Z;X | V ),\nI(Z;X | V ) = Ev∼PV [ Ez,x∼PZ,X|v [ log PZ,X|v(z,x)\nPZ|v(z)PX|v(x) ]] = Ex,v∼PX,V [ Ez∼PZ|x,v [ log PZ|x,v(z)\nPZ|v(z) ]] = Ex,v∼PX,V [ Ez∼PZ|x,v [ logPZ|x,v(z) ]] − Ev∼PV [ Ez∼PZ|v [ logPZ|v(z)\n]] ≤ Ex,v∼PX,V [ Ez∼PZ|x,v [ logPZ|x,v(z) ]] − Ev∼PV [ Ez∼PZ|v [ logRZ(z)\n]] = Ex,v∼PX,V [ Ez∼PZ|x,v [ log PZ|x,v(z)\nRZ(z) ]] = Ex,v∼PX,V [ DKL [ PZ|x,v ‖ RZ ]]\n(25)\nThe second equality is obtained by using PZ,X|v(z,x) = PX|v(x)PZ|x,v(z). The inequality is obtained by using DKL[PZ|v ‖ RZ ] = Ez∼PZ|v [ logPZ|v(z)− logRZ(z) ] ≥ 0, where RZ(z) is a variational approximation of the prior marginal distribution of Z. The last equality is obtained from the definition of KL-divergence. The substitution of (25) into (24) completes the proof.\nThe existing DA work on semantic segmentation tasks (Luo et al. (2019); Song et al. (2019)) can be explained as the process of fostering close collaboration between the aforementioned information bottleneck terms. The only difference between Theorem 3 for V = {0, 1} and the objective function in (Luo et al. (2019)) is that (Luo et al. (2019)) employed the shared encoding PZ|x(z) instead of PZ|x,v(z), whereas some adversarial DA approaches use the unshared one (Tzeng et al. (2017)).\nA.3 PROOF OF LEMMA 2 Lemma 2. Let dH(V) = 1N+1 ∑ v∈V dH(Dv, Dvc). LetH be a hypothesis class. Then,\ndH(V) ≤ 1\nN(N + 1) ∑ v,u∈V dH(Dv, Du) (26)\nProof. Let α = 1N represents the uniform domain weight for the mixture of domain Dvc . Then,\ndH(V) = 1\nN + 1 ∑ v∈V dH(Dv, Dvc)\n= 1\nN + 1 ∑ v∈V 2 sup h∈H ∣∣∣Ex∼PDXv [I(h(x = 1))]− Ex∼PDXvc [I(h(x = 1))]∣∣∣ = 1\nN + 1 ∑ v∈V 2 sup h∈H ∣∣∣∣ ∑ u∈V:u6=v α ( Ex∼PDXv [ I ( h(x = 1) )] − Ex∼PDXu [ I ( h(x = 1) )])∣∣∣∣ ≤ 1 N + 1 ∑ v∈V ∑ u∈V:u6=v α · 2 sup h∈H ∣∣∣∣Ex∼PDXv [I(h(x = 1))]− Ex∼PDXu [I(h(x = 1))] ∣∣∣∣\n= 1\nN(N + 1) ∑ v,u∈V dH(Dv, Du),\n(27)\nwhere the inequality follows from the triangluar inequality and jensen’s inequality." }, { "heading": "B EXPERIMENTAL SETUP", "text": "In this Section, we describe the datasets, network architecture and hyperparameter configuration.\nB.1 DATASETS\nWe validate the Multi-source Information-regularized Adaptation Networks (MIAN) with the following benchmark datasets: Digits-Five, Office-31 and Office-Home. Every experiment is repeated four times and the average accuracy in target domain is reported.\nDigits-Five (Peng et al. (2019)) dataset is a unified dataset including five different digit datasets: MNIST (LeCun et al. (1998)), MNIST-M (Ganin & Lempitsky (2014)), Synthetic Digits (Ganin & Lempitsky (2014)), SVHN, and USPS. Following the standard protocols of unsupervised MDA (Xu et al. (2018); Peng et al. (2019)), we used 25000 training images and 9000 test images sampled from a training and a testing subset for each of MNIST, MNIST-M, SVHN, and Synthetic Digits. For USPS, all the data is used owing to the small sample size. All the images are bilinearly interpolated to 32× 32. 
Office-31 (Saenko et al. (2010)) is a popular benchmark dataset including 31 categories of objects in an office environment. Note that it is a more difficult problem than Digits-Five, which includes 4652 images in total from the three domains: Amazon, DSLR, and Webcam. All the images are interpolated to 224× 224 using bicubic filters. Office-Home (Venkateswara et al. (2017)) is a challenging dataset that includes 65 categories of objects in office and home environments. It includes 15,500 images in total from the four domains: Artistic images (Art), Clip Art(Clipart), Product images (Product), and Real-World images (Realworld). All the images are interpolated to 224× 224 using bicubic filters.\nB.2 ARCHITECTURES\nFor the Digits-Five dataset, we use the same network architecture and optimizer setting as in (Peng et al. (2019)). For all the other experiments, the results are based on ResNet-50, which is pre-trained on ImageNet. The domain discriminator is implemented as a three-layer neural network. Detailed architecture is shown in Figure 5.\nWe compare our method with the following state-of-the-art domain adaptation methods: Deep Adaptation Network (DAN, Long et al. (2015)), Joint Adaptation Network (JAN, Long et al. (2017)), Manifold Embedded Distribution Alignment (MEDA, Wang et al. (2018)), Correlation Alignment (CORAL, Sun et al. (2016)), Domain Adversarial Neural Network (DANN, Ganin et al. (2016)),\n(a) Encoder, domain discriminator, and classifier used in Digits-Five experiments (b) Encoder, domain discriminator, and classifier used in Office-31 and Office-Home experiments\nFigure 5: Network architectures. BN denotes Batch Normalization (Ioffe & Szegedy (2015)) and SVD denotes differentiable SVD in PyTorch for MIAN-γ (Section E)\nBatch Spectral Penalization (BSP, Chen et al. (2019)), Adversarial Discriminative Domain Adaptation (ADDA, Tzeng et al. (2017)), Maximum Classifier Discrepancy (MCD, Saito et al. (2018)), Deep Cocktail Network (DCTN, Xu et al. (2018)), and Moment Matching for Multi-Source Domain Adaptation (M3SDA, Peng et al. (2019)).\nHyperparameters Details of the experimental setup are summarized in Table 4. Other state-of-theart adaptation models are trained based on the same setup except for these cases: DCTN show poor performance with the learning rate shown in Table 4 for both Office-31 and Office-Home datasets. Following the suggestion of the original authors, 1e−5 is used as a learning rate with the Adam optimizer (Kingma & Ba (2014)); MCD show poor performance for the Office-Home dataset with the learning rate shown in Table 4. 1e−4 is selected as a learning rate. For both the proposed and other baseline models, the learning rate of the classifier or domain discriminator trained from the scratch is set to be 10 times of those of ImageNet-pretrained weights, in Office-31 and Office-Home datasets. More hyperparameter configurations are summarized in Table 5 (Section E)" }, { "heading": "C PSEUDOCODE", "text": "Due to the limited space, we provide the algorithm of MIAN in this Section. Details about trainingdependent scaling of βt are in Section E.\nAlgorithm 1: Multi-source Information-regularized Adaptation Networks (MIAN) mini-batch size for each domain=m, Number of source domains=N , Training iteration T . M=m(N + 1), Set of domain labels V = {1, . . . , N + 1}. for t← 1 to T do\nX = {xi}Mi=1 is a union of samples {XS1 , . . . , XSN , XT } Y = {yi}mNi=1 is a union of samples {YS1 , . . . 
, YSN } Let zi = F (xi), and ŷi = C(F (xi)),∀xi ∈ X L(h) = − 1M ∑ v∈V ∑ i:vi=v [ 1 T [k=vi] log h(zi) + 1 T [k 6=vi] log(1− h(zi))\n] Backpropagate gradient of L(h), or the variant (Mao et al. (2017)), to h. L(F,C) = − 1mN ∑ y∈Y ∑ i:yi=y [ 1 T [k=yi] log ŷi ]\nβt = β0 · 2 ( 1− 11+exp(−σ·t/T ) ) // See Appendix E\nL(F ) = L(F,C)− βtL(h) Backpropagate gradient of L(F ) to F . Backpropagate gradient of L(F,C) to C." }, { "heading": "D ADDITIONAL RESULTS", "text": "Visualization of learned latent representations. We visualized domain-independent representations extracted by the input layer of the classifier with t-SNE (Figure 6). Before the adaptation process, the representations from the target domain were isolated from the representations from each source domain. However, after adaptation, the representations were well-aligned with respect to the class of digits, as opposed to the domain.\nHyperparameter sensitivity. We conducted the analysis on hyperparameter sensitivity with degree of regularization β. The target domain is set as Amazon or Art, where the value β0 changes from 0.1 to 0.5. The accuracy is high when β0 is approximately between 0.1 and 0.3. We thus choose β0 = 0.2 for Office-31, and β0 = 0.3 for Office-Home." }, { "heading": "E DECAYING BATCH SPECTRAL PENALIZATION", "text": "In this Section, we provides details on the Decaying Batch Spectral Penalization (DBSP) which expands MIAN into MIAN-γ.\nE.1 BACKGROUNDS\nThere is little motivation for models to control the complex mutual dependence to domains if reducing the entropy of representations is sufficient to optimize the value of I(Z;V ) = H(Z) − H(Z | V ). If so, such implicit entropy minimization substantially reduce the upper bound of I(Z;Y ), leading to a increase in optimal joint risk λ∗. In other words, the decrease in the entropy of representations may occur as the side effect of I(Z;V ) regularization. Such unexpected side effect of information regularization is highly intertwined with the hidden deterioration of discriminability through adversarial training (Chen et al. (2019); Liu et al. (2019)).\nBased on these insights, we employ the SVD-entropy HSV D(Z) (Alter et al. (2000)) of a representation matrix Z to assess the richness of the latent representations during adaptation, since it is difficult to compute H(Z). Note that while HSV D(Z) is not precisely equivalent to H(Z), HSV D(Z) can be used as a proxy of the level of disorder of the given matrix (Newton & DeSalvo (2010)). In future works, it would be interesting to evaluate the temporal change in entropy with other metrics. We found that HSV D(Z) indeed decreases significantly during adversarial adaptation, suggesting that some eigenfeatures (or eigensamples) become redundant and, thus, the inherent feature-richness diminishes. (Figure 7a) To preclude such deterioration, we employ Batch Spectral Penalization (BSP) (Chen et al. (2019)), which imposes a constraint on the largest singular value to solicit the contribution of other eigenfeatures. The overall objective function in the multi-domain setting is defined as:\nmin F,C L(F,C) + βÎ(Z;V ) + γ N+1∑ i=1 k∑ j=1 s2i,j , (28)\nwhere β and γ are Lagrangian multipliers and si,j is the jth singular value from the ith domain. We found that SVD entropy of representations is severely deteriorated especially in the early stages of training, suggesting the possibility of over-regularization. The noisy domain discriminative signals in the initial phase (Ganin et al. (2016)) may distort and simplify the representations. 
To circumvent the impaired discriminability in the early stages of the training, the discriminability should be prioritized first with high γ and low β, followed by a gradual decaying and annealing in γ and β, respectively, so that a sufficient level of domain transferability is guaranteed. Based on our temporal analysis, we introduce the training-dependent scaling of β and γ by modifying the progressive training schedule (Ganin et al. (2016)):\nβp = β0 · 2 ( 1− 1 1 + exp(−σ · p) )\nγp = γ0 · ( 2 1 + exp(−σ · p) − 1 ) ,\n(29)\nwhere β0 and γ0 are initial values, σ is a decaying parameter, and p is the training progress from 0 to 1. We refer to this version of our model as MIAN-γ. Note that MIAN only includes annealing-β, excluding DBSP. For the proposed method, β0 is chosen from {0.1, 0.2, 0.3, 0.4, 0.5} for Office-31 and Office-Home dataset, while β0 = 1.0 is fixed in Digits-Five. γ0 is fixed to {1e−4} following Chen et al. (2019).\nSVD-entropy. We evaluated the degree of compromise of SVD-entropy owing to transfer learning. For this, DSLR was fixed as the source domain, and each Webcam and Amazon target domain\nwas used to simulate low (DSLR→Webcam; DW) and high domain (DSLR→Amazon; DA) shift conditions, respectively. SVD-entropy was applied to the representation matrix extracted from ResNet50 and MIAN (denoted as Adapt in Figure 7a) with constant β = 0.1. For accurate assessment, we avoided using spectral penalization. As depicted in the Figure 7a, adversarial adaptation, or information regularization, significantly decreases the SVD-entropy of both the source and target domain representations, especially in the early stages of training, indicating that the representations are simplified in terms of feature-richness. Moreover, when comparing the Adapt_DA_source and Adapt_DW_source conditions, we found that SVD-entropy decreases significantly as the degree of domain shift increases.\nWe additionally conducted analyses on temporal changes of SVD entropy by comparing BSP and decaying BSP (Figure 7b). SVD entropy gradually decreases as the degree of compensation decreases in DBSP which leads to improved transferability and accuracy. Thus DBSP can control the trade-off between the richness of the feature representations and adversarial adaptation as the training proceeds.\nAblation study. We performed an ablation study to assess the contribution of the decaying spectral penalization and annealing information regularization to DA performance (Table 6, 7). We found that the prioritization of feature-richness in early stages (by controlling β and γ) significantly improves the performance. We also found that the constant penalization schedule (Chen et al. (2019)) is not reliable and sometimes impedes transferability in the low domain shift condition (Webcam, DSLR in Table 6). This implies that the conventional BSP may over-regularize the transferability when the degree of domain shift and SVD-entropy decline are relatively small." } ]
2020
null
SP:825132782872f2167abd5e45773bfdef83e4bb2e
[ "This paper tackles the problem of geometrical and topological 3D reconstruction of a (botanical) tree using a drone-mounted stereo vision system and deep learning-based/aided tree branch image annotation procedures. This is an interesting computer vision 3D reconstruction task, which has important practical applications (e.g in AR/VR or for plant phonemics study), however has not been extensively researched in the past. Part of the reasons are due to some unique challenges that the problem of tree reconstruction is facing, in particular, how to accurately recover complex visual occlusions caused by dense tree branches and leaves, and how to ensure the reconstructed topology is accurate. " ]
We tackle the challenging problem of creating full and accurate three dimensional reconstructions of botanical trees with the topological and geometric accuracy required for subsequent physical simulation, e.g. in response to wind forces. Although certain aspects of our approach would benefit from various improvements, our results exceed the state of the art especially in geometric and topological complexity and accuracy. Starting with two dimensional RGB image data acquired from cameras attached to drones, we create point clouds, textured triangle meshes, and a simulatable and skinned cylindrical articulated rigid body model. We discuss the pros and cons of each step of our pipeline, and in order to stimulate future research we make the raw and processed data from every step of the pipeline as well as the final geometric reconstructions publicly available.
[ { "affiliations": [], "name": "SIMULATABLE GEOMETRY" } ]
[ { "authors": [ "Sameer Agarwal", "Noah Snavely", "Ian Simon", "Steven M Seitz", "Richard Szeliski" ], "title": "Building rome in a day", "venue": "In Computer Vision,", "year": 2009 }, { "authors": [ "Iro Armeni", "Ozan Sener", "Amir R Zamir", "Helen Jiang", "Ioannis Brilakis", "Martin Fischer", "Silvio Savarese" ], "title": "3d semantic parsing of large-scale indoor spaces", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Iro Armeni", "Sasha Sax", "Amir R Zamir", "Silvio Savarese" ], "title": "Joint 2d-3d-semantic data for indoor scene understanding", "venue": "arXiv preprint arXiv:1702.01105,", "year": 2017 }, { "authors": [ "Jules Bloomenthal" ], "title": "Modeling the mighty maple", "venue": "In ACM SIGGRAPH Computer Graphics,", "year": 1985 }, { "authors": [ "Navneet Dalal", "Bill Triggs" ], "title": "Histograms of oriented gradients for human detection", "venue": "In Computer Vision and Pattern Recognition,", "year": 2005 }, { "authors": [ "Rolando Estrada", "Carlo Tomasi", "Scott C Schmidler", "Sina Farsiu" ], "title": "Tree topology estimation", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence,", "year": 2015 }, { "authors": [ "Martin A Fischler", "Robert C Bolles" ], "title": "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", "venue": "Communications of the ACM,", "year": 1981 }, { "authors": [ "Alvaro Fuentes", "Sook Yoon", "Sang Cheol Kim", "Dong Sun Park" ], "title": "A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition", "venue": null, "year": 2022 }, { "authors": [ "Yasutaka Furukawa", "Jean Ponce" ], "title": "Accurate, dense, and robust multiview stereopsis", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2010 }, { "authors": [ "Jingwei Huang", "Angela Dai", "Leonidas J Guibas", "Matthias Nießner" ], "title": "3DLite: towards commodity 3d scanning for content creation", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Anil K Jain", "Farshid Farrokhnia" ], "title": "Unsupervised texture segmentation using gabor filters", "venue": "Pattern recognition,", "year": 1991 }, { "authors": [ "Yifeng Jiang", "C Karen Liu" ], "title": "Data-augmented contact model for rigid body simulation", "venue": "arXiv preprint arXiv:1803.04019,", "year": 2018 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and superresolution", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Angjoo Kanazawa", "Shubham Tulsiani", "Alexei A. Efros", "Jitendra Malik" ], "title": "Learning category-specific mesh reconstruction from image collections", "venue": "In The European Conf. on Comput. Vision (ECCV),", "year": 2018 }, { "authors": [ "Alina Kloss", "Stefan Schaal", "Jeannette Bohg" ], "title": "Combining learned and analytical models for predicting action effects", "venue": "arXiv preprint arXiv:1710.04102,", "year": 2017 }, { "authors": [ "Guosheng Lin", "Anton Milan", "Chunhua Shen", "Ian Reid" ], "title": "Refinenet: Multi-path refinement networks for highresolution semantic segmentation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Aristid Lindenmayer" ], "title": "Mathematical models for cellular interactions in development i. 
filaments with one-sided inputs", "venue": "Journal of theoretical biology,", "year": 1968 }, { "authors": [ "Yotam Livny", "Feilong Yan", "Matt Olson", "Baoquan Chen", "Hao Zhang", "Jihad El-sana" ], "title": "Automatic reconstruction of tree skeletal structures from point clouds", "venue": "Proc. SIGGRAPH Asia 2010,", "year": 2010 }, { "authors": [ "Matthew M Loper", "Michael J Black" ], "title": "OpenDR: An approximate differentiable renderer", "venue": "In European Conf. on Comput. Vision,", "year": 2014 }, { "authors": [ "Gellért Máttyus", "Wenjie Luo", "Raquel Urtasun" ], "title": "Deeproadmapper: Extracting road topology from aerial images", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Agata Mosinska", "Pablo Mrquez-Neila", "Mateusz Koziski", "Pascal Fua" ], "title": "Beyond the pixel-wise loss for topology-aware delineation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Pierre Moulon", "Pascal Monasse", "Romuald Perrot", "Renaud Marlet" ], "title": "Openmvg: Open multiple view geometry", "venue": "In International Workshop on Reproducible Research in Pattern Recognition,", "year": 2016 }, { "authors": [ "Xue Bin Peng", "Glen Berseth", "KangKang Yin", "Michiel Van De Panne" ], "title": "Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Przemyslaw Prusinkiewicz", "Mark Hammel", "Jim Hanan", "Radomı́r Měch" ], "title": "Visual models of plant development", "venue": "In Handbook of formal languages,", "year": 1997 }, { "authors": [ "Guoxiang Qu", "Wenwei Zhang", "Zhe Wang", "Xing Dai", "Jianping Shi", "Junjun He", "Fei Li", "Xiulan Zhang", "Yu Qiao" ], "title": "Stripnet: Towards topology consistent strip structure segmentation", "venue": "In Proceedings of the 26th ACM International Conference on Multimedia, MM", "year": 2018 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computer-assisted intervention,", "year": 2015 }, { "authors": [ "Johannes L Schönberger", "Jan-Michael Frahm" ], "title": "Structure-from-motion revisited", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Steven M Seitz", "Brian Curless", "James Diebel", "Daniel Scharstein", "Richard Szeliski" ], "title": "A comparison and evaluation of multi-view stereo reconstruction algorithms", "venue": "In Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR),", "year": 2006 }, { "authors": [ "Ondrej Stava", "Sören Pirk", "Julian Kratt", "Baoquan Chen", "Radomı́r Měch", "Oliver Deussen", "Bedrich Benes" ], "title": "Inverse procedural modelling of trees", "venue": "In Computer Graphics Forum,", "year": 2014 }, { "authors": [ "Ping Tan", "Gang Zeng", "Jingdong Wang", "Sing Bing Kang", "Long Quan" ], "title": "Image-based tree modeling", "venue": "In ACM Trans. Graph.,", "year": 2007 }, { "authors": [ "Carles Ventura", "Jordi Pont-Tuset", "Sergi Caelles", "Kevis-Kokitsi Maninis", "Luc Van Gool" ], "title": "Iterative deep learning for road topology extraction", "venue": "In Proc. of the British Machine Vision Conf. 
(BMVC),", "year": 2018 }, { "authors": [ "Jason Weber", "Joseph Penn" ], "title": "Creation and rendering of realistic trees", "venue": "In Proc. 22nd Ann. Conf. Comput. Graph. Int. Tech.,", "year": 1995 }, { "authors": [ "Changchang Wu" ], "title": "VisualSFM: A visual structure from motion system", "venue": "http://ccwu.me/vsfm/,", "year": 2011 }, { "authors": [ "Changchang Wu" ], "title": "Towards linear-time incremental structure from motion", "venue": "International Conference on,", "year": 2013 }, { "authors": [ "Ke Xie", "Feilong Yan", "Andrei Sharf", "Oliver Deussen", "Baoquan Chen", "Hui Huang" ], "title": "Tree modeling with real tree-parts examples", "venue": "IEEE TVCG,", "year": 2015 }, { "authors": [ "Weipeng Xu", "Avishek Chatterjee", "Michael Zollhöfer", "Helge Rhodin", "Dushyant Mehta", "Hans-Peter Seidel", "Christian Theobalt" ], "title": "MonoPerfCap: Human performance capture from monocular video", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Tianfan Xue", "Jiajun Wu", "Zhoutong Zhang", "Chengkai Zhang", "Joshua B. Tenenbaum", "William T. Freeman" ], "title": "Seeing tree structure from vibration", "venue": "In The European Conf. on Comput. Vision (ECCV),", "year": 2018 }, { "authors": [ "Ying Zheng", "Steve Gu", "Herbert Edelsbrunner", "Carlo Tomasi", "Philip Benfey" ], "title": "Detailed reconstruction of 3d plant root shape", "venue": "In Comput. Vision (ICCV),", "year": 2011 }, { "authors": [ "Jin Zhou", "Ananya Das", "Feng Li", "Baoxin Li" ], "title": "Circular generalized cylinder fitting for 3d reconstruction in endoscopic imaging based on MRF", "venue": "In Comput. Vision and Pattern Recognition Workshops,", "year": 2008 }, { "authors": [ "Silvia Zuffi", "Angjoo Kanazawa", "Michael J Black" ], "title": "Lions and tigers and bears: Capturing non-rigid, 3d, articulated shape from images", "venue": "In Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Human-inhabited outdoor environments typically contain ground surfaces such as grass and roads, transportation vehicles such as cars and bikes, buildings and structures, and humans themselves, but are also typically intentionally populated by a large number of trees and shrubbery; most of the motion in such environments comes from humans, their vehicles, and wind-driven plants/trees. Tree reconstruction and simulation are obviously useful for AR/VR, architectural design and modeling, film special effects, etc. For example, when filming actors running through trees, one would like to create virtual versions of those trees with which a chasing dinosaur could interact. Other uses include studying roots and plants for agriculture (Zheng et al., 2011; Estrada et al., 2015; Fuentes et al., 2017) or assessing the health of trees especially in remote locations (similar in spirit to Zuffi et al. (2018)). 2.5D data, i.e. 2D images with some depth information, is typically sufficient for robotic navigation, etc.; however, there are many problems that require true 3D scene understanding to the extent one could 3D print objects and have accurate geodesics. Whereas navigating around objects might readily generalize into categories or strategies such as ‘move left,’ ‘move right,’ ‘step up,’ ‘go under,’ etc., the 3D object understanding required for picking up a cup, knocking down a building, moving a stack of bricks or a pile of dirt, or simulating a tree moving in the wind requires significantly higher fidelity. As opposed to random trial and error, humans often use mental simulations to better complete a task, e.g. consider stacking a card tower, avoiding a falling object, or hitting a baseball (visualization is quite important in sports); thus, physical simulation can play an important role in end-to-end tasks, e.g. see Kloss et al. (2017); Peng et al. (2017); Jiang & Liu (2018) for examples of combining simulation and learning.\nAccurate 3D shape reconstruction is still quite challenging. Recently, Malik argued1 that one should not apply general purpose reconstruction algorithms to say a car and a tree and expect both reconstructions to be of high quality. Rather, he said that one should use domain-specific knowledge as he has done for example in Kanazawa et al. (2018). Another example of this specialization strategy is to rely on the prior that many indoor surfaces are planar in order to reconstruct office spaces (Huang et al., 2017) or entire buildings (Armeni et al., 2016; 2017). Along the same lines, Zuffi et al. (2018) uses a base animal shape as a prior for their reconstructions of wild animals. Thus, we similarly take a specialized approach using a generalized cylinder prior for both large and medium scale features.\nIn Section 3, we discuss our constraints on data collection as well as the logistics behind the choices we made for the hardware (cameras and drones) and software (structure from motion, multi-view\n1Jitendra Malik, Stanford cs231n guest lecture, 29 May 2018\nstereo, inverse rendering, etc.) used to obtain our raw and processed data. Section 4 discusses our use of machine learning, and Section 5 presents a number of experimental results. In Appendices A, B, and C we describe how we create geometry from the data with enough efficacy for physical simulation." 
}, { "heading": "2 PREVIOUS WORK", "text": "Tree Modeling and Reconstruction: Researchers in computer graphics have been interested in modeling trees and plants for decades (Lindenmayer, 1968; Bloomenthal, 1985; Weber & Penn, 1995; Prusinkiewicz et al., 1997; Stava et al., 2014). SpeedTree2 is probably the most popular software utilized, and their group has begun to consider the incorporation of data-driven methods. Amongst the data-driven approaches, Tan et al. (2007) is most similar to ours combining point cloud and image segmentation data to build coarse-scale details of a tree; however, they generate fine-scale details procedurally using a self-similarity assumption and image-space growth constraints, whereas we aim to capture more accurate finer structures from the image data. Other data-driven approaches include Livny et al. (2010) which automatically estimates skeletal structure of trees from point cloud data, Xie et al. (2015) which builds tree models by assembling pieces from a database of scanned tree parts, etc.\nMany of these specialized, data-driven approaches for trees are built upon more general techniques such as the traditional combination of structure from motion (see e.g. Wu (2013)) and multi-view stereo (see e.g. Furukawa & Ponce (2010)). In the past, researchers studying 3D reconstruction have engineered general approaches to reconstruct fine details of small objects captured by sensors in highly controlled environments (Seitz et al., 2006). At the other end of the spectrum, researchers have developed approaches for reconstructing building- or even city-scale objects using large amounts of image data available online (Agarwal et al., 2009). Our goal is to obtain a 3D model of a tree with elements from both of these approaches: the scale of a large structure with the fine details of its many branches and twigs. However, unlike in general reconstruction approaches, we cannot simply collect images online or capture data using a high-end camera.\nTo address similar challenges in specialized cases, researchers take advantage of domain-specific prior knowledge. Zhou et al. (2008) uses a generalized cylinder prior (similar to us) for reconstructing tubular structures observed during medical procedures and illustrates that this approach performs better than simple structure from motion. The process of creating a mesh that faithfully reflects topology and subsequently refining its geometry is similar in spirit to Xu et al. (2018), which poses a human model first via its skeleton and then by applying fine-scale deformations.\nLearning and Networks: So far, our use of networks is limited to segmentation tasks, where we rely on segmentation masks for semi-automated tree branch labeling. Due to difficulties in getting sharp details from convolutional networks, the study of network-based segmentation of thin structures is still an active field in itself; there has been recent work on designing specialized multiscale architectures (Ronneberger et al., 2015; Lin et al., 2017; Qu et al., 2018) and also on incorporating perceptual losses (Johnson et al., 2016) during network training (Mosinska et al., 2018)." }, { "heading": "3 RAW AND PROCESSED DATA", "text": "As a case study, we select a California oak (quercus agrifolia) as our subject for tree reconstruction and simulation (see Figure 1). 
The mere size of this tree imposes a number of restrictions on our data capture: one has to deal with an outdoor, unconstrained environment, wind and branch motion will be an issue, it will be quite difficult to observe higher up portions of the tree especially at close proximities, there will be an immense number of occluded regions because of the large number of branches that one cannot see from any feasible viewpoint, etc.\nIn an outdoor setting, commodity structured light sensors that use infrared light (e.g. the Kinect) fail to produce reliable depth maps as their projected pattern is washed out by sunlight; thus, we opted to use standard RGB cameras. Because we want good coverage of the tree, we cannot simply capture images from the ground; instead, we mounted our cameras on a quadcopter drone that was piloted around the tree. The decision to use a drone introduces additional constraints: the cameras must be\n2https://speedtree.com\nlightweight, the camera locations cannot be known a priori, the drone creates its own air currents which can affect the tree’s motion, etc. Balancing the weight constraint with the benefits of using cameras with a global shutter and minimal distortion, we mounted a pair of Sony rx100 v cameras to a DJI Matrice 100 drone. We calibrated the stereo offset between the cameras before flight, and during flight each camera records a video with 4K resolution at 30 fps.\nData captured in this manner is subject to a number of limitations. Compression artifacts in the recorded videos may make features harder to track than when captured in a RAW format. Because the drone must keep a safe distance from the tree, complete 360◦ coverage of a given branch is often infeasible. This lack of coverage is compounded by occlusions caused by other branches and leaves (in seasons when the latter are present). Furthermore, the fact that the tree may be swaying slightly in the wind even on a calm day violates the rigidity assumption upon which many multi-view reconstruction algorithms rely. Since we know from the data collection phase that our data coverage will be incomplete, we will need to rely on procedural generation, inpainting, “hallucinating” structure, etc. in order to complete the model.\nAfter capturing the raw data, we augment it to begin to estimate the 3D structure of the environment. We subsample the videos at a sparse 1 or 2 fps and use the Agisoft PhotoScan tool3 to run structure from motion and multi-view stereo on those images, yielding a set of estimated camera frames and a dense point cloud. We align cameras and point clouds from separate structure from motion problems by performing a rigid fit on a sparse set of control points. This is a standard workflow also supported by open-source tools (Wu, 2011; Schönberger & Frahm, 2016; Moulon et al., 2016). Some cameras may be poorly aligned (or in some cases, so severely incorrect that they require manual correction). Once the cameras are relatively close, one can utilize an inverse rendering approach like that of Loper & Black (2014) adjusting the misaligned cameras’ parameters relative to the point cloud. In the case of more severely misaligned cameras, one may select correspondences between 3D points and points in the misaligned image and then find the camera’s extrinsics by solving a perspective-n-point problem (Fischler & Bolles, 1981).\nIn the supplemental appendices, we describe our approach to constructing large scale geometry using this processed data. 
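In case it is useful to others working with similar captures, the manual camera-repair step described above is straightforward to script; the following is a minimal sketch using OpenCV's perspective-n-point solver, where the correspondence arrays and pinhole intrinsics K are placeholders for whatever calibration and hand-picked 3D-to-2D matches one actually has (at least four, preferably more).

import numpy as np
import cv2

def recover_extrinsics(points_3d, points_2d, K):
    # points_3d: (n, 3) control points selected from the dense point cloud.
    # points_2d: (n, 2) corresponding pixel locations in the misaligned image.
    # K: (3, 3) camera intrinsics; lens distortion is ignored in this sketch.
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64),
                                  K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    assert ok, "PnP failed; check the 3D-to-2D correspondences"
    R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation matrix
    return R, tvec              # extrinsics [R | t] for the repaired camera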
Recovering “medium” scale structures that are not captured in the point cloud, however, is a problem that lends itself well to a learning-based treatment." }, { "heading": "4 ANNOTATION AND LEARNING", "text": "Annotating images is a challenging task for human labelers and automated methods alike. Branches and twigs heavily occlude one another, connectivity can be difficult to infer, and the path of even a relatively large branch can often not be traced visually from a single view. Thus it is desirable to augment the image data during annotation to aid human labelers.\n3Agisoft PhotoScan, http://www.agisoft.com/\nOne method for aiding the labeler is to automatically extract a “flow field” of vectors tracking the anisotropy of the branches in image space (see Figure 6). The flow field is overlaid on the image in the annotation tool, and the labeler may select endpoints to be automatically connected using the projection-advection scheme discussed in Section 5.3. Section 5.3 also discusses how we generate the flow field itself, after first creating a segmentation mask. Note that segmentation (i.e. discerning tree or not tree for each pixel in the image) is a simpler problem than annotation (i.e. discerning medial axes, topology, and thickness in image space).\nObtaining segmentation masks is straightforward under certain conditions, e.g. in areas where branches and twigs are clearly silhouetted against the grass or sky, but segmentation can be difficult in visually dense regions of an image. Thus, we explore deep learning-based approaches for performing semantic segmentation on images from our dataset. In particular, we use UNet (Ronneberger et al., 2015), a state-of-the-art fully convolutional architecture for segmentation; the strength of this model lies in its many residual connections, which give the model the capacity to retain sharp edges despite its hourglass structure. See Section 5.2 for further discussion." }, { "heading": "5 EXPERIMENTS", "text": "Since the approach to large scale structure discussed in Appendix A works sufficiently well, we focus here on medium scale branches." }, { "heading": "5.1 IMAGE ANNOTATION", "text": "We present a human labeler with an interface for drawing piecewise linear curves on an overlay of a tree image. User annotations consist of vertices with 2D positions in image space, per-vertex branch thicknesses, and edges connecting the vertices. Degree-1 vertices are curve endpoints, degree-2 vertices lie on the interior of a curve, and degree-3 vertices exist where curves connect. A subset of the annotated vertices are additionally given unique identifiers that are used to match common points between images; these will be referred to as “keypoints” and are typically chosen as bifurcation points or points on the tree that are easy to identify in multiple images. See Figure 2.\nWe take advantage of our estimated 3D knowledge of the tree’s environment in order to aid human labelers and move towards automatic labeling. After some annotations have been created, their corresponding 3D structures are generated and projected back into each image, providing rough visual cues for annotating additional images. Additionally, since we capture stereo information, we augment our labeling interface to be aware of stereo pairs: users annotate one image, copy those annotations to the stereo image, and translate the curve endpoints along their corresponding epipolar lines to the correct location in the stereo image. 
This curve translation constrained to epipolar lines (with additional unconstrained translation if necessary to account for error) is much less time consuming than labeling the stereo image from scratch.\nHuman labelers often identify matching branches and twigs across images by performing human optical flow, toggling between adjacent frames of the source video and using the parallax effect to determine branch connectivity. This practice is an obvious candidate for automation, e.g. by annotating an initial frame then automatically carrying the annotated curves through subsequent frames via optical flow. Unfortunately, the features of interest are often extremely small and thin and the image data contains compression artifacts, making automatic optical flow approaches quite difficult. However, it is our hope that in future work the same tools that aid human labelers can be applied to automatic approaches making them more effective for image annotation." }, { "heading": "5.2 DEEP LEARNING", "text": "In order to generate flow fields for assisting the human labeler as discussed in Section 4, we first obtain semantic segmentations of tree and not tree using a deep learning approach. To train a network for semantic segmentation, we generate a training dataset by rasterizing the image annotations as binary segmentation masks of the labeled branches. From these 4K masks, we then generate a dataset of 512 × 512 crops containing more than 4000 images. The crop centers are guaranteed to be at least 50 pixels away from one another, and each crop is guaranteed to correspond to a segmentation mask containing both binary values. The segmentation problem on the raw 4K images must work on image patches with distinctly different characteristics: the more straightforward case of branches silhouetted against the grass, and the more complex case of highly dense branch regions. Therefore, we split the image patches into two sets via k-means clustering, and train two different models to segment the two different cases. For the same number of training epochs, our two-model approach yields qualitatively better results than the single-model approach.\nInstead of directly using the standard binary cross entropy loss, the sparseness and incompleteness of our data led us to use a weighted variant, in order to penalize false negatives more than false positives. As a further step to induce smoothness and sparsity in our results, we introduce a second order regularizer through the L2 difference of the output and ground truth masks’ gradients. We also experiment with an auxiliary loss similar to the VGG perceptual loss described in Mosinska et al. (2018), but instead of using arbitrary feature layers of a pretrained network, we look at the L1 difference of hand-crafted multiscale directional activation maps. These activation maps are produced by convolving the segmentation mask with a series of Gabor filter-esque (Jain & Farrokhnia, 1991) feature kernels {k(θ, r, σ) : R2 → [0, . . . , N ]2}, where each kernel is scale-aware and piecewise sinusoidal (see Figure 3). A given kernel k(θ, r, σ) detects branches that are at an angle θ and have thicknesses within the interval [r, σr]. For our experiments, we generate 18 kernels spaced 10 degrees apart and use N = 35, r = 4, and σ = 1.8.\nFigure 4 illustrates two annotated images used in training and the corresponding learned semantic segmentations. 
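To make the loss described above concrete, the following is a minimal PyTorch-style sketch of the weighted binary cross entropy combined with the second-order (mask-gradient) regularizer; the positive-class weight, the regularization weight, and the forward-difference gradients are illustrative choices, and the auxiliary directional-activation loss is omitted.

import torch
import torch.nn.functional as nnf

def branch_segmentation_loss(pred_logits, target, pos_weight=5.0, lam=0.1):
    # pred_logits, target: (B, 1, H, W); target is the rasterized branch mask.
    # Weighted BCE: false negatives (missed branch pixels) are penalized more
    # than false positives, since the annotations are sparse and incomplete.
    bce = nnf.binary_cross_entropy_with_logits(
        pred_logits, target, pos_weight=torch.tensor(pos_weight))
    # L2 difference between the spatial gradients of the predicted and ground
    # truth masks, computed here with simple forward differences.
    pred = torch.sigmoid(pred_logits)
    dxp, dyp = pred[..., :, 1:] - pred[..., :, :-1], pred[..., 1:, :] - pred[..., :-1, :]
    dxt, dyt = target[..., :, 1:] - target[..., :, :-1], target[..., 1:, :] - target[..., :-1, :]
    grad_l2 = ((dxp - dxt) ** 2).mean() + ((dyp - dyt) ** 2).mean()
    return bce + lam * grad_l2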
Note that areas of the semantic segmentation that are not part of the labeled annotation may correspond to true branches or may be erroneous; for the time being a human must still choose which pieces of the semantic segmentation to use in adding further annotations." }, { "heading": "5.3 LEARNING-ASSISTED ANNOTATION", "text": "To generate a flow field, we create directional activation maps as in Section 5.2 again using the kernels from Figure 3, then perform a clustering step on the resulting per-pixel histograms of gradients (Dalal & Triggs, 2005) to obtain flow vectors. Each pixel labeled as tree with sufficient confidence is assigned one or more principal directions; pixels with more than one direction are\npotentially branching points. We find the principal directions by detecting clusters in each pixel’s activation weights; for each cluster, we take the sum of all relevant directional slopes weighted by their corresponding activation values.\nHaving generated a flow field of sparse image space vectors, we trace approximate medial axes through the image via an alternating projection-advection scheme. From a given point on a branch, we estimate the thickness of the branch by examining the surrounding flow field and project the point to the estimated center of the branch. We then advect the point through the flow field and repeat this process. In areas with multiple directional activations (e.g. at branch crossings or bifurcations), our advection scheme prefers the direction that deviates least from the previous direction. More details about this scheme may be found in the supplemental material. By applying this strategy to flow fields generated from ground truth image segmentations, we are able to recover visually plausible medial axes (see Figure 5). However, medial axes automatically extracted from images without ground truth labels are error prone. Thus, we overlay the flow field on the annotation interface and rely on the human labeler. The labeler may select curve endpoints in areas where the flow field is visually plausible, and these endpoints are used to guide the medial axis generation. See Figure 6 for an example flow field generated from the learned segmentation mask and the supplemental material for a demonstration of semi-automated medial axis generation." }, { "heading": "5.4 RECOVERING MEDIUM SCALE BRANCHES", "text": "Given a set of image annotations and camera extrinsics obtained via structure from motion and stereo calibration, we first construct piecewise linear branches in 3D. We triangulate keypoints that have been labeled in multiple images, obtaining 3D positions by solving for the point that minimizes the sum of squared distances to the rays originating at each camera’s optical center and passing through the camera’s annotated keypoint. We then transfer the topology of the annotations to the 3D model by connecting each pair of 3D keypoints with a line segment if a curve exists between the corresponding keypoint pair in any image annotation.\nNext, we subdivide and perturb the linear segments connecting the 3D keypoints to match the curvature of the annotated data. Each segment between two keypoints is subdivided by introducing additional vertices evenly spaced along the length of the segment. For each newly introduced vertex, we consider the point that is the same fractional length along the image-space curve between the corresponding annotated keypoints in each image for which such a curve exists. 
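The keypoint triangulation above reduces to a small linear least-squares problem; a minimal NumPy sketch follows (the function name is illustrative, and at least two non-parallel rays are assumed so that the system is well conditioned).

```python
import numpy as np

def triangulate_from_rays(origins, directions):
    """Return the 3D point minimizing the sum of squared distances to a set of
    rays, each given by a camera optical center origins[i] and a direction
    directions[i] through that camera's annotated keypoint."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=np.float64)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the ray
        A += P
        b += P @ np.asarray(o, dtype=np.float64)
    return np.linalg.solve(A, b)         # normal equations of the least-squares problem
```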
We trace rays through these intra-curve points to triangulate the position of each new vertex in the same way that we triangulated the original keypoints.\nFinally, we estimate the thickness of each 3D vertex beginning with the 3D keypoints. We estimate the world space thickness of each keypoint by considering the corresponding thickness in all annotated camera frames. For each camera in which the keypoint is labeled, we estimate world space thickness using similar triangles, then average these estimates to get the final thickness value. We then set the branch thickness of each of the vertices obtained through subdivision simply by interpolating between the thicknesses of the keypoints at either end of the 3D curve. Using this strategy, we recover a set of 3D positions with local cross-sectional thicknesses connected by edges, which is equivalent to the generalized cylinder representation employed in Appendix A.\nThe human users of our annotation tools encounter the traditional trade-off of stereo vision: it is easy to identify common features in images with a small baseline, but these features triangulate poorly exhibiting potentially extreme variance in the look-at directions of the corresponding cameras. Conversely, cameras whose look-at directions are close to orthogonal yield more stable triangulations, but common features between such images are more difficult to identify. One heuristic approach is to label each keypoint three times: twice in similar images and once from a more diverse viewpoint. However, it may be the case that some branches are only labeled in two images with a small baseline (e.g. a stereo pair). In this case, we propose a clamping strategy based on the topological prior of the tree. Designating a “root” vertex of a subtree for such annotations, we triangulate the annotated keypoints as usual obtaining noisy positions in the look-at directions of the stereo cameras. We then march from the root vertex to the leaf vertices. For each vertex p with location px, we consider\neach outboard child vertex c with location cx. For each camera in which the point c is labeled, we consider the intersection of the ray from the location of c’s annotation to cx with the plane parallel to the image plane that contains px; let c′x be the intersection point. We then clamp the location of c between c′x and the original location cx based on a user-specified fraction. This process is repeated for each camera in which c is annotated, and we obtain the final location for c by averaging the clamped location from each camera. See Figure 7." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We presented an end-to-end pipeline for reconstructing a 3D model of a botanical tree from RGB image data. Our reconstructed model may be readily simulated to create motion in response to external forces, e.g. to model the tree blowing in the wind (see Figure 8). We use generalized cylinders to initialize an articulated rigid body system, noting that one could subdivide these primitives as desired for additional bending degrees of freedom, or decrease their resolution for faster performance on mobile devices. The simulated bodies drive the motion of the textured triangulated surfaces and/or the point cloud data as desired.\nAlthough we presented one set of strategies to go all the way from the raw data to a simulatable mesh, it is our hope that various researchers will choose to apply their expertise to and improve upon various stages of this pipeline, yielding progressively better results. 
In particular, the rich topological information in the annotations has great potential for additional deep learning applications, especially topology extraction (Ventura et al., 2018; Máttyus et al., 2017; Xue et al., 2018) and 2D-to-3D topology generation (Estrada et al., 2015)." } ]
2020
null
SP:8e3a07ed19e7b0c677aae1106da801d246f5aa0c
[ "This paper addresses the task of adversarial defense, particularly against untargeted attacks. It starts from the observation that these attacks mostly minimize the perturbation and the classification loss, and proposes a new training strategy named Target Training. The method duplicates training examples with a special ground-truth label to fool adversarial attackers. Experiments are conducted on MNIST and CIFAR10 under several attacks." ]
Recent adversarial defense approaches have failed. Untargeted gradient-based attacks cause classifiers to choose any wrong class. Our novel white-box defense tricks untargeted attacks into becoming attacks targeted at designated target classes. From these target classes, we derive the real classes. The Target Training defense tricks the minimization at the core of untargeted, gradientbased adversarial attacks: minimize the sum of (1) perturbation and (2) classifier adversarial loss. Target Training changes the classifier minimally, and trains it with additional duplicated points (at 0 distance) labeled with designated classes. These differently-labeled duplicated samples minimize both terms (1) and (2) of the minimization, steering attack convergence to samples of designated classes, from which correct classification is derived. Importantly, Target Training eliminates the need to know the attack and the overhead of generating adversarial samples of attacks that minimize perturbations. Without using adversarial samples and against an adaptive attack aware of our defense, Target Training exceeds even default, unsecured classifier accuracy of 84.3% for CIFAR10 with 86.6% against DeepFool attack; and achieves 83.2% against CW-L2(κ=0) attack. Using adversarial samples, we achieve 75.6% against CW-L2(κ=40). Due to our deliberate choice of low-capacity classifiers, Target Training does not withstand L∞ adaptive attacks in CIFAR10 but withstands CW-L∞(κ=0) in MNIST. Target Training presents a fundamental change in adversarial defense strategy.
[]
[ { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul F. Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in AI safety", "venue": "CoRR, abs/1606.06565,", "year": 2016 }, { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "arXiv preprint arXiv:1707.07397,", "year": 2017 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Defensive distillation is not robust to adversarial examples", "venue": "arXiv preprint arXiv:1607.04311,", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Magnet and” efficient defenses against adversarial attacks” are not robust to adversarial examples", "venue": "arXiv preprint arXiv:1711.08478,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Jianbo Chen", "Michael I Jordan", "Martin J Wainwright" ], "title": "Hopskipjumpattack: A query-efficient decision-based attack", "venue": "arXiv preprint arXiv:1904.02144,", "year": 1904 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Zhihua Cui", "Fei Xue", "Xingjuan Cai", "Yang Cao", "Gai-ge Wang", "Jinjun Chen" ], "title": "Detection of malicious code variants based on deep learning", "venue": "IEEE Transactions on Industrial Informatics,", "year": 2018 }, { "authors": [ "Guneet S Dhillon", "Kamyar Azizzadenesheli", "Zachary C Lipton", "Jeremy Bernstein", "Jean Kossaifi", "Aran Khanna", "Anima Anandkumar" ], "title": "Stochastic activation pruning for robust adversarial defense", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul 
Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Robust physical-world attacks on deep learning visual classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Oliver Faust", "Yuki Hagiwara", "Tan Jen Hong", "Oh Shu Lih", "U Rajendra Acharya" ], "title": "Deep learning for healthcare applications based on physiological signals: a review", "venue": "Computer methods and programs in biomedicine,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shengyuan Hu", "Tao Yu", "Chuan Guo", "Wei-Lun Chao", "Kilian Q Weinberger" ], "title": "A new defense against adversarial images: Turning a weakness into a strength", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 and cifar-100 datasets", "venue": "URl: https://www. cs. toronto. edu/kriz/cifar. html,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "Christopher JC Burges" ], "title": "The mnist database of handwritten digits", "venue": "URL http://yann. lecun. 
com/exdb/mnist,", "year": 1998 }, { "authors": [ "Yingzhen Li", "John Bradshaw", "Yash Sharma" ], "title": "Are generative classifiers more robust to adversarial attacks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Dongyu Meng", "Hao Chen" ], "title": "Magnet: a two-pronged defense against adversarial examples", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Omar Fawzi", "Pascal Frossard" ], "title": "Universal adversarial perturbations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Maria-Irina Nicolae", "Mathieu Sinn", "Minh Ngoc Tran", "Beat Buesser", "Ambrish Rawat", "Martin Wistuba", "Valentina Zantedeschi", "Nathalie Baracaldo", "Bryant Chen", "Heiko Ludwig", "Ian Molloy", "Ben Edwards" ], "title": "Adversarial robustness toolbox v1.0.1", "venue": null, "year": 2018 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Improving adversarial robustness via promoting ensemble diversity", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Yinpeng Dong", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Rethinking softmax crossentropy loss for adversarial robustness", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Ian J. 
Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "CoRR, abs/1605.07277,", "year": 2016 }, { "authors": [ "Kevin Roth", "Yannic Kilcher", "Thomas Hofmann" ], "title": "The odds are odd: A statistical test for detecting adversarial examples", "venue": null, "year": 1902 }, { "authors": [ "Sara Sabour", "Yanshuai Cao", "Fartash Faghri", "David J Fleet" ], "title": "Adversarial manipulation of deep representations", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sanchari Sen", "Balaraman Ravindran", "Anand Raghunathan" ], "title": "Empir: Ensembles of mixed precision deep networks for increased robustness against adversarial attacks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian J. Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Florian Tramèr", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "The space of transferable adversarial examples", "venue": "arXiv preprint arXiv:1704.03453,", "year": 2017 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Gunjan Verma", "Ananthram Swami" ], "title": "Error correcting output codes improve probability estimation and adversarial robustness of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural network classifiers are vulnerable to malicious adversarial samples that appear indistinguishable from original samples (Szegedy et al., 2013), for example, an adversarial attack can make a traffic stop sign appear like a speed limit sign (Eykholt et al., 2018) to a classifier. An adversarial sample created using one classifier can also fool other classifiers (Szegedy et al., 2013; Biggio et al., 2013), even ones with different structure and parameters (Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016b; Tramèr et al., 2017b). This transferability of adversarial attacks (Papernot et al., 2016b) matters because it means that classifier access is not necessary for attacks. The increasing deployment of neural network classifiers in security and safety-critical domains such as traffic (Eykholt et al., 2018), autonomous driving (Amodei et al., 2016), healthcare (Faust et al., 2018), and malware detection (Cui et al., 2018) makes countering adversarial attacks important.\nGradient-based attacks use the classifier gradient to generate adversarial samples from nonadversarial samples. Gradient-based attacks minimize at the same time classifier adversarial loss and perturbation (Szegedy et al., 2013), though attacks can relax this minimization to allow for bigger perturbations, for example in the Carlini&Wagner attack (CW) (Carlini & Wagner, 2017c) for κ > 0, in the Projected Gradient Descent attack (PGD) (Kurakin et al., 2016; Madry et al., 2017), in FastGradientMethod (FGSM) (Goodfellow et al., 2014). Other gradient-based adversarial attacks include DeepFool (Moosavi-Dezfooli et al., 2016), Zeroth order optimization (ZOO) (Chen et al., 2017), Universal Adversarial Perturbation (UAP) (Moosavi-Dezfooli et al., 2017).\nMany recent proposed defenses have been broken (Athalye et al., 2018; Carlini & Wagner, 2016; 2017a;b; Tramer et al., 2020). They fall largely into these categories: (1) adversarial sample detection, (2) gradient masking and obfuscation, (3) ensemble, (4) customized loss. Detection defenses (Meng & Chen, 2017; Ma et al., 2018; Li et al., 2019; Hu et al., 2019) aim to detect, cor-\nrect or reject adversarial samples. Many detection defenses have been broken (Carlini & Wagner, 2017b;a; Tramer et al., 2020). Gradient obfuscation is aimed at preventing gradient-based attacks from access to the gradient and can be achieved by shattering gradients (Guo et al., 2018; Verma & Swami, 2019; Sen et al., 2020), randomness (Dhillon et al., 2018; Li et al., 2019) or vanishing or exploding gradients (Papernot et al., 2016a; Song et al., 2018; Samangouei et al., 2018). Many gradient obfuscation methods have also been successfully defeated (Carlini & Wagner, 2016; Athalye et al., 2018; Tramer et al., 2020). Ensemble defenses (Tramèr et al., 2017a; Verma & Swami, 2019; Pang et al., 2019; Sen et al., 2020) have also been broken (Carlini & Wagner, 2016; Tramer et al., 2020), unable to even outperform their best performing component. Customized attack losses defeat defenses (Tramer et al., 2020) with customized losses (Pang et al., 2020; Verma & Swami, 2019) but also, for example ensembles (Sen et al., 2020). Even though it has not been defeated, Adversarial Training (Kurakin et al., 2016; Szegedy et al., 2013; Madry et al., 2017) assumes that the attack is known in advance and takes time to generate adversarial samples at every iteration. 
The inability of recent defenses to counter adversarial attacks calls for new kinds of defensive approaches.\nIn this paper, we make the following major contributions:\n• We develop Target Training - a novel, white-box adversarial defense that converts untargeted gradient-based attacks into attacks targeted at designated target classes, from which correct classes are derived. Target Training is based on the minimization at the core of untargeted gradient-based adversarial attacks.\n• For all attacks that minimize perturbation, we eliminate the need to know the attack or to generate adversarial samples during training.\n• We show that Target Training withstands non-L∞ adversarial attacks without resorting to increased network capacity. With default accuracy of 84.3% in CIFAR10, Target Training achieves 86.6% against the DeepFool attack, and 83.2% against the CW-L2(κ=0) attack without using adversarial samples and against an adaptive attack aware of our defense. Against an adaptive CW-L2(κ=40) attack, we achieve 75.6% while using adversarial samples. Our choice of low-capacity classifiers makes Target Training not withstand L∞ adaptive attacks, except for CW-L∞(κ=0) in MNIST.\n• We conclude that Adversarial Training might not be defending by populating sparse areas with samples, but by minimizing the same minimization that Target Training minimizes." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Here, we present the state-of-the-art in adversarial attacks and defenses, as well as a summary.\nNotation A k-class neural network classifier that has θ parameters is denoted by a function f(x) that takes input x ∈ Rd and outputs y ∈ Rk , where d is the dimensionality and k is the number of classes. An adversarial sample is denoted by xadv . Classifier output is y, yi is the probability that the input belongs to class i. Norms are denoted as L0, L2 and L∞." }, { "heading": "2.1 ADVERSARIAL ATTACKS", "text": "Szegedy et al. (2013) were first to formulate the generation of adversarial samples as a constrained minimization of the perturbation under an Lp norm. Because this formulation can be hard to solve, Szegedy et al. (2013) reformulated the problem as a gradient-based, two-term minimization of the weighted sum of perturbation, and classifier loss. For untargeted attacks, this minimization is:\nminimize c · ‖xadv − x‖22 + lossf (xadv) (Minimization 1) subject to xadv ∈ [0, 1]n\nwhere f is the classifier, lossf is classifier loss on adversarial input, and c a constant value evaluated in the optimization. Term (1) is a norm to ensure a small adversarial perturbation. Term (2) utilizes the classifier gradient to find adversarial samples that minimize classifier adversarial loss.\nMinimization 1 is the foundation for many gradient-based attacks, though many tweaks can and have been applied. Some attacks follow Minimization 1 implicitly (Moosavi-Dezfooli et al., 2016),\nand others explicitly (Carlini & Wagner, 2017c). The type of Lp norm in term (1) of the minimization also varies. For example the CW attack (Carlini & Wagner, 2017c) uses L0, L2 and L∞, whereas DeepFool (Moosavi-Dezfooli et al., 2016) uses the L2 norm. A special perturbation case is the Pixel attack by Su et al. (2019) which changes exactly one pixel. Some attacks even exclude term (1) from the Minimization 1 and introduce an external parameter to control perturbation. The FGSM attack by Goodfellow et al. 
(2014), for example, uses an ε parameter, while the CW attack (Carlini & Wagner, 2017c) uses a κ confidence parameter.\n\nThe Fast Gradient Sign Method by Goodfellow et al. (2014) is a simple, L∞-bounded attack that constructs adversarial samples by perturbing each input dimension in the direction of the gradient by a magnitude of ε: xadv = x + ε · sign(∇xloss(θ, x, y)). The current strongest attack is CW (Carlini & Wagner, 2017c). CW customizes Minimization 1 by passing c to the second term, and using it to tune the relative importance of the terms. With a further change of variable, CW obtains an unconstrained minimization problem that allows it to optimize directly through back-propagation. In addition, CW has a κ parameter for controlling the confidence of the adversarial samples. For κ > 0 and up to 100, the CW attack allows for more perturbation in the adversarial samples it generates.\n\nThe DeepFool attack by Moosavi-Dezfooli et al. (2016) follows Minimization 1 implicitly. DeepFool (Moosavi-Dezfooli et al., 2016) looks at the smallest distance of a point from the classifier decision boundary as the minimum amount of perturbation needed to change its classification. DeepFool approximates the classifier with a linear one, estimates the distance from the linear boundary, and then takes steps in the direction of the closest boundary until an adversarial sample is found.\n\nBlack-box attacks Black-box attacks assume no access to classifier gradients. Such attacks with access to output class probabilities are called score-based attacks, for example the ZOO attack (Chen et al., 2017), a black-box variant of the CW attack (Carlini & Wagner, 2017c). Attacks with access to only the final class label are decision-based attacks, for example the Boundary (Brendel et al., 2017) and the HopSkipJumpAttack (Chen et al., 2019) attacks.\n\nMulti-step attacks The PGD attack (Kurakin et al., 2016) is an iterative method with an α parameter that determines a step-size perturbation magnitude. PGD starts at a random point x0, projects the perturbation on an Lp-ball B at each iteration: x(j + 1) = ProjB(x(j) + α · sign(∇xloss(θ, x(j), y))). The BIM attack (Kurakin et al., 2016) applies FGSM (Goodfellow et al., 2014) iteratively with an α step. To find a universal perturbation, UAP (Moosavi-Dezfooli et al., 2017) iterates over the images and aggregates perturbations calculated as in DeepFool." }, { "heading": "2.2 ADVERSARIAL DEFENSES", "text": "Adversarial Training. Adversarial Training (Szegedy et al., 2013; Kurakin et al., 2016; Madry et al., 2017) is one of the first and few, undefeated defenses. It defends by populating low probability, so-called blind spots (Szegedy et al., 2013; Goodfellow et al., 2014) with adversarial samples labelled correctly, redrawing boundaries. The drawback of Adversarial Training is that it needs to know the attack in advance, and it needs to generate adversarial samples during training. The Adversarial Training algorithm 2 in the Appendix is based on Kurakin et al. (2016). Madry et al. (2017) formulate their defense as a robust optimization problem, and use adversarial samples to augment the training. Their solution, however, necessitates high-capacity classifiers - bigger models with more parameters.\n\nDetection defenses Such defenses detect adversarial samples implicitly or explicitly, then correct or reject them. So far, many detection defenses have been defeated.
For example, ten diverse detection methods (other network, PCA, statistical properties) were defeated by attack loss customization (Carlini & Wagner, 2017a); Tramer et al. (2020) used attack customization against (Hu et al., 2019); attack transferability (Carlini & Wagner, 2017b) was used against MagNet by Meng & Chen (2017); deep feature adversaries (Sabour et al., 2016) against (Roth et al., 2019).\nGradient masking and obfuscation Many defenses that mask or obfuscate the classifier gradient have been defeated (Carlini & Wagner, 2016; Athalye et al., 2018). Athalye et al. (2018) identify three types of gradient obfuscation: (1) Shattered gradients - incorrect gradients caused by nondifferentiable components or numerical instability, for example with multiple input transformations by Guo et al. (2018). Athalye et al. (2018) counter such defenses with Backward Pass Differentiable\nApproximation. (2) Stochastic gradients in randomized defenses are overcome with Expectation Over Transformation (Athalye et al., 2017) by Athalye et al. Examples are Stochastic Activation Pruning (Dhillon et al., 2018), which drops layer neurons based on a weighted distribution, and (Xie et al., 2018) which adds a randomized layer to the classifier input. (3) Vanishing or exploding gradients are used, for example, in Defensive Distillation (DD) (Papernot et al., 2016a) which reduces the amplitude of gradients of the loss function. Other examples are PixelDefend (Song et al., 2018) and Defense-GAN (Samangouei et al., 2018). Vanishing or exploding gradients are broken with parameters that avoid vanishing or exploding gradients (Carlini & Wagner, 2016).\nComplex defenses Defenses combining several approaches, for example (Li et al., 2019) which uses detection, randomization, multiple models and losses, can be defeated by focusing on the main defense components (Tramer et al., 2020).(Verma & Swami, 2019; Pang et al., 2019; Sen et al., 2020) are defeated ensemble defenses combined with numerical instability (Verma & Swami, 2019), regularization (Pang et al., 2019), or mixed precision on weights and activations (Sen et al., 2020)." }, { "heading": "2.3 SUMMARY", "text": "Many defenses have been broken. They focus on changing the classifier. Instead, our defense changes the classifier minimally, but forces attacks to change convergence. Target Training is the first defense based on Minimization 1 at the core of untargeted gradient-based adversarial attacks." }, { "heading": "3 TARGET TRAINING", "text": "Target Training converts untargeted attacks to attacks targeted at designated classes, from which correct classification is derived. Untargeted gradient-based attacks are based on Minimization 1 (on page 2) of the sum of (1) perturbation and (2) classifier adversarial loss. Target Training trains the classifier with samples of designated classes that minimize both terms of the minimization at the same time. These samples are exactly what adversarial attacks look for, based on Minimization 1: nearby points (at 0 distance) that minimize adversarial loss. For attacks that relax the minimization by removing the perturbation term, we adjust Target Training to use adversarial samples against attacks that do not minimize perturbation.\nTarget Training eliminates the need to know the attack or to generate adversarial samples against attacks that minimize perturbation. Instead of adversarial samples, we use original samples labelled as designated classes because they have minimum 0-distance from original samples. 
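For reference, Minimization 1 from Section 2.1 can be restated compactly as:

```latex
\begin{aligned}
\underset{x_{adv}}{\text{minimize}} \quad & c \cdot \lVert x_{adv} - x \rVert_2^{2} \;+\; \mathrm{loss}_f(x_{adv}) \\
\text{subject to} \quad & x_{adv} \in [0, 1]^{n}
\end{aligned}
```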
Following, we outline how Target Training minimizes both terms of Minimization 1 simultaneously.\nTerm (1) of the minimization - perturbation. Against attacks that do minimize perturbation, such as CW (κ = 0) and DeepFool, Target Training uses duplicates of original samples in each batch instead of adversarial samples because these samples minimize the perturbation to 0 - no other points can have smaller distance. This eliminates the overhead of calculating adversarial samples against all attacks that minimize perturbation. Algorithm 1 shows Target Training against attacks that minimize perturbations. Against attacks that do not minimize perturbation, such as CW (κ > 0), PGD and FGSM, Target Training replaces duplicated samples with adversarial samples from the attack. The adjusted algorithm is shown in Algorithm 3 in the Appendix.\nAlgorithm 1 Target Training of classifier N against attacks that minimize perturbation Require: Batch size is m, number of dataset classes is k, untrained classifier N with 2k output\nclasses, TRAIN trains a classifier on a batch and its ground truth Ensure: Classifier N is Target-Trained against all attacks that minimize perturbation\nwhile training not converged do B = {x1, ..., xm} . Get random batch G = {y1, ..., ym} . Get batch ground truth B′ = {x1, ..., xm, x1, ..., xm} . Duplicate batch G′ = {y1, ..., ym, y1 + k, ..., ym + k} . Duplicate ground truth and increase duplicates by k TRAIN(N , B′, G′) . Train classifier on duplicated batch and new ground truth end while\nTerm (2) of the minimization - classifier adversarial loss. In a converged, multi-class classifier, the probabilities of adversarial classes are ∼ 0 with no distinction among them (the real class has high ∼ 1 probability). This causes untargeted attacks to converge to samples of whichever highest probability adversarial class, due to the minimization of adversarial loss in term (2). In order to force attack convergence to designated classes, a classifier would need to train so that one of the adversarial classes has higher probability than the other adversarial classes. Which means that the classifier would need to have two high probability outputs: the real class, and the designated adversarial class.\nTarget Training. Against attacks that minimize perturbation, Target Training duplicates training samples in each batch and labels duplicates with designated classes. Against attacks that do not minimize perturbation, adversarial samples are used instead of duplicates. As a result, the real class and the designated class have equal inference probabilities of ∼ 0.5, as shown in Figure 1. Since attacks minimize adversarial loss in term (2) of Minimization 1, attacks converge to adversarial samples from the designated class. The same samples also minimize term (1) because the classifier was trained with: samples with 0 distance, or adversarial samples close to original samples. It does not affect Target Training whether the c constant of the Minimization 1 is at term (1) or term (2) in an attack because both terms are minimized. Target Training could be extended to defend simultaneously against many attacks using a designated class for each type of attack. During training, adversarial samples would be labeled with the corresponding designated class. In this paper, we do not conduct experiments on simultaneous multi-attack defense.\nModel structure and inference. 
The only change to classifier structure is doubling the number of output classes from k to 2k, loss function remains standard softmax cross entropy. Inference calculation is: C(x) = argmax\ni (yi + yi+k), i ∈ [0 . . . (k − 1)]." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "Threat model We assume that the adversary goal is to generate adversarial samples that cause untargeted misclassification. We perform white-box evaluations, assuming the adversary has complete knowledge of the classifier and how the defense works. In terms of capabilities, we assume\nthat the adversary is gradient-based, has access to the CIFAR10 and MNIST image domains and is able to manipulate pixels. Against attacks that minimize perturbations, no adversarial samples are used in training and no further assumptions are made about attacks. For attacks that do not minimize perturbations, we assume that the attack is of the same kind as the attack used to generate the adversarial samples used during training. Perturbations are assumed to be Lp-constrained.\nAttack parameters For PGD, we use parameters based on the PGD paper Madry et al. (2017): 7 steps of size 2 with a total = 8 for CIFAR10, and 40 steps of size 0.01 with a total = 0.3 for MNIST. For all PGD attacks, we use 0 random initialisations within the ball, effectively starting PGD attacks from the original images. For CW, we use 1000 iterations by default but we also run experiments with up to 100000 iterations. Confidence values in CW range from 0 to 40. For the ZOO attack, we use parameters based on the ZOO attack paper (Chen et al., 2017): 9 binary search steps, 1000 iterations, initial constant c = 0.01. Additionally, for the ZOO attack, we use 200 randomly selected images from MNIST and CIFAR10 test sets, same as is done in ZOO (Chen et al., 2017) for untargeted attacks. For FGSM, = 0.3, as in (Madry et al., 2017).\nClassifier models We purposefully do not use high capacity models, such as ResNet (He et al., 2016), used for example by Madry et al. (2017). The reason is to show that Target Training does not necessitate high model capacity to defend against adversarial attacks. The architectures of MNIST and CIFAR datasets are shown in Table 5 in the Appendix, and no data augmentation was used. Default accuracies without attack are 99.1% for MNIST and 84.3% for CIFAR10.\nDatasets The MNIST (LeCun et al., 1998) and the CIFAR10 (Krizhevsky et al., 2009) datasets are 10-class datasets that have been used throughout previous work. The MNIST (LeCun et al., 1998) dataset has 60K, 28 × 28 × 1 hand-written, digit images. The CIFAR10 (Krizhevsky et al., 2009) dataset has 70K, 32× 32× 3 images. All experimental evaluations are with testing samples.\nTools Adversarial samples generated with CleverHans 3.0.1 (Papernot et al., 2018) for CW (Carlini & Wagner, 2017c), DeepFool (Moosavi-Dezfooli et al., 2016), FGSM (Goodfellow et al., 2014) attacks and IBM Adversarial Robustness 360 Toolbox (ART) toolbox 1.2 (Nicolae et al., 2018) for other attacks. Target Training is written in Python 3.7.3, using Keras 2.2.4 (Chollet et al., 2015)." }, { "heading": "4.1 TARGET TRAINING AGAINST ATTACKS THAT MINIMIZE PERTURBATION", "text": "We compare Target Training with unsecured classifier as well as Adversarial Training, since other defenses have been defeated (Carlini & Wagner, 2017b;a; 2016; Athalye et al., 2018; Tramer et al., 2020) successfully. 
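Before turning to the results, the batch construction of Algorithm 1 and the inference rule above can be made concrete with a short sketch (plain NumPy rather than the Keras code used in our experiments; the helper names are illustrative).

```python
import numpy as np

def target_training_batch(x_batch, y_batch, k):
    """Duplicate a batch and relabel the duplicates with the designated classes
    (class i -> i + k), as in Algorithm 1 for attacks that minimize perturbation.
    x_batch: (m, ...) inputs; y_batch: (m,) integer labels in [0, k)."""
    x_dup = np.concatenate([x_batch, x_batch], axis=0)
    y_dup = np.concatenate([y_batch, y_batch + k], axis=0)
    return x_dup, y_dup

def target_training_predict(probs_2k, k):
    """Inference for a Target-Trained classifier with 2k softmax outputs:
    C(x) = argmax_i (y_i + y_{i+k}), i in [0, k)."""
    return np.argmax(probs_2k[..., :k] + probs_2k[..., k:2 * k], axis=-1)
```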
Table 1 shows that, while not using adversarial samples against adversarial attacks that minimize perturbation, Target Training exceeds by far accuracies by unsecured classifier and Adversarial Training in both CIFAR10 and MNIST. In CIFAR10, Target Training even exceeds the accuracy of the unsecured classifier on non-adversarial samples (84.3%) for most attacks." }, { "heading": "4.2 TARGET TRAINING AGAINST ATTACKS THAT DO NOT MINIMIZE PERTURBATION", "text": "Against adversarial attacks that do not minimize perturbation, Target Training uses adversarial samples and performs slightly better than Adversarial Training. We choose Adversarial Training as a baseline as an undefeated adversarial defense, more details in Section 2. Our implementation of Adversarial Training is based on (Kurakin et al., 2016), shown in Algorithm 2. In Table 2, we show that Target Training performs slightly better than Adversarial Training against attacks that do not minimize perturbation.\nIn Table 6 in the Appendix, we show that both Target Training and Adversarial Training are able to defend in some cases against attacks, the adversarial samples of which have not been used during training - Target Training in more cases than Adversarial Training." }, { "heading": "4.3 TARGET TRAINING PERFORMANCE ON ORIGINAL, NON-ADVERSARIAL SAMPLES", "text": "In Table 7 in the Appendix, we show that Target Training exceeds default classifier accuracy on original, non-adversarial images when trained without adversarial samples against attacks that minimize perturbation: 86.7% (up from 84.3%) in CIFAR10. Furthermore, Table 7 shows that when\nusing adversarial samples against attacks that do not minimize perturbation, Target Training equals Adversarial Training performance on original, non-adversarial images. This holds for both MNIST and CIFAR10." }, { "heading": "4.4 SUMMARY OF RESULTS", "text": "Our Section 4.1 experiments show that we substantially improve performance against attacks that minimize perturbation without using adversarial samples, surpassing even default accuracy for CIFAR10. Section 4.2 experiments show that against attacks that do not minimize perturbation, Target Training has slightly better performance than the current non-broken defense, Adversarial Training." }, { "heading": "4.5 TRANSFERABILITY ANALYSIS", "text": "For a defense to be strong, it needs to be shown to break the transferability of attacks. A good source of adversarial samples for transferability is the unsecured classifier (Carlini et al., 2019). We experiment on the transferability of attacks from the unsecured classifier to a classifier secured with Target Training. In Table 3, we show that Target Training breaks the transferability of adversarial samples generated by attacks that minimize perturbation. Against attacks that do not minimize perturbation, the transferability is broken in MNIST, and partially in CIFAR10." }, { "heading": "5 ADAPTIVE EVALUATION", "text": "Many recent defenses have failed to anticipate attacks that have defeated them (Carlini et al., 2019; Carlini & Wagner, 2017a; Athalye et al., 2018). To avoid that, we perform an adaptive evaluation (Carlini et al., 2019; Tramer et al., 2020) of our Target Training defense.\nWhether Target Training could be defeated by methods used to break other defenses. 
Adaptive attack approaches (Carlini & Wagner, 2017b;a; 2016; Athalye et al., 2018; Tramer et al., 2020) used to defeat most current defenses cannot break Target Training defense because we use none of the previous defense approaches: adversarial sample detection, preprocessing, obfuscation, ensemble, customized loss, subcomponent, non-differentiable component, or special model layers. We also keep the loss function simple - standard softmax cross-entropy and no additional loss.\nAdaptive attack against Target Training. We consider an adversarial attack that is aware that a classifier uses Target Training. The attack adds a new layer at the top of the classifier, with 20 inputs and 10 outputs, where output i is the sum of input i and i + k. The classifier with the extra layer is used to generate adversarial samples that are tested on the original classifier. We summarize the results of the adaptive attack in Table 4.\nTable 4 shows that Target Training withstands the adaptive attack when the norm of the attack is not L∞, even surpassing default classifier accuracy on non-adversarial images for DeepFool (L2) in CIFAR10. For L∞-based attacks, PGD and CW-L∞(κ = 0), Target Training performace decreases. Withstanding L∞ attacks has previously been shown by Madry et al. (2017) to require high-capacity architecture in classifiers. We suspect that the low capacity of our classifiers causes Target Training to not defend the classifiers against L∞ attacks. The capacity of our classifiers was deliberately chosen to be low in order to investigate whether Target Training can defend low-capacity classifiers.\nHow does Target Training withstand this adaptive attack when the norm is not L∞? The addition of the extra layer to the Target-Trained classifier effectively converts the classifier into an AdversarialTrained classifier: the classifier was originally trained with additional nearby samples that are now labelled correctly. As a result, the adversarial samples generated from the extra-layer classifier are non-adversarial, as in Adversarial Training, and classify correctly in the original classifier. Therefore, what makes Adversarial Training work, also makes Target Training work in this adaptive attack.\nOther adaptive attack against Target Training. Most current untargeted attacks, including the strongest one, CW, are based on Minimization 1. But this minimization is a simplification of the hard problem of generating adversarial attacks, as outlined by Szegedy et al. (2013). In order to defeat Target Training, attacks would need to solve the much harder problem of generating adversarial attacks without using gradients. This remains an open problem.\nIterative attacks. The multi-step PGD (Kurakin et al., 2016) attack decreases Target Training accuracy more than single-step attacks, which suggests that our defense is working correctly, according to Carlini et al. (2019).\nTransferability. We conduct transferability attacks to show that Target Training can withstand attacks that use another classifier to generate adversarial samples. Table 3 shows that Target Training breaks the transferability of attacks generated using an unsecured classifier, as defenses should according to Carlini et al. (2019).\nStronger CW attack leads to better Target Training accuracy. Many attacks fail to defend from attacks when attacks become stronger. Increasing iterations for CW-L2(κ = 0) 100-fold from 1K to 100K increases our defense’s accuracy. 
In CIFAR10, the accuracy increases from 85.6% to 86.2%, in MNIST from 96.3% to 96.6%. Target Training accuracy increases because higher number of iterations means that the attack converges better to samples of the designated classes, leading to higher accuracy.\nNo gradient masking or obfuscation. The fact that Target Training defends against black-box ZOO attack is also an indicator that Target Training does not do gradient masking or obfuscation according to Carlini et al. (2019)." }, { "heading": "6 DISCUSSION AND CONCLUSIONS", "text": "In conclusion, Target Training enables low-capacity classifiers to defend against non-L∞ attacks, in the case of attacks that minimize perturbation even without using adversarial samples. Target Training achieves this by replacing adversarial samples with original samples, which minimize the adversarial perturbation better. This eliminates the need to know the attack in advance, and the overhead of adversarial samples, for all attacks that minimize perturbation. In contrast, the previous non-broken Adversarial Training defense needs to know the attack and to generate adversarial samples of the attack during training. In addition, Target Training minimizes adversarial loss using designated classes. Our experiments show that Target Training can even exceed default accuracy on non-adversarial samples in CIFAR10, when against non-L∞ adaptive attacks that are aware of Target Training defense. We attribute L∞ adaptive attack performance to low classifier capacity. In CIFAR10, Target Training achieves 86.6% against the adaptive DeepFool attack without using adversarial samples, exceeding default accuracy of 84.3%. Against the CW-L2(κ=0) adaptive attack and without using adversarial samples, Target Training achieves 83.2%. Against adaptive CWL2(κ=40) attack, we achieve 75.6% while using adversarial samples.\nTarget Training resilience to non-L∞ adaptive attacks can offer a different explanation for how Adversarial Training defends classifiers. The commonly-accepted explanation is that Adversarial Training defends by populating low-density areas with adversarial samples labeled correctly. However, the adaptive attack in Section 5 fails for CW-L2(κ = 0) and DeepFool attacks when using what has become an Adversarial-Trained classifier trained with only original samples instead of adversarial samples. The failure of the adaptive attack when using this Adversarial-Trained classifier trained without adversarial samples raises a question. Might it be that Adversarial Training defends not by filling out the space with more adversarial samples labeled correctly, but in the same way that Target Training does: by minimizing the terms of Minimization 1? If the answer were yes, it would also explain the similarity of the results of Target Training and Adversarial Training against attacks that do not minimize perturbation." } ]
2020
TARGET TRAINING: TRICKING ADVERSARIAL ATTACKS TO FAIL
SP:8e2ac7405015f9d2d59c4a511df83d796ac00a9e
[ "This paper proposes the signal propagation plot (SPP), a tool for analyzing residual networks, and uses it to analyze ResNets with and without BN. Based on this investigation, the authors first present ResNet results without normalization, using the proposed Scaled Weight Standardization. Furthermore, the authors provide a set of models based on RegNetY-400MF, which appear to be highly tuned in terms of architecture design, that are competitive with EfficientNets." ]
Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. Building on recent theoretical analyses of deep ResNets at initialization, we propose a simple set of analysis tools to characterize signal propagation on the forward pass, and leverage these tools to design highly performant ResNets without activation normalization layers. Crucial to our success is an adapted version of the recently proposed Weight Standardization. Our analysis tools show how this technique preserves the signal in networks with ReLU or Swish activation functions by ensuring that the per-channel activation means do not grow with depth. Across a range of FLOP budgets, our networks attain performance competitive with the state-of-the-art EfficientNets on ImageNet. Our code is available at http://dpmd.ai/nfnets.
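As a rough illustration of the adapted Weight Standardization mentioned above, the following PyTorch-style sketch standardizes each output channel's weights over its fan-in and applies an activation-dependent gain. It is not the authors' implementation, and the example ReLU gain value is an assumption rather than something stated in the abstract.

```python
import math
import torch
import torch.nn.functional as F

def scaled_ws_conv2d(x, weight, bias=None, gain=1.0, eps=1e-4, **conv_kwargs):
    """Convolution with weight-standardized filters: each output channel's weights
    are given zero mean and unit variance over the fan-in, divided by sqrt(fan_in),
    and multiplied by an activation-dependent gain."""
    out_ch = weight.shape[0]
    w = weight.reshape(out_ch, -1)
    fan_in = w.shape[1]
    mean = w.mean(dim=1, keepdim=True)
    var = w.var(dim=1, keepdim=True, unbiased=False)
    w = gain * (w - mean) / torch.sqrt(var * fan_in + eps)
    return F.conv2d(x, w.reshape_as(weight), bias, **conv_kwargs)

# Example (assumed) gain: the variance-preserving choice for a ReLU nonlinearity
# under a unit Gaussian input is sqrt(2 / (1 - 1/pi)).
relu_gain = math.sqrt(2.0 / (1.0 - 1.0 / math.pi))
```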
[ { "affiliations": [], "name": "Andrew Brock" }, { "affiliations": [], "name": "Soham De" }, { "affiliations": [], "name": "Samuel L. Smith" } ]
[ { "authors": [ "Devansh Arpit", "Yingbo Zhou", "Bhargava Kota", "Venu Govindaraju" ], "title": "Normalization propagation: A parametric technique for removing internal covariate shift in deep networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Thomas Bachlechner", "Bodhisattwa Prasad Majumder", "Huanru Henry Mao", "Garrison W Cottrell", "Julian McAuley" ], "title": "Rezero is all you need: Fast convergence at large depth", "venue": null, "year": 2003 }, { "authors": [ "David Balduzzi", "Marcus Frean", "Lennox Leary", "JP Lewis", "Kurt Wan-Duo Ma", "Brian McWilliams" ], "title": "The shattered gradients problem: If resnets are the answer, then what is the question", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nils Bjorck", "Carla P Gomes", "Bart Selman", "Kilian Q Weinberger" ], "title": "Understanding batch normalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Soham De", "Sam Smith" ], "title": "Batch normalization biases residual blocks towards the identity function in deep networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Stefan Elfwing", "Eiji Uchibe", "Kenji Doya" ], "title": "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent-a new approach to self-supervised learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martı́n Arjovsky", "Vincent Dumoulin", "Aaron C. 
Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "How to start training: The effect of initialization and architecture", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tong He", "Zhi Zhang", "Hang Zhang", "Zhongyue Zhang", "Junyuan Xie", "Mu Li" ], "title": "Bag of tricks for image classification with convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "Gaussian error linear units (gelus)", "venue": "arXiv preprint arXiv:1606.08415,", "year": 2016 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Lei Huang", "Xianglong Liu", "Yang Liu", "Bo Lang", "Dacheng Tao" ], "title": "Centered weight normalization in accelerating training of deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 
2803–2811,", "year": 2017 }, { "authors": [ "Lei Huang", "Jie Qin", "Yi Zhou", "Fan Zhu", "Li Liu", "Ling Shao" ], "title": "Normalization techniques in training DNNs: Methodology, analysis and application", "venue": "arXiv preprint arXiv:2009.12836,", "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Freeze and chaos for dnns: an ntk view of batch normalization, checkerboard and boundary effects", "venue": null, "year": 1907 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Large scale learning of general visual representations for transfer", "venue": null, "year": 1912 }, { "authors": [ "Iro Laina", "Christian Rupprecht", "Vasileios Belagiannis", "Federico Tombari", "Nassir Navab" ], "title": "Deeper depth prediction with fully convolutional residual networks", "venue": "In 2016 Fourth international conference on 3D vision (3DV),", "year": 2016 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: stochastic gradient descent with warm restarts", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ping Luo", "Xinjiang Wang", "Wenqi Shao", "Zhanglin Peng" ], "title": "Towards understanding regularization in batch normalization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Dmytro Mishkin", "Jiri Matas" ], "title": "All you need is a good init", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Y. 
Nesterov" ], "title": "A method for unconstrained convex minimization problem with the rate of convergence O(1/k2)", "venue": "Doklady AN USSR,", "year": 1983 }, { "authors": [ "Art B Owen" ], "title": "A robust hybrid of lasso and ridge regression", "venue": null, "year": 2007 }, { "authors": [ "Hung Viet Pham", "Thibaud Lutellier", "Weizhen Qi", "Lin Tan" ], "title": "Cradle: cross-backend validation to detect and localize bugs in deep learning libraries", "venue": "IEEE/ACM 41st International Conference on Software Engineering (ICSE),", "year": 2019 }, { "authors": [ "Boris Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics, pp", "year": 1964 }, { "authors": [ "Haozhi Qi", "Chong You", "Xiaolong Wang", "Yi Ma", "Jitendra Malik" ], "title": "Deep isometric learning for visual recognition", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ilija Radosavovic", "Raj Prateek Kosaraju", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Designing network design spaces", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V. Le" ], "title": "Searching for activation functions", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The Annals of Mathematical Statistics,", "year": 1951 }, { "authors": [ "Samuel Rota Bulò", "Lorenzo Porzi", "Peter Kontschieder" ], "title": "In-place activated batchnorm for memory-optimized training of dnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Brendan Ruff", "Taylor Beck", "Joscha Bach" ], "title": "Mean shift rejection: Training deep neural networks without minibatch statistics or normalization", "venue": null, "year": 1911 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "ImageNet large scale visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nathan Silberman", "Derek Hoiem", "Pushmeet Kohli", "Rob Fergus" ], "title": "Indoor segmentation and support inference from rgbd images", "venue": "In European conference on computer vision,", "year": 2012 }, { "authors": [ "Samuel Smith", "Erich Elsen", "Soham De" ], "title": "On the generalization benefit of noise in stochastic gradient descent", "venue": "In International Conference on Machine Learning,", 
"year": 2020 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "C Szegedy", "V Vanhoucke", "S Ioffe", "J Shlens", "Z Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A. Alemi" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "In Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Masato Taki" ], "title": "Deep residual networks and weight initialization", "venue": "arXiv preprint arXiv:1709.02956,", "year": 2017 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hugo Touvron", "Andrea Vedaldi", "Matthijs Douze", "Hervé Jégou" ], "title": "Fixing the train-test resolution discrepancy", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Longhui Wei", "An Xiao", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Qi Tian" ], "title": "Circumventing outliers of autoaugment with knowledge distillation", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yunyang Xiong", "Hanxiao Liu", "Suyog Gupta", "Berkin Akin", "Gabriel Bender", "Pieter-Jan Kindermans", "Mingxing Tan", "Vikas Singh", "Bo Chen" ], "title": "Mobiledets: Searching for object detection architectures for mobile accelerators", "venue": null, "year": 2004 }, { "authors": [ "Ge Yang", "Samuel Schoenholz" ], "title": "Mean field residual networks: On the edge of chaos", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Greg Yang", "Jeffrey Pennington", "Vinay Rao", "Jascha Sohl-Dickstein", "Samuel S" ], "title": "Schoenholz. 
A mean field theory of batch normalization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference 2016,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Yann N. Dauphin", "Tengyu Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": null, "text": "Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. Building on recent theoretical analyses of deep ResNets at initialization, we propose a simple set of analysis tools to characterize signal propagation on the forward pass, and leverage these tools to design highly performant ResNets without activation normalization layers. Crucial to our success is an adapted version of the recently proposed Weight Standardization. Our analysis tools show how this technique preserves the signal in networks with ReLU or Swish activation functions by ensuring that the per-channel activation means do not grow with depth. Across a range of FLOP budgets, our networks attain performance competitive with the state-of-the-art EfficientNets on ImageNet. Our code is available at http://dpmd.ai/nfnets." }, { "heading": "1 INTRODUCTION", "text": "BatchNorm has become a core computational primitive in deep learning (Ioffe & Szegedy, 2015), and it is used in almost all state-of-the-art image classifiers (Tan & Le, 2019; Wei et al., 2020). A number of different benefits of BatchNorm have been identified. It smoothens the loss landscape (Santurkar et al., 2018), which allows training with larger learning rates (Bjorck et al., 2018), and the noise arising from the minibatch estimates of the batch statistics introduces implicit regularization (Luo et al., 2019). Crucially, recent theoretical work (Balduzzi et al., 2017; De & Smith, 2020) has demonstrated that BatchNorm ensures good signal propagation at initialization in deep residual networks with identity skip connections (He et al., 2016b;a), and this benefit has enabled practitioners to train deep ResNets with hundreds or even thousands of layers (Zhang et al., 2019).\nHowever, BatchNorm also has many disadvantages. Its behavior is strongly dependent on the batch size, performing poorly when the per device batch size is too small or too large (Hoffer et al., 2017), and it introduces a discrepancy between the behaviour of the model during training and at inference time. BatchNorm also adds memory overhead (Rota Bulò et al., 2018), and is a common source of implementation errors (Pham et al., 2019). In addition, it is often difficult to replicate batch normalized models trained on different hardware. A number of alternative normalization layers have been proposed (Ba et al., 2016; Wu & He, 2018), but typically these alternatives generalize poorly or introduce their own drawbacks, such as added compute costs at inference.\nAnother line of work has sought to eliminate layers which normalize hidden activations entirely. A common trend is to initialize residual branches to output zeros (Goyal et al., 2017; Zhang et al., 2019; De & Smith, 2020; Bachlechner et al., 2020), which ensures that the signal is dominated by the skip path early in training. However while this strategy enables us to train deep ResNets with thousands of layers, it still degrades generalization when compared to well-tuned baselines (De & Smith, 2020). 
These simple initialization strategies are also not applicable to more complicated architectures like EfficientNets (Tan & Le, 2019), the current state of the art on ImageNet (Russakovsky et al., 2015).\nThis work seeks to establish a general recipe for training deep ResNets without normalization layers, which achieve test accuracy competitive with the state of the art. Our contributions are as follows:\n• We introduce Signal Propagation Plots (SPPs): a simple set of visualizations which help us inspect signal propagation at initialization on the forward pass in deep residual networks. Leveraging these SPPs, we show how to design unnormalized ResNets which are constrained to have signal propagation properties similar to batch-normalized ResNets.\n• We identify a key failure mode in unnormalized ResNets with ReLU or Swish activations and Gaussian weights. Because the mean output of these non-linearities is positive, the squared mean of the hidden activations on each channel grows rapidly as the network depth increases. To resolve this, we propose Scaled Weight Standardization, a minor modification of the recently proposed Weight Standardization (Qiao et al., 2019; Huang et al., 2017b), which prevents the growth in the mean signal, leading to a substantial boost in performance.\n• We apply our normalization-free network structure in conjunction with Scaled Weight Standardization to ResNets on ImageNet, where we for the first time attain performance which is comparable or better than batch-normalized ResNets on networks as deep as 288 layers.\n• Finally, we apply our normalization-free approach to the RegNet architecture (Radosavovic et al., 2020). By combining this architecture with the compound scaling strategy proposed by Tan & Le (2019), we develop a class of models without normalization layers which are competitive with the current ImageNet state of the art across a range of FLOP budgets." }, { "heading": "2 BACKGROUND", "text": "Deep ResNets at initialization: The combination of BatchNorm (Ioffe & Szegedy, 2015) and skip connections (Srivastava et al., 2015; He et al., 2016a) has allowed practitioners to train deep ResNets with hundreds or thousands of layers. To understand this effect, a number of papers have analyzed signal propagation in normalized ResNets at initialization (Balduzzi et al., 2017; Yang et al., 2019). In a recent work, De & Smith (2020) showed that in normalized ResNets with Gaussian initialization, the activations on the `th residual branch are suppressed by factor of O( √ `), relative to the scale of the activations on the skip path. This biases the residual blocks in deep ResNets towards the identity function at initialization, ensuring well-behaved gradients. In unnormalized networks, one can preserve this benefit by introducing a learnable scalar at the end of each residual branch, initialized to zero (Zhang et al., 2019; De & Smith, 2020; Bachlechner et al., 2020). This simple change is sufficient to train deep ResNets with thousands of layers without normalization. However, while this method is easy to implement and achieves excellent convergence on the training set, it still achieves lower test accuracies than normalized networks when compared to well-tuned baselines.\nThese insights from studies of batch-normalized ResNets are also supported by theoretical analyses of unnormalized networks (Taki, 2017; Yang & Schoenholz, 2017; Hanin & Rolnick, 2018; Qi et al., 2020). 
These works suggest that, in ResNets with identity skip connections, if the signal does not explode on the forward pass, the gradients will neither explode nor vanish on the backward pass. Hanin & Rolnick (2018) conclude that multiplying the hidden activations on the residual branch by a factor of O(1/d) or less, where d denotes the network depth, is sufficient to ensure trainability at initialization.\nAlternate normalizers: To counteract the limitations of BatchNorm in different situations, a range of alternative normalization schemes have been proposed, each operating on different components of the hidden activations. These include LayerNorm (Ba et al., 2016), InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018), and many more (Huang et al., 2020). While these alternatives remove the dependency on the batch size and typically work better than BatchNorm for very small batch sizes, they also introduce limitations of their own, such as introducing additional computational costs during inference time. Furthermore for image classification, these alternatives still tend to achieve lower test accuracies than well-tuned BatchNorm baselines. As one exception, we note that the combination of GroupNorm with Weight Standardization (Qiao et al., 2019) was recently identified as a promising alternative to BatchNorm in ResNet-50 (Kolesnikov et al., 2019)." }, { "heading": "3 SIGNAL PROPAGATION PLOTS", "text": "Although papers have recently theoretically analyzed signal propagation in ResNets (see Section 2), practitioners rarely empirically evaluate the scales of the hidden activations at different depths in-\nside a specific deep network when designing new models or proposing modifications to existing architectures. By contrast, we have found that plotting the statistics of the hidden activations at different points inside a network, when conditioned on a batch of either random Gaussian inputs or real training examples, can be extremely beneficial. This practice both enables us to immediately detect hidden bugs in our implementation before launching an expensive training run destined to fail, and also allows us to identify surprising phenomena which might be challenging to derive from scratch.\nWe therefore propose to formalize this good practice by introducing Signal Propagation Plots (SPPs), a simple graphical method for visualizing signal propagation on the forward pass in deep ResNets. We assume identity residual blocks of the form x`+1 = f`(x`) + x`, where x` denotes the input to the `th block and f` denotes the function computed by the `th residual branch. We consider 4- dimensional input and output tensors with dimensions denoted by NHWC, where N denotes the batch dimension, C denotes the channels, and H and W denote the two spatial dimensions. To generate SPPs, we initialize a single set of weights according to the network initialization scheme, and then provide the network with a batch of input examples sampled from a unit Gaussian distribution. Then, we plot the following hidden activation statistics at the output of each residual block:\n• Average Channel Squared Mean, computed as the square of the mean across the NHW axes, and then averaged across the C axis. In a network with good signal propagation, we would expect the mean activations on each channel, averaged across a batch of examples, to be close to zero. 
Importantly, we note that it is necessary to measure the averaged squared value of the mean, since the means of different channels may have opposite signs.\n• Average Channel Variance, computed by taking the channel variance across the NHW axes, and then averaging across the C axis. We generally find this to be the most informative measure of the signal magnitude, and to clearly show signal explosion or attenuation.\n• Average Channel Variance on the end of the residual branch, before merging with the skip path. This helps assess whether the layers on the residual branch are correctly initialized.\nWe explore several other possible choices of statistics one could measure in Appendix G, but we have found these three to be the most informative. We also experiment with feeding the network real data samples instead of random noise, but find that this step does not meaningfully affect the key trends. We emphasize that SPPs do not capture every property of signal propagation, and they only consider the statistics of the forward pass. Despite this simplicity, SPPs are surprisingly useful for analyzing deep ResNets in practice. We speculate that this may be because in ResNets, as discussed in Section 2 (Taki, 2017; Yang & Schoenholz, 2017; Hanin & Rolnick, 2018), the backward pass will typically neither explode nor vanish so long as the signal on the forward pass is well behaved.\nAs an example, in Figure 1 we present the SPP for a 600-layer pre-activation ResNet (He et al., 2016a)1 with BatchNorm, ReLU activations, and He initialization (He et al., 2015). We compare the standard BN-ReLU-Conv ordering to the less common ReLU-BN-Conv ordering. Immediately, several key patterns emerge. First, we note that the Average Channel Variance grows linearly with the depth in a given stage, and resets at each transition block to a fixed value close to 1. The linear growth arises because, at initialization, the variance of the activations satisfy Var(x`+1) = Var(x`) + Var(f`(x`)), while BatchNorm ensures that the variance of the activations at the end\n1See Appendix E for an overview of ResNet blocks and their order of operations.\nof each residual branch is independent of depth (De & Smith, 2020). The variance is reset at each transition block because in these blocks the skip connection is replaced by a convolution operating on a normalized input, undoing any signal growth on the skip path in the preceding blocks.\nWith the BN-ReLU-Conv ordering, the Average Squared Channel Means display similar behavior, growing linearly with depth between transition blocks. This may seem surprising, since we expect BatchNorm to center the activations. However with this ordering the final convolution on a residual branch receives a rectified input with positive mean. As we show in the following section, this causes the outputs of the branch on any single channel to also have non-zero mean, and explains why Var(f`(x`)) ≈ 0.68 for all depths `. Although this “mean-shift” is explicitly counteracted by the normalization layers in subsequent residual branches, it will have serious consequences when attempting to remove normalization layers, as discussed below. In contrast, the ReLU-BN-Conv ordering trains equally stably while avoiding this mean-shift issue, with Var(f`(x`)) ≈ 1 for all `." 
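A minimal sketch of how these three statistics can be collected to produce an SPP, consistent with the reference snippets in Appendix F. The helper run_blocks is a hypothetical placeholder that performs a forward pass on a batch of unit Gaussian inputs and yields, for each residual block, the block output and the residual-branch output before the skip merge.

import numpy as np

def spp_stats(y, f_x):
    # y: block output (NHWC); f_x: residual-branch output before merging with the skip path.
    avg_sq_mean = np.mean(np.mean(y, axis=(0, 1, 2)) ** 2)   # Average Channel Squared Mean
    avg_var = np.mean(np.var(y, axis=(0, 1, 2)))             # Average Channel Variance
    res_var = np.mean(np.var(f_x, axis=(0, 1, 2)))           # Residual Average Channel Variance
    return avg_sq_mean, avg_var, res_var

def signal_propagation_plot(run_blocks, input_shape=(64, 224, 224, 3)):
    # Feed unit Gaussian inputs through a freshly initialized network and record the
    # statistics at every block depth; run_blocks is a hypothetical helper.
    x = np.random.randn(*input_shape)
    stats = [spp_stats(y, f_x) for (y, f_x) in run_blocks(x)]
    return np.array(stats)  # one row per residual block, ready to plot against depth
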
}, { "heading": "4 NORMALIZER-FREE RESNETS (NF-RESNETS)", "text": "With SPPs in hand to aid our analysis, we now seek to develop ResNet variants without normalization layers, which have good signal propagation, are stable during training, and reach test accuracies competitive with batch-normalized ResNets. We begin with two observations from Section 3. First, for standard initializations, BatchNorm downscales the input to each residual block by a factor proportional to the standard deviation of the input signal (De & Smith, 2020). Second, each residual block increases the variance of the signal by an approximately constant factor. We propose to mimic this effect by using residual blocks of the form x`+1 = x`+αf`(x`/β`), where x` denotes the input to the `th residual block and f`(·) denotes the `th residual branch. We design the network such that:\n• f(·), the function computed by the residual branch, is parameterized to be variance preserving at initialization, i.e., Var(f`(z)) = Var(z) for all `. This constraint enables us to reason about the signal growth in the network, and estimate the variances analytically.\n• β` is a fixed scalar, chosen as √ Var(x`), the expected empirical standard deviation of the\nactivations x` at initialization. This ensures the input to f`(·) has unit variance.\n• α is a scalar hyperparameter which controls the rate of variance growth between blocks.\nWe compute the expected empirical variance at residual block ` analytically according to Var(x`) = Var(x`−1) + α 2, with an initial expected variance of Var(x0) = 1, and we set β` = √ Var(x`). A similar approach was proposed by Arpit et al. (2016) for non-residual networks. As noted in Section 3, the signal variance in normalized ResNets is reset at each transition layer due to the shortcut convolution receiving a normalized input. We mimic this reset by having the shortcut convolution in transition layers operate on (x`/β`) rather than x`, ensuring unit signal variance at the start of each stage (Var(x`+1) = 1+α2 following each transition layer). For simplicity, we call residual networks employing this simple scaling strategy Normalizer-Free ResNets (NF-ResNets)." }, { "heading": "4.1 RELU ACTIVATIONS INDUCE MEAN SHIFTS", "text": "We plot the SPPs for Normalizer-Free ResNets (NF-ResNets) with α = 1 in Figure 2. In green, we consider a NF-ResNet, which initializes the convolutions with Gaussian weights using He initialization (He et al., 2015). Although one might expect this simple recipe to be sufficient to achieve good signal propagation, we observe two unexpected features in practice. First, the average value of the squared channel mean grows rapidly with depth, achieving large values which exceed the average channel variance. This indicates a large “mean shift”, whereby the hidden activations for different training inputs (in this case different vectors sampled from the unit normal) are strongly correlated (Jacot et al., 2019; Ruff et al., 2019). Second, as observed for BN-ReLU-Conv networks in Section 3, the scale of the empirical variances on the residual branch are consistently smaller than one.\nTo identify the origin of these effects, in Figure 7 (in Appendix F) we provide a similar SPP for a linearized version of ResNetV2-600 without ReLU activation functions. When the ReLU activations are removed, the averaged squared channel means remain close to zero for all block depths, and the empirical variance on the residual branch fluctuates around one. 
This motivates the following question: why might ReLU activations cause the scale of the mean activations on a channel to grow?

To develop an intuition for this phenomenon, consider the transformation z = Wg(x), where W is arbitrary and fixed, and g(·) is an activation function that acts component-wise on iid inputs x such that g(x) is also iid. Thus, g(·) can be any popular activation function like ReLU, tanh, SiLU, etc. Let E(g(xi)) = µg and Var(g(xi)) = σ²g for all dimensions i. It is straightforward to show that the expected value and the variance of any single unit i of the output zi = ∑_{j=1}^{N} Wi,j g(xj) is given by:

E(zi) = N µg µWi,· , and Var(zi) = N σ²g (σ²Wi,· + µ²Wi,·), (1)

where µWi,· and σWi,· are the mean and standard deviation of the ith row of W:

µWi,· = (1/N) ∑_{j=1}^{N} Wi,j , and σ²Wi,· = (1/N) ∑_{j=1}^{N} W²i,j − µ²Wi,· . (2)

Now consider g(·) to be the ReLU activation function, i.e., g(x) = max(x, 0). Then g(x) ≥ 0, which implies that the input to the linear layer has positive mean (ignoring the edge case when all inputs are less than or equal to zero). In particular, notice that if xi ∼ N(0, 1) for all i, then µg = 1/√(2π). Since we know that µg > 0, if µWi,· is also non-zero, then the output of the transformation, zi, will also exhibit a non-zero mean. Crucially, even if we sample W from a distribution centred around zero, any specific weight matrix drawn from this distribution will almost surely have a non-zero empirical mean, and consequently the outputs of the residual branches on any specific channel will have non-zero mean values. This simple NF-ResNet model with He-initialized weights is therefore often unstable, and it is increasingly difficult to train as the depth increases." }, { "heading": "4.2 SCALED WEIGHT STANDARDIZATION", "text": "To prevent the emergence of a mean shift, and to ensure that the residual branch f`(·) is variance preserving, we propose Scaled Weight Standardization, a minor modification of the recently proposed Weight Standardization (Qiao et al., 2019) which is also closely related to Centered Weight Normalization (Huang et al., 2017b). We re-parameterize the convolutional layers by imposing

Ŵi,j = γ · (Wi,j − µWi,·) / (σWi,· √N), (3)

where the mean µ and variance σ are computed across the fan-in extent of the convolutional filters. We initialize the underlying parameters W from Gaussian weights, while γ is a fixed constant. As in Qiao et al. (2019), we impose this constraint throughout training as a differentiable operation in the forward pass of the network. Recalling equation 1, we can immediately see that the output of the transformation using Scaled WS, z = Ŵg(x), has expected value E(zi) = 0 for all i, thus eliminating the mean shift. Furthermore, the variance Var(zi) = γ²σ²g, meaning that for a correctly chosen γ, which depends on the non-linearity g, the layer will be variance preserving. Scaled Weight Standardization is cheap during training and free at inference, scales well (with the number of parameters rather than activations), introduces no dependence between batch elements and no discrepancy in training and test behavior, and its implementation does not differ in distributed training. These desirable properties make it a compelling alternative for replacing BatchNorm.

The SPP of a normalizer-free ResNet-600 employing Scaled WS is shown in Figure 2 in cyan. As we can see, Scaled Weight Standardization eliminates the growth of the average channel squared mean at initialization. 
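A sketch of equation 3 as a forward-pass re-parameterization of a convolution kernel of shape (kh, kw, c_in, c_out). The small epsilon for numerical stability and the omission of the affine relaxations of Section 4.4 are assumptions of this sketch.

import jax.numpy as jnp

def scaled_ws(w, gamma, eps=1e-4):
    # w: convolution kernel (kh, kw, c_in, c_out); statistics are taken over the fan-in
    # extent (kh * kw * c_in) of each output unit, as in equation 3.
    fan_in = w.shape[0] * w.shape[1] * w.shape[2]
    mean = jnp.mean(w, axis=(0, 1, 2), keepdims=True)
    var = jnp.var(w, axis=(0, 1, 2), keepdims=True)
    # Subtract the per-output-unit mean, divide by sigma * sqrt(N), and scale by the
    # nonlinearity-specific gain gamma (Section 4.3). eps guards against zero variance.
    return gamma * (w - mean) / jnp.sqrt(var * fan_in + eps)
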
Indeed, the SPPs are almost identical to the SPPs for a batch-normalized network employing the ReLU-BN-Conv ordering, shown in red. Note that we select the constant γ to ensure that the channel variance on the residual branch is close to one (discussed further below). The variance on the residual branch decays slightly near the end of the network due to zero padding.\n4.3 DETERMINING NONLINEARITY-SPECIFIC CONSTANTS γ\nThe final ingredient we need is to determine the value of the gain γ, in order to ensure that the variances of the hidden activations on the residual branch are close to 1 at initialization. Note that the value of γ will depend on the specific nonlinearity used in the network. We derive the value of γ by assuming that the input x to the nonlinearity is sampled iid from N (0, 1). For ReLU networks, this implies that the outputs g(x) = max(x, 0) will be sampled from the rectified Gaussian distribution with variance σ2g = (1/2)(1 − (1/π)) (Arpit et al., 2016). Since Var(Ŵg(x)) = γ2σ2g , we set γ = 1/σg = √ 2√\n1− 1π to ensure that Var(Ŵg(x)) = 1. While the assumption x ∼ N (0, 1) is not\ntypically true unless the network width is large, we find this approximation to work well in practice.\nFor simple nonlinearities like ReLU or tanh, the analytical variance of the non-linearity g(x) when x is drawn from the unit normal may be known or easy to derive. For other nonlinearities, such as SiLU ((Elfwing et al., 2018; Hendrycks & Gimpel, 2016), recently popularized as Swish (Ramachandran et al., 2018)), analytically determining the variance can involve solving difficult integrals, or may even not have an analytical form. In practice, we find that it is sufficient to numerically approximate this value by the simple procedure of drawing many N dimensional vectors x from the Gaussian distribution, computing the empirical variance Var(g(x)) for each vector, and taking the square root of the average of this empirical variance. We provide an example in Appendix D showing how this can be accomplished for any nonlinearity with just a few lines of code and provide reference values." }, { "heading": "4.4 OTHER BUILDING BLOCKS AND RELAXED CONSTRAINTS", "text": "Our method generally requires that any additional operations used in a network maintain good signal propagation, which means many common building blocks must be modified. As with selecting γ values, the necessary modification can be determined analytically or empirically. For example, the popular Squeeze-and-Excitation operation (S+E, Hu et al. (2018)), y = sigmoid(MLP (pool(h)))∗ h, involves multiplication by an activation in [0, 1], and will tend to attenuate the signal and make the model unstable. This attenuation can clearly be seen in the SPP of a normalizer-free ResNet using these blocks (see Figure 9 in Appendix F). If we examine this operation in isolation using our simple numerical procedure explained above, we find that the expected variance is 0.5 (for unit normal inputs), indicating that we simply need to multiply the output by 2 to recover good signal propagation. We empirically verified that this simple change is sufficient to restore training stability.\nIn practice, we find that either a similarly simple modification to any given operation is sufficient to maintain good signal propagation, or that the network is sufficiently robust to the degradation induced by the operation to train well without modification. We also explore the degree to which we can relax our constraints and still maintain stable training. 
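The same numerical procedure can be applied to a whole operation rather than a pointwise nonlinearity: feed unit Gaussian activations through the operation in isolation, measure the average channel variance of its output, and use the result to choose a corrective gain. The average-pooling example below is purely illustrative (cf. Appendix B.4); the tensor shapes and the choice of example operation are assumptions of this sketch.

import jax
import jax.numpy as jnp

def isolated_variance(op, key, shape=(256, 32, 32, 64)):
    # Examine a building block in isolation on unit-normal NHWC activations,
    # following the numerical procedure of Section 4.3.
    x = jax.random.normal(key, shape)
    return jnp.mean(jnp.var(op(x), axis=(0, 1, 2)))

# Example: a 2x2 average pool. Appendix B.4 reports that a k x k average pool attenuates
# the signal by roughly a factor of k; the measured variance suggests the gain that would
# restore unit variance if a correction were applied.
key = jax.random.PRNGKey(0)
avg_pool = lambda x: jax.lax.reduce_window(
    x, 0.0, jax.lax.add, (1, 2, 2, 1), (1, 2, 2, 1), 'VALID') / 4.0
var = isolated_variance(avg_pool, key)
gain = var ** -0.5
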
As an example of this, to recover some of the expressivity of a normal convolution, we introduce learnable affine gains and biases to the Scaled WS layer (the gain is applied to the weight, while the bias is added to the activation, as is typical). While we could constrain these values to enforce good signal propagation by, for example, downscaling the output by a scalar proportional to the values of the gains, we find that this is not necessary for stable training, and that stability is not impacted when these parameters vary freely. Relatedly, we find that using a learnable scalar multiplier at the end of the residual branch initialized to 0 (Goyal et al., 2017; De & Smith, 2020) helps when training networks over 150 layers, even if we ignore this modification when computing β`. In our final models, we employ several such relaxations without loss of training stability. We provide detailed explanations for each operation and any modifications we make in Appendix C (also detailed in our model code in Appendix D)." }, { "heading": "4.5 SUMMARY", "text": "In summary, the core recipe for a Normalizer-Free ResNet (NF-ResNet) is:\n1. Compute and forward propagate the expected signal variance β2` , which grows by α 2 after\neach residual block (β0 = 1). Downscale the input to each residual branch by β`.\n2. Additionally, downscale the input to the convolution on the skip path in transition blocks by β`, and reset β`+1 = 1 + α2 following a transition block.\n3. Employ Scaled Weight Standardization in all convolutional layers, computing γ, the gain specific to the activation function g(x), as the reciprocal of the expected standard deviation,\n1√ Var(g(x)) , assuming x ∼ N (0, 1).\nCode is provided in Appendix D for a reference Normalizer-Free Network." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 AN EMPIRICAL EVALUATION ON RESNETS", "text": "We begin by investigating the performance of Normalizer-Free pre-activation ResNets on the ILSVRC dataset (Russakovsky et al., 2015), for which we compare our networks to FixUp initialization (Zhang et al., 2019), SkipInit (De & Smith, 2020), and batch-normalized ResNets. We use a training setup based on Goyal et al. (2017), and train our models using SGD (Robbins & Monro, 1951) with Nesterov’s Momentum (Nesterov, 1983; Sutskever et al., 2013) for 90 epochs with a batch size of 1024 and a learning rate which warms up from zero to 0.4 over the first 5 epochs, then decays to zero using cosine annealing (Loshchilov & Hutter, 2017). We employ standard baseline preprocessing (sampling and resizing distorted bounding boxes, along with random flips), weight decay of 5e-5, and label smoothing of 0.1 (Szegedy et al., 2016). For Normalizer-Free ResNets (NF-ResNets), we chose α = 0.2 based on a small sweep, and employ SkipInit as discussed above. For both FixUp and SkipInit we had to reduce the learning rate to 0.2 to enable stable training.\nWe find that without additional regularization, our NF-ResNets achieve higher training accuracies but lower test accuracies than their batch-normalized counterparts. This is likely caused by the known regularization effect of BatchNorm (Hoffer et al., 2017; Luo et al., 2019; De & Smith, 2020). We therefore introduce stochastic depth (Huang et al., 2016) with a rate of 0.1, and Dropout (Srivastava et al., 2014) before the final linear layer with a drop probability of 0.25. 
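The stochastic depth variant used here (described in Appendix B.2) drops the entire residual branch with a fixed rate and, unlike the standard formulation, does not rescale the kept branch by the keep probability. A minimal sketch, with the drop rate treated as a per-block hyperparameter:

import jax
import jax.numpy as jnp

def stochastic_depth_residual(x, branch_out, key, drop_rate=0.1, training=True):
    # Randomly drop the whole residual branch. The usual 1/keep_prob rescaling is omitted
    # to better preserve the signal scale when the branch is kept (Appendix B.2).
    if not training or drop_rate == 0.0:
        return x + branch_out
    keep = jax.random.bernoulli(key, 1.0 - drop_rate)
    return x + jnp.where(keep, branch_out, jnp.zeros_like(branch_out))
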
We note that adding this same regularization does not substantially improve the performance of the normalized ResNets in our setup, suggesting that BatchNorm is indeed already providing some regularization benefit.\nIn Table 1 we compare performance of our networks (NF-ResNets) against the baseline (BNResNets), across a wide range of network depths. After introducing additional regularization, NFResNets achieve performance better than FixUp/SkipInit and competitive with BN across all network depths, with our regularized NF-ResNet-288 achieving top-1 accuracy of 79.5%. However, some of the 288 layer normalizer-free models undergo training collapse at the chosen learning rate,\nbut only when unregularized. While we can remove this instability by reducing the learning rate to 0.2, this comes at the cost of test accuracy. We investigate this failure mode in Appendix A.\nOne important limitation of BatchNorm is that its performance degrades when the per-device batch size is small (Hoffer et al., 2017; Wu & He, 2018). To demonstrate that our normalizer-free models overcome this limitation, we train ResNet-50s on ImageNet using very small batch sizes of 8 and 4, and report the results in Table 2. These models are trained for 15 epochs (4.8M and 2.4M training steps, respectively) with a learning rate of 0.025 for batch size 8 and 0.01 for batch size 4. For comparison, we also include the accuracy obtained when training for 15 epochs at batch size 1024 and learning rate 0.4. The NF-ResNet achieves significantly better performance when the batch size is small, and is not affected by the shift from batch size 8 to 4, demonstrating the usefulness of our approach in the microbatch setting. Note that we do not apply stochastic depth or dropout in these experiments, which may explain superior performance of the BN-ResNet at batch size 1024. We also study the transferability of our learned representations to the downstream tasks of semantic segmentation and depth estimation, and present the results of these experiments in Appendix H." }, { "heading": "5.2 DESIGNING PERFORMANT NORMALIZER-FREE NETWORKS", "text": "We now turn our attention to developing unnormalized networks which are competitive with the state-of-the-art EfficientNet model family across a range of FLOP budgets (Tan & Le, 2019), We focus primarily on the small budget regime (EfficientNets B0-B4), but also report results for B5 and hope to extend our investigation to larger variants in future work.\nFirst, we apply Scaled WS and our Normalizer-Free structure directly to the EfficientNet backbone.2 While we succeed in training these networks stably without normalization, we find that even after extensive tuning our NormalizerFree EfficientNets still substantially underperform their batch-normalized baselines. For example, our normalization free B0 variant achieves 73.5% top-1, a 3.2% absolute degradation relative to the baseline. We hypothesize that this degradation arises because Weight Standardization imposes a very strong constraint on depth-wise convolutions (which have an input channel count of 1), and this constraint may remove a substantial fraction of the model expressivity. To support this claim, we note that removing Scaled WS from the depth-wise convolutions improves the test accuracy of Normalizer-Free EfficientNets, although this also reduces the training stability.\n2We were unable to train EfficientNets using SkipInit (De & Smith, 2020; Bachlechner et al., 2020). 
We speculate this may be because the EfficientNet backbone contains both residual and non-residual components.\nTherefore, to overcome the potentially poor interactions between Weight Standardization and depth-wise convolutions, we decided to instead study Normalizer-Free variants of the RegNet model family (Radosavovic et al., 2020). RegNets are slightly modified variants of ResNeXts (Xie et al., 2017), developed via manual architecture search. Crucially, RegNets employ grouped convolutions, which we anticipate are more compatible with Scaled WS than depth-wise convolutions, since the fraction of the degrees of freedom in the model weights remaining after the weight standardization operation is higher.\nWe develop a new base model by taking the 0.4B FLOP RegNet variant, and making several minor architectural changes which cumulatively substantially improve the model performance. We describe our final model in full in Appendix C, however we emphasize that most of the architecture changes we introduce simply reflect well-known best practices from the literature (Tan & Le, 2019; He et al., 2019). To assess the performance of our Normalizer-Free RegNets across a range of FLOPS budgets, we apply the EfficientNet compound scaling approach (which increases the width, depth and input resolution in tandem according to a set of three power laws learned using architecture search) to obtain model variants at a range of approximate FLOPS targets. Denoting these models NF-RegNets, we train variants B0-B5 (analogous to EfficientNet variants) using both baseline preprocessing and combined CutMix (Yun et al., 2019) and MixUp (Zhang et al., 2018) augmentation. Note that we follow the same compound scaling hyper-parameters used by EfficientNets, and do not retune these hyper-parameters on our own architecture. We compare the test accuracies of EfficientNets and NF-RegNets on ImageNet in Figure 3, and we provide the corresponding numerical values in Table 3 of Appendix A. We present a comparison of training speeds in Table 5 of Appendix A.\nFor each FLOPS and augmentation setting, NF-RegNets attain comparable but slightly lower test accuracies than EfficientNets, while being substantially faster to train. In the augmented setting, we report EfficientNet results with AutoAugment (AA) or RandAugment (RA), (Cubuk et al., 2019; 2020), which we find performs better than training EfficientNets with CutMix+MixUp. However, both AA and RA degrade the performance and stability of NF-RegNets, and hence we report results of NF-RegNets with CutMix+Mixup instead. We hypothesize that this occurs because AA and RA were developed by applying architecture search on batch-normalized models, and that they may therefore change the statistics of the dataset in a way that negatively impacts signal propagation when normalization layers are removed. To support this claim, we note that inserting a single BatchNorm layer after the first convolution in an NF-RegNet removes these instabilities and enables us to train stably with either AA or RA, although this approach does not achieve higher test set accuracies.\nThese observations highlight that, although our models do benefit from most of the architectural improvements and best practices which researchers have developed from the hundreds of thousands of device hours used while tuning batch-normalized models, there are certain aspects of existing state-of-the-art models, like AA and RA, which may implicitly rely on the presence of activation normalization layers in the network. 
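The compound scaling step described above amounts to multiplying the base model's channel counts and block counts by per-variant factors (taken from Tan & Le (2019)) and raising the training and test resolution to meet the FLOP target. The sketch below takes those multipliers as inputs; the rounding conventions (channels to a multiple of 8, depths rounded up) are common practice and are assumptions of this sketch rather than details stated here.

import math

def compound_scale(base_widths, base_depths, width_mult, depth_mult, divisor=8):
    # Scale channel counts and block counts of the base NF-RegNet, EfficientNet-style.
    # width_mult and depth_mult are the per-variant multipliers for the chosen FLOP target.
    def round_width(w):
        w = w * width_mult
        return max(divisor, int(w + divisor / 2) // divisor * divisor)
    widths = [round_width(w) for w in base_widths]
    depths = [int(math.ceil(d * depth_mult)) for d in base_depths]
    return widths, depths
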
Furthermore there may be other components, like depth-wise convolutions, which are incompatible with promising new primitives like Weight Standardization. It is therefore inevitable that some fine-tuning and model development is necessary to achieve competitive accuracies when removing a component like batch normalization which is crucial to the performance of existing state-of-the-art networks. Our experiments confirm for the first time that it is possible to develop deep ResNets which do not require batch normalization or other activation normalization layers, and which not only train stably and achieve low training losses, but also attain test accuracy competitive with the current state of the art on a challenging benchmark like ImageNet." }, { "heading": "6 CONCLUSION", "text": "We introduce Normalizer-Free Networks, a simple approach for designing residual networks which do not require activation normalization layers. Across a range of FLOP budgets, our models achieve performance competitive with the state-of-the-art EfficientNets on ImageNet. Meanwhile, our empirical analysis of signal propagation suggests that batch normalization resolves two key failure modes at initialization in deep ResNets. First, it suppresses the scale of the hidden activations on the residual branch, preventing signal explosion. Second, it prevents the mean squared scale of the activations on each channel from exceeding the variance of the activations between examples. Our Normalizer-Free Networks were carefully designed to resolve both of these failure modes." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Karen Simonyan for helpful discussions and guidance, as well as Guillaume Desjardins, Michael Figurnov, Nikolay Savinov, Omar Rivasplata, Relja Arandjelović, and Rishub Jain." }, { "heading": "APPENDIX A EXPERIMENT DETAILS", "text": "" }, { "heading": "A.1 STABILITY, LEARNING RATES, AND BATCH SIZES", "text": "Previous work (Goyal et al., 2017) has established a fairly robust linear relationship between the optimal learning rate (or highest stable learning rate) and batch size for Batch-Normalized ResNets. As noted in Smith et al. (2020), we also find that this relationship breaks down past batch size 1024 for our unnormalized ResNets, as opposed to 2048 or 4096 for normalized ResNets. Both the optimal learning rate and the highest stable learning rate decrease for higher batch sizes. This also appears to correlate with depth: when not regularized, our deepest models are not always stable with a learning rate of 0.4. While we can mitigate this collapse by reducing the learning rate for deeper nets, this introduces additional tuning expense and is clearly undesirable. It is not presently clear why regularization aids in stability; we leave investigation of this phenomenon to future work.\nTaking a closer look at collapsed networks, we find that even though their outputs have exploded (becoming large enough to go NaN), their weight magnitudes are not especially large, even if we remove our relaxed affine transforms and train networks whose layers are purely weight-standardized. The singular values of the weights, however, end up poorly conditioned, meaning that the Lipschitz constant of the network can become quite large, an effect which Scaled WS does not prevent. 
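Since Scaled WS constrains the per-channel mean and norm of each filter but not its spectrum, one simple diagnostic is to monitor the singular values of the standardized kernels during training. The sketch below is such a monitoring aid under these assumptions, not a procedure used in this work.

import numpy as np

def conditioning_report(w_hat):
    # w_hat: a (scaled) weight-standardized kernel, reshaped to (fan_in, c_out).
    s = np.linalg.svd(w_hat.reshape(-1, w_hat.shape[-1]), compute_uv=False)
    return {'spectral_norm': float(s.max()),
            'condition_number': float(s.max() / s.min())}

A growing condition number or spectral norm across layers is one signal that the network's Lipschitz constant is becoming large even though the per-channel weight statistics remain fixed.
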
One might consider adopting one of the many techniques from the GAN literature to regularize or constrain this constant (Gulrajani et al., 2017; Miyato et al., 2018), but we have found that this added complexity and expense is not necessary to develop performant unnormalized networks.\nThis collapse highlights an important limitation of our approach, and of SPPs: as SPPs only show signal prop for a given state of a network (i.e., at initialization), no guarantees are provided far from initialization. This fact drives us to prefer parameterizations like Scaled WS rather than solely relying on initialization strategies, and highlights that while good signal propagation is generally necessary for stable optimization, it is not always sufficient." }, { "heading": "A.2 TRAINING SPEED", "text": "We evaluate the relative training speed of our normalizer-free models against batch-normalized models by comparing training speed (measured as the number of training steps per second). For comparing NF-RegNets against EfficientNets, we measure using the EfficientNet image sizes for each variant to employ comparable settings, but in practice we employ smaller image sizes so that our actual observed training speed for NF-RegNets is faster." }, { "heading": "APPENDIX B MODIFIED BUILDING BLOCKS", "text": "In order to maintain good signal propagation in our Normalizer-Free models, we must ensure that any architectural modifications do not compromise our model’s conditioning, as we cannot rely on activation normalizers to automatically correct for such changes. However, our models are not so fragile as to be unable to handle slight relaxations in this realm. We leverage this robustness to improve model expressivity and to incorporate known best practices for model design." }, { "heading": "B.1 AFFINE GAINS AND BIASES", "text": "First, we add affine gains and biases to our network, similar to those used by activation normalizers. These are applied as a vector gain, each element of which multiplies a given output unit of a reparameterized convolutional weight, and a vector bias, which is added to the output of each convolution. We also experimented with using these as a separate affine transform applied before the ReLU, but moved the parameters next to the weight instead to enable constant-folding for inference. As is common practice with normalizer parameters, we do not weight decay or otherwise regularize these weights.\nEven though these parameters are allowed to vary freely, we do not find that they are responsible for training instability, even in networks where we observe collapse. Indeed, we find that for settings which collapse (typically due to learning rates being too high), removing the affine transform has no impact on stability. As discussed in Appendix A, we observe that model instability arises as a result of the collapse of the spectra of the weights, rather than any consequence of the affine gains and biases." }, { "heading": "B.2 STOCHASTIC DEPTH", "text": "We incorporate Stochastic Depth (Huang et al., 2016), where the output of the residual branch of a block is randomly set to zero during training. This is often implemented such that if the block is kept, its value is divided by the keep probability. 
We remove this rescaling factor to help maintain signal propagation when the signal is kept, but otherwise do not find it necessary to modify this block.\nWhile it is possible that we might have an example where many blocks are dropped and signals are attenuated, in practice we find that, as with affine gains, removing Stochastic Depth does not improve stability, and adding it does not reduce stability. One might also consider a slightly more principled variant of Stochastic Depth in this context, where the skip connection is upscaled by 1+α if the residual branch is dropped, resulting in the variance growing as expected, but we did not find this strategy necessary." }, { "heading": "B.3 SQUEEZE AND EXCITE LAYERS", "text": "As mentioned in Section 4.4, we incorporate Squeeze and Excitation layers (Hu et al., 2018), which we empirically find to reduce signal magnitude by a factor of 0.5, which is simply corrected by multiplying by 2. This was determined using a similar procedure to that used to find γ values for a given nonlinearity, as demonstrated in Appendix D. We validate this empirically by training NFRegNet models with unmodified S+E blocks, which do not train stably, and NF-RegNet models with the additional correcting factor of 2, which do train stably." }, { "heading": "B.4 AVERAGE POOLING", "text": "In line with best practices determined by He et al. (2019), in our NF-RegNet models we replace the strided 1x1 convolutions with average pooling followed by 1x1 convolutions, a common alternative also employed in Zagoruyko & Komodakis (2016). We found that average pooling with a kernel of size k × k tended to attenuate the signal by a factor of k, but that it was not necessary to apply any correction due to this. While this will result in mis-estimation of β values at initialization, it does not harm training (and average pooling in fact improved results over strided 1x1 convolutions in every case we tried), so we simply include this operation as-is." }, { "heading": "APPENDIX C MODEL DETAILS", "text": "We develop the NF-RegNet architecture starting with a RegNetY-400MF architecture (Radosavovic et al. (2020)) a low-latency RegNet variant which also uses Squeeze+Excite blocks (Hu et al., 2018)) and uses grouped convolutions with a group width of 8. Following EfficientNets, we first add an additional expansion convolution after the final residual block, expanding to 1280w channels, where w is a model width multiplier hyperparameter. We find this to be very important for performance: if the classifier layer does not have access to a large enough feature basis, it will tend to underfit (as measured by higher training losses) and underperform. We also experimented with adding an additional linear expansion layer after the global average pooling, but found this not to provide the same benefit.\nNext, we replace the strided 1x1 convolutions in transition layers with average pooling followed by 1x1 convolutions (following He et al. (2019)), which we also find to improve performance. We switch from ReLU activations to SiLU activations (Elfwing et al., 2018; Hendrycks & Gimpel, 2016; Ramachandran et al., 2018). We find that SiLU’s benefits are only realized when used in conjunction with EMA (the model averaging we use, explained below), as in EfficientNets. 
The performance of the underlying weights does not seem to be affected by the difference in nonlinearities, so the improvement appears to come from SiLU apparently being more amenable to averaging.\nWe then tune the choice of width w and bottleneck ratio g by sweeping them on the 0.4B FLOP model. Contrary to Radosavovic et al. (2020) which found that inverted bottlenecks (Sandler et al., 2018) were not performant, we find that inverted bottlenecks strongly outperformed their compressive bottleneck counterparts, and select w = 0.75 and g = 2.25. Following EfficientNets (Tan & Le, 2019), the very first residual block in a network uses g = 1, a FLOP-reducing strategy that does not appear to harmfully impact performance.\nWe also modify the S+E layers to be wider by making their hidden channel width a function of the block’s expanded width, rather than the block’s input width (which is smaller in an inverted bottleneck). This results in our models having higher parameter counts than their equivalent FLOP target EfficientNets, but has minimal effect on FLOPS, while improving performance. While both FLOPS and parameter count play a part in the latency of a deployed model, (the quantity which is often most relevant in practice) neither are fully predictive of latency (Xiong et al., 2020). We choose to focus on the FLOPS target instead of parameter count, as one can typically obtain large improvements in accuracy at a given parameter count by, for example, increasing the resolution of the input image, which will dramatically increase the FLOPS.\nWith our baseline model in hand, we apply the EfficientNet compound scaling (increasing width, depth, and input image resolution) to obtain a family of models at approximately the same FLOP targets as each EfficientNet variant. We directly use the EfficientNet width and depth multipliers for models B0 through B5, and tune the test image resolution to attain similar FLOP counts (although our models tend to have slightly higher FLOP budgets). Again contrary to Radosavovic et al. (2020), which scales models almost entirely by increasing width and group width, we find that the EfficientNet compound scaling works effectively as originally reported, particularly with respect to image size. Improvements might be made by applying further architecture search, such as tuning the w and g values for each variant separately, or by choosing the group width separately for each variant.\nFollowing Touvron et al. (2019), we train on images of slightly lower resolution than we test on, primarily to reduce the resource costs of training. We do not employ the fine-tuning procedure of Touvron et al. (2019). The exact train and test image sizes we use are visible in our model code in Appendix D.\nWe train using SGD with Nesterov Momentum, using a batch size of 1024 for 360 epochs, which is chosen to be in line with EfficientNet’s schedule of 360 epoch training at batch size 4096. We employ a 5 epoch warmup to a learning rate of 0.4 (Goyal et al., 2017), and cosine annealing to 0 over the remaining epochs (Loshchilov & Hutter, 2017). As with EfficientNets, we also take an exponential moving average of the weights (Polyak, 1964), using a decay of 0.99999 which employs a warmup schedule such that at iteration i, the decay is decay = min(i, 1+i10+i ). 
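A sketch of the weight averaging with a warmed-up decay; the exact form of the warmup cap, (1 + i)/(10 + i) clipped at the target decay, is an assumption of this sketch.

import jax
import jax.numpy as jnp

def ema_update(avg_params, params, step, decay=0.99999):
    # Warmed-up decay: early in training the average tracks the raw weights closely,
    # and the effective decay ramps up towards its target value as training proceeds.
    d = jnp.minimum(decay, (1.0 + step) / (10.0 + step))
    return jax.tree_util.tree_map(lambda a, p: d * a + (1.0 - d) * p, avg_params, params)

At iteration 0 the effective decay is 0.1, so the average initially follows the raw weights and only later approaches the target value of 0.99999.
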
We choose a larger decay than the EfficientNets value of 0.9999, as EfficientNets also take an EMA of the running average statistics of the BatchNorm layers, resulting in a longer horizon for the averaged model.\nAs with EfficientNets, we find that some of our models attain their best performance before the end of training, but unlike EfficientNets we do not employ early stopping, instead simply reporting performance at the end of training. The source of this phenomenon is that as some models (particularly larger models) reach the end of their decay schedule, the rate of change of their weights slows, ultimately resulting in the averaged weights converging back towards the underlying (less performant) weights. Future work in this area might consider examining the interaction between averaging and learning rate schedules.\nFollowing EfficientNets, we also use stochastic depth (modified to remove the rescaling by the keep rate, so as to better preserve signal) with a drop rate that scales from 0 to 0.1 with depth (reduced from the EfficientNets value of 0.2). We swept this value and found the model to not be especially sensitive to it as long as it was not chosen beyond 0.25. We apply Dropout (Srivastava et al., 2014) to the final pooled activations, using the same Dropout rates as EfficientNets for each variant. We also use label smoothing (Szegedy et al., 2016) of 0.1, and weight decay of 5e-5." }, { "heading": "APPENDIX D MODEL CODE", "text": "We here provide reference code using Numpy (Harris et al., 2020) and JAX (Bradbury et al., 2018). Our full training code is publicly available at dpmd.ai/nfnets." }, { "heading": "D.1 NUMERICAL APPROXIMATIONS OF NONLINEARITY-SPECIFIC GAINS", "text": "It is often faster to determine the nonlinearity-specific constants γ empirically, especially when the chosen activation functions are complex or difficult to integrate. One simple way to do this is for the SiLU function is to sample many (say, 1024) random C-dimensional vectors (of say size 256) and compute the average variance, which will allow for computing an estimate of the constant. Empirically estimating constants to ensure good signal propagation in networks at initialization has previously been proposed in Mishkin & Matas (2016) and Kingma & Dhariwal (2018).\nimport jax import jax.numpy as jnp key = jax.random.PRNGKey(2) # Arbitrary key # Produce a large batch of random noise vectors x = jax.random.normal(key, (1024, 256)) y = jax.nn.silu(x) # Take the average variance of many random batches gamma = jnp.mean(jnp.var(y, axis=1)) ** -0.5" }, { "heading": "APPENDIX E OVERVIEW OF EXISTING BLOCKS", "text": "This appendix contains an overview of several different types of residual blocks." }, { "heading": "APPENDIX F ADDITIONAL SPPS", "text": "In this appendix, we include additional Signal Propagation Plots. For reference, given an NHWC tensor, we compute the measured properties using the equivalent of the following Numpy (Harris et al., 2020) snippets:\n• Average Channel Mean Squared: np.mean(np.mean(y, axis=[0, 1, 2]) ** 2)\n• Average Channel Variance: np.mean(np.var(y, axis=[0, 1, 2]))\n• Residual Average Channel Variance: np.mean(np.var(f(x), axis=[0, 1, 2]))\nFigure 8: Signal Propagation Plot for a Normalizer-Free ResNetV2-600 with ReLU and Scaled WS,\nusing γ =\n√\n2, the gain for ReLU from (He et al., 2015). 
As this gain (derived from √(1/E[g(x)²])) is lower than the correct gain (√(1/Var(g(x)))), signals attenuate progressively in the first stage, then are further downscaled at each transition which uses a β value that assumes a higher incoming scale." }, { "heading": "APPENDIX G NEGATIVE RESULTS", "text": "" }, { "heading": "G.1 FORWARD MODE VS DECOUPLED WS", "text": "Parameterization methods like Weight Standardization (Qiao et al., 2019), Weight Normalization (Salimans & Kingma, 2016), and Spectral Normalization (Miyato et al., 2018) are typically proposed as “forward mode” modifications applied to parameters during the forward pass of a network. This has two consequences: first, this means that the gradients with respect to the underlying parameters are influenced by the parameterization, and that the weights which are optimized may differ substantially from the weights which are actually plugged into the network.\nOne alternative approach is to implement “decoupled” variants of these parameterizers, by applying them as a projection step in the optimizer. For example, “Decoupled Weight Standardization” can be implemented atop any gradient-based optimizer by replacing W with the normalized Ŵ after the update step. Most papers proposing parameterizations (including the above) argue that the parameterization’s gradient influence is helpful for learning, but this is typically argued with respect to simply ignoring the parameterization during the backward pass, rather than with respect to a strategy such as this.\nUsing a Forward-Mode parameterization may result in interesting interactions with moving averages or weight decay. For example, with WS, if one takes a moving average of the underlying weights, then applies the WS parameterization to the averaged weights, this will produce different results than if one took the EMA of the Weight-Standardized parameters. Weight decay will have a similar phenomenon: if one is weight decaying a parameter which is actually a proxy for a weight-standardized parameter, how does this change the behavior of the regularization?\nWe experimented with Decoupled WS and found that it reduced sensitivity to weight decay (presumably because of the strength of the projection step) and often improved the accuracy of the EMA weights early in training, but ultimately led to worse performance than using the originally proposed “forward-mode” formulation. We emphasize that our experiments in this regime were only cursory, and suggest that future work might seek to analyze these interactions in more depth.\nWe also tried applying Scaled WS as a regularizer (“Soft WS”) by penalizing the mean squared error between the parameter W and its Scaled WS parameterization, Ŵ. We implemented this as a direct addition to the parameters following Loshchilov & Hutter (2019) rather than as a differentiated loss, with a scale hyperparameter controlling the strength of the regularization. We found that this scale could not be meaningfully decreased from its maximal value without drastic training instability, indicating that relaxing the WS constraint is better done through other means, such as the affine gains and biases we employ."
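To make the distinction concrete, the sketch below contrasts a forward-mode Weight Standardization (the parameterization sits inside the forward pass, so gradients flow through it to the raw W) with a decoupled variant implemented as a post-update projection. It is a minimal JAX sketch under our own helper names (standardize_weight, decoupled_sgd_step) and a plain squared-error loss, using plain zero-mean, unit-variance standardization with an optional gain standing in for the nonlinearity-specific γ of Appendix D.1; it is not the released implementation.

import jax
import jax.numpy as jnp

def standardize_weight(W, gain=1.0, eps=1e-5):
    # Zero mean and unit variance over the fan-in of each output unit,
    # optionally scaled by a gain.
    mean = W.mean(axis=1, keepdims=True)
    var = W.var(axis=1, keepdims=True)
    return gain * (W - mean) / jnp.sqrt(var + eps)

# Forward mode: the parameterization sits inside the forward pass, so
# jax.grad differentiates through standardize_weight back to the raw W.
def forward_mode_loss(W, x, y):
    return jnp.mean((x @ standardize_weight(W).T - y) ** 2)

forward_mode_grad = jax.grad(forward_mode_loss)

# Decoupled variant: take an ordinary gradient step on the un-parameterized
# loss, then project W back onto the set of standardized weights afterwards.
def decoupled_sgd_step(W, x, y, lr=0.1):
    raw_loss = lambda W_: jnp.mean((x @ W_.T - y) ** 2)
    W = W - lr * jax.grad(raw_loss)(W)
    return standardize_weight(W)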
}, { "heading": "G.2 MISCELLANEOUS", "text": "• For SPPs, we initially explored plotting activation mean (np.mean(h)) instead of the average squared channel mean, but found that this was less informative.\n• We also initially explored plotting the average pixel norm: the Frobenius norm of each pixel (reduced across the C axis) then averaged across the NHW axis, np.mean(np.linalg.norm(h, axis=-1))). We found that this value did not add any information not already contained in the channel or residual variance measures, and was harder to interpret due to it varying with the channel count.\n• We explored NF-ResNet variants which maintained constant signal variance, rather than mimicking Batch-Normalized ResNets with signal growth + resets. The first of two key components in this approach was making use of “rescaled sum junctions,” where the sum junction in a residual block was rewritten to downscale the shortcut path as y = α∗f(x)+xα2 , which is approximately norm-preserving if f(x) is orthogonal to x (which we observed to generally hold in practice). Instead of Scaled WS, this variant employed SeLU (Klambauer et al., 2017) activations, which we found to work as-advertised in encouraging centering and good scaling. While these networks could be made to train stably, we found tuning them to be difficult and were not able to easily recover the performance of BN-ResNets as we were with the approach ultimately presented in this paper." }, { "heading": "APPENDIX H EXPERIMENTS WITH ADDITIONAL TASKS", "text": "" }, { "heading": "H.1 SEMANTIC SEGMENTATION ON PASCAL VOC", "text": "We present additional results investigating the transferability of our normalizer-free models to downstream tasks, beginning with the Pascal VOC Semantic Segmentation task. We use the FCN architecture (Long et al., 2015) following He et al. (2020) and Grill et al. (2020) . We take the ResNet backbones of each variant and modify the 3x3 convolutions in the final stage to have dilation 2 and stride of 1, then add two extra 3x3 convolutions with dilation of 6, and a final 1x1 convolution for classification. We train for 30000 steps at batch size 16 using SGD with Nesterov Momentum of 0.9, a learning rate of 0.003 which is reduced by a factor of 10 at 70% and 90% of training, and weight decay of 5e-5. Training images are augmented with random scaling in the range [0.5, 2.0]), random horizontal flips, and random crops. Results in mean Intersection over Union (mIoU) are reported in Table 6 on the val2012 set using a single 513 pixel center crop. We do not add any additional regularization such as stochastic depth or dropout. NF-ResNets obtain comparable performance to their BN-ResNet counterparts across all variants." }, { "heading": "H.2 DEPTH ESTIMATION ON NYU DEPTH V2", "text": "We next present results for depth estimation on the NYU v2 dataset (Silberman et al., 2012) using the protocol from (Laina et al., 2016). We downsample the images and center crop them to [304, 228] pixels, then randomly flip and apply several color augmentations: grayscale with probability 30%, brightness with a maximum difference of 0.1255, saturation with a random factor picked from [0.5, 1.5], and Hue with adjustment factor picked in [-0.2, 0.2]. We take the features from the final residual stage and feed them into the up-projection blocks from (Silberman et al., 2012), then train with a reverse Huber loss (Laina et al., 2016; Owen, 2007). 
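For reference, a minimal NumPy sketch of the reverse Huber (berHu) objective is given below; the batch-dependent threshold c = 0.2 · max|error| follows our reading of the recipe in Laina et al. (2016), and the function name and small guard constant are ours.

import numpy as np

def berhu_loss(pred, target):
    # Reverse Huber (berHu): L1 for small residuals, scaled L2 beyond the threshold c.
    err = np.abs(pred - target)
    c = 0.2 * float(err.max()) + 1e-8        # batch-dependent threshold (Laina et al., 2016)
    l2_branch = (err ** 2 + c ** 2) / (2.0 * c)
    return float(np.where(err <= c, err, l2_branch).mean())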
We train for 7500 steps at batch size 256 using SGD with Nesterov Momentum of 0.9, a learning rate of 0.16, and cosine annealing. We report results in Table 7 using five metrics commonly used for this task: the percentage of pixels where the magnitude of the relative error (taken as the ratio of the predicted depth and the ground truth, where the denominator is whichever is smaller) is below a certain threshold, as well as root-mean-squared and relative error (rms and rel). As with semantic segmentation, we do not apply any additional regularization, and find that our normalizer-free ResNets attain comparable performance across all model variants." } ]
2021
CHARACTERIZING SIGNAL PROPAGATION TO CLOSE THE PERFORMANCE GAP IN UNNORMALIZED RESNETS
SP:206600e5bfcc9ccd494b82995a7898ae81a4e0bf
[ "The paper focuses on sample importance in adversarial training. The authors first reveal that deep models which are over-parameterized on natural data may still have insufficient model capacity for adversarial data, because the training loss is hard to drive to zero under adversarial training. They then argue that the limited capacity should be spent on important samples, i.e., samples should not be treated as equally important. Using the distance to the decision boundary to distinguish important samples, they propose geometry-aware instance-reweighted adversarial training. Experiments show its superiority over baselines." ]
In adversarial machine learning, there has been a common belief that robustness and accuracy hurt each other. This belief was challenged by recent studies showing that we can maintain robustness while improving accuracy. However, the other direction, keeping accuracy while improving robustness, is conceptually and practically more interesting, since robust accuracy should be lower than standard accuracy for any model. In this paper, we show that this direction is also promising. Firstly, we find that even over-parameterized deep networks may still have insufficient model capacity, because adversarial training has an overwhelming smoothing effect. Secondly, given limited model capacity, we argue that adversarial data should have unequal importance: geometrically speaking, a natural data point closer to/farther from the class boundary is less/more robust, and the corresponding adversarial data point should be assigned a larger/smaller weight. Finally, to implement this idea, we propose geometry-aware instance-reweighted adversarial training, where the weights are based on how difficult it is to attack a natural data point. Experiments show that our proposal boosts the robustness of standard adversarial training; combining the two directions, we improve both the robustness and accuracy of standard adversarial training.
[ { "affiliations": [], "name": "Jingfeng Zhang" }, { "affiliations": [], "name": "Jianing Zhu" }, { "affiliations": [], "name": "Gang Niu" }, { "affiliations": [], "name": "Bo Han" }, { "affiliations": [], "name": "Masashi Sugiyama" }, { "affiliations": [], "name": "Mohan Kankanhalli" } ]
[ { "authors": [ "Mislav Balunovic", "Martin Vechev" ], "title": "Adversarial training and provable defenses: Bridging the gap", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Denni D. Boos", "L.A. Stefanski" ], "title": "M-Estimation (Estimating Equations), pp. 297–337", "venue": null, "year": 2013 }, { "authors": [ "Qi-Zhi Cai", "Chang Liu", "Dawn Song" ], "title": "Curriculum adversarial training", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David A. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "Percy Liang", "John C. Duchi" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Chen Chen", "Jingfeng Zhang", "Xilie Xu", "Tianlei Hu", "Gang Niu", "Gang Chen", "Masashi Sugiyama" ], "title": "Guided interpolation for adversarial training", "venue": null, "year": 2021 }, { "authors": [ "Tianlong Chen", "Sijia Liu", "Shiyu Chang", "Yu Cheng", "Lisa Amini", "Zhangyang Wang" ], "title": "Adversarial robustness: From self-supervised pre-training to fine-tuning", "venue": null, "year": 2020 }, { "authors": [ "Tianlong Chen", "Zhenyu Zhang", "Sijia Liu", "Shiyu Chang", "Zhangyang Wang" ], "title": "Robust overfitting may be mitigated by properly learned smoothening", "venue": "In ICLR,", "year": 2021 }, { "authors": [ "Minhao Cheng", "Qi Lei", "Pin-Yu Chen", "Inderjit Dhillon", "Cho-Jui Hsieh" ], "title": "Cat: Customized adversarial training for improved robustness", "venue": null, "year": 2002 }, { "authors": [ "Jeremy M. Cohen", "Elan Rosenfeld", "J. Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ], "title": "Mma training: Direct input space margin maximization through adversarial training", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Alhussein Fawzi", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard" ], "title": "The robustness of deep networks: A geometrical perspective", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Alhussein Fawzi", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard", "Stefano Soatto" ], "title": "Empirical study of the topology and geometry of deep networks", "venue": null, "year": 2018 }, { "authors": [ "Yoav Freund", "Robert E Schapire" ], "title": "A decision-theoretic generalization of on-line learning and an application to boosting", "venue": "Journal of computer and system sciences,", "year": 1997 }, { "authors": [ "Sven Gowal", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "Uncovering the limits of adversarial training against norm-bounded adversarial examples", "venue": null, "year": 2010 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Marti A. Hearst", "Susan T Dumais", "Edgar Osuna", "John Platt", "Bernhard" ], "title": "Scholkopf. 
Support vector machines", "venue": "IEEE Intelligent Systems and their applications,", "year": 1998 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ziyu Jiang", "Tianlong Chen", "Ting Chen", "Zhangyang Wang" ], "title": "Robust pre-training by adversarial contrastive learning", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Can Kanbak", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard" ], "title": "Geometric robustness of deep networks: analysis and improvement", "venue": null, "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Ken Nakae", "Shin Ishii" ], "title": "Distributional smoothing by virtual adversarial examples", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": null, "year": 2019 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Arunesh Sinha", "Michael Wellman" ], "title": "Towards the science of security and privacy in machine learning", "venue": null, "year": 2016 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Krishnamurthy Dvijotham", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John Duchi", "Percy Liang" ], "title": "Understanding and mitigating the tradeoff between robustness and accuracy", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Leslie Rice", "Eric Wong", "J Zico Kolter" ], "title": "Overfitting in adversarially robust deep learning", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Hadi Salman", "Andrew Ilyas", "Logan Engstrom", "Ashish Kapoor", "Aleksander Madry" ], "title": "Do adversarially robust imagenet models transfer better", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Vikash Sehwag", "Shiqi Wang", "Prateek Mittal", "Suman Jana" ], "title": "Hydra: Pruning adversarially robust neural networks", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": 
[ "Chawin Sitawarin", "Supriyo Chakraborty", "David Wagner" ], "title": "Improving adversarial robustness through progressive hardening", "venue": null, "year": 2003 }, { "authors": [ "Nitish Srivastava", "Geoffrey E. Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "J. Mach. Learn. Res.,", "year": 1929 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Antonio Torralba", "Rob Fergus", "William T Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1958 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Lipschitz-Margin training: Scalable certification of perturbation invariance for deep neural networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Haotao Wang", "Tianlong Chen", "Shupeng Gui", "Ting-Kuei Hu", "Ji Liu", "Zhangyang Wang" ], "title": "Oncefor-all adversarial training: In-situ tradeoff between robustness and accuracy for free", "venue": "In NeurIPS 2020,", "year": 2020 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "James Bailey", "Jinfeng Yi", "Bowen Zhou", "Quanquan Gu" ], "title": "On the convergence and robustness of adversarial training", "venue": null, "year": 2019 }, { "authors": [ "Yisen Wang", "Difan Zou", "Jinfeng Yi", "James Bailey", "Xingjun Ma", "Quanquan Gu" ], "title": "Improving adversarial robustness requires revisiting misclassified examples", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Eric Wong", "J. Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Dongxian Wu", "Shu-Tao Xia", "Yisen Wang" ], "title": "Adversarial weight perturbation helps robust generalization", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Yao-Yuan Yang", "Cyrus Rashtchian", "Hongyang Zhang", "Russ R. Salakhutdinov", "Kamalika Chaudhuri" ], "title": "A closer look at accuracy vs. robustness", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. 
Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 2019 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Sven Gowal", "Robert Stanforth", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards stable and efficient training of verifiably robust neural networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Jingfeng Zhang", "Xilie Xu", "Bo Han", "Gang Niu", "Lizhen Cui", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Attacks which do not kill training make adversarial learning stronger", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Jianing Zhu", "Jingfeng Zhang", "Bo Han", "Tongliang Liu", "Gang Niu", "Hongxia Yang", "Mohan Kankanhalli", "Masashi Sugiyama" ], "title": "Understanding the interaction of adversarial training with noisy labels", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Crafted adversarial data can easily fool the standard-trained deep models by adding humanimperceptible noise to the natural data, which leads to the security issue in applications such as medicine, finance, and autonomous driving (Szegedy et al., 2014; Nguyen et al., 2015). To mitigate this issue, many adversarial training methods employ the most adversarial data maximizing the loss for updating the current model such as standard adversarial training (AT) (Madry et al., 2018), TRADES (Zhang et al., 2019), robust self-training (RST) (Carmon et al., 2019), and MART (Wang et al., 2020b). The adversarial training methods seek to train an adversarially robust deep model whose predictions are locally invariant to a small neighborhood of its inputs (Papernot et al., 2016). By leveraging adversarial data to smooth the small neighborhood, the adversarial training methods acquire adversarial robustness against adversarial data but often lead to the undesirable degradation of standard accuracy on natural data (Madry et al., 2018; Zhang et al., 2019).\nThus, there have been debates on whether there exists a trade-off between robustness and accuracy. For example, some argued an inevitable trade-off: Tsipras et al. (2019) showed fundamentally different representations learned by a standard-trained model and an adversarial-trained model; Zhang et al. (2019) and Wang et al. (2020a) proposed adversarial training methods that can trade off standard accuracy for adversarial robustness. On the other hand, some argued that there is no such the trade-off: Raghunathan et al. (2020) showed infinite data could eliminate this trade-off; Yang et al. (2020) showed benchmark image datasets are class-separated.\nRecently, emerging adversarial training methods have empirically challenged this trade-off. For example, Zhang et al. (2020b) proposed the friendly adversarial training method (FAT), employing friendly adversarial data minimizing the loss given that some wrongly-predicted adversarial data have been found. Yang et al. (2020) introduced dropout (Srivastava et al., 2014) into existing AT, RST, and TRADES methods. Both methods can improve the accuracy while maintaining the robustness. However, the other direction—whether we can improve the robustness while keeping the accuracy—remains unsolved and is more interesting.\nIn this paper, we show this direction is also achievable. Firstly, we show over-parameterized deep networks may still have insufficient model capacity, because adversarial training has an overwhelming smoothing effect. Fitting adversarial data is demanding for a tremendous model capacity: It requires a large number of trainable parameters or long-enough training epochs to reach near-zero error on the adversarial training data (see Figure 2). The over-parameterized models that fit natural data entirely in the standard training (Zhang et al., 2017) are still far from enough for fitting adversarial data. Compared with standard training fitting the natural data points, adversarial training smooths the neighborhoods of natural data, so that adversarial data consume significantly more model capacity than natural data. Thus, adversarial training methods should carefully utilize the limited model capacity to fit the neighborhoods of the important data that aid to fine-tune the decision boundary. Therefore, it may be unwise to give equal weights to all adversarial data.\nSecondly, data along with their adversarial variants are not equally important. 
Some data are geometrically far away from the class boundary. They are relatively guarded. Their adversarial variants are hard to be misclassified. On the other hand, some data are close to the class boundary. They are relatively attackable. Their adversarial variants are easily misclassified (see Figure 3). As the adversarial training progresses, the adversarially robust model engenders an increasing number of guarded training data and a decreasing number of attackable training data. Given limited model capacity, treating all data equally may cause the vast number of adversarial variants of the guarded data to overwhelm the model, leading to the undesirable robust overfitting (Rice et al., 2020). Thus, it may be pessimistic to treat all data equally in adversarial training.\nTo ameliorate this pessimism, we propose a heuristic method, i.e., geometry-aware instancereweighted adversarial training (GAIRAT). As shown in Figure 1, GAIRAT treats data differently. Specifically, for updating the current model, GAIRAT gives larger/smaller weight to the loss of an adversarial variant of attackable/guarded data point which is more/less important in fine-tuning the decision boundary. An attackable/guarded data point has a small/large geometric distance, i.e., its distance from the decision boundary. We approximate its geometric distance by the least number of iterations κ that projected gradient descent method (Madry et al., 2018) requires to generate a misclassified adversarial variant (see the details in Section 3.3). GAIRAT explicitly assigns instancedependent weight to the loss of its adversarial variant based on the least iteration number κ.\nOur contributions are as follows. (a) In adversarial training, we identify the pessimism in treating all data equally, which is due to the insufficient model capacity and the unequal nature of different data (in Section 3.1). (b) We propose a new adversarial training method, i.e., GAIRAT (its learning objective in Section 3.2 and its realization in Section 3.3). GAIRAT is a general method: Besides standard AT (Madry et al., 2018), the existing adversarial training methods such as FAT (Zhang et al., 2020b) and TRADES (Zhang et al., 2019) can be modified to GAIR-FAT and GAIR-TRADES (in Appendices B.1 and B.2, respectively). (c) Empirically, our GAIRAT can relieve the issue of robust\noverfitting (Rice et al., 2020), meanwhile leading to the improved robustness with zero or little degradation of accuracy (in Section 4.1 and Appendix C.1). Besides, we use Wide ResNets (Zagoruyko & Komodakis, 2016) to corroborate the efficacy of our geometry-aware instance-reweighted methods: Our GAIRAT significantly boosts the robustness of standard AT; combined with FAT, our GAIRFAT improves both the robustness and accuracy of standard AT (in Section 4.2). Consequently, we conjecture no inevitable trade-off between robustness and accuracy." }, { "heading": "2 ADVERSARIAL TRAINING", "text": "In this section, we review adversarial training methods (Madry et al., 2018; Zhang et al., 2020b)." }, { "heading": "2.1 LEARNING OBJECTIVE", "text": "Let (X , d∞) denote the input feature space X with the infinity distance metric dinf(x, x′) = ‖x − x′‖∞, and B [x] = {x′ ∈ X | dinf(x, x′) ≤ } be the closed ball of radius > 0 centered at x in X . Dataset S = {(xi, yi)}ni=1, where xi ∈ X and yi ∈ Y = {0, 1, ..., C − 1}. 
The objective function of standard adversarial training (AT) (Madry et al., 2018) is\nmin fθ∈F\n1\nn n∑ i=1 `(fθ(x̃i), yi), (1)\nwhere\nx̃i = arg maxx̃∈B [xi] `(fθ(x̃), yi), (2)\nwhere x̃ is the most adversarial data within the -ball centered at x, fθ(·) : X → RC is a score function, and the loss function ` : RC×Y → R is a composition of a base loss `B : ∆C−1×Y → R (e.g., the cross-entropy loss) and an inverse link function `L : RC → ∆C−1 (e.g., the soft-max activation), in which ∆C−1 is the corresponding probability simplex—in other words, `(fθ(·), y) = `B(`L(fθ(·)), y). AT employs the most adversarial data generated according to Eq. (2) for updating the current model.\nThe objective function of friendly adversarial training (FAT) (Zhang et al., 2020b) is\nx̃i = arg min x̃∈B [xi]\n`(fθ(x̃), yi) s.t. `(fθ(x̃), yi)−miny∈Y `(fθ(x̃), y) ≥ ρ. (3)\nNote that the outer minimization remains the same as Eq. (1), and the operator arg max is replaced by arg min. ρ is a margin of loss values (i.e., the misclassification confidence). The constraint of Eq. (3) firstly ensures x̃ is misclassified, and secondly ensures for x̃ the wrong prediction is better than the desired prediction yi by at least ρ in terms of the loss value. Among all such x̃ satisfying the constraint, Eq. (3) selects the one minimizing `(fθ(x̃), yi) by a violation of the value ρ. There are no constraints on x̃i if x̃i is correctly classified. FAT employs the friendly adversarial data generated according to Eq. (3) for updating the current model." }, { "heading": "2.2 REALIZATIONS", "text": "AT and FAT’s objective functions imply the optimization of adversarially robust networks, with one step generating adversarial data and one step minimizing loss on the generated adversarial data w.r.t. the model parameters θ.\nThe projected gradient descent method (PGD) (Madry et al., 2018) is the most common approximation method for searching adversarial data. Given a starting point x(0) ∈ X and step size α > 0, PGD works as follows:\nx(t+1) = ΠB[x(0)] ( x(t) + α sign(∇x(t)`(fθ(x(t)), y)) ) , t ∈ N (4)\nuntil a certain stopping criterion is satisfied. ` is the loss function; x(0) refers to natural data or natural data perturbed by a small Gaussian or uniformly random noise; y is the corresponding label for natural data; x(t) is adversarial data at step t; and ΠB [x0](·) is the projection function that projects the adversarial data back into the -ball centered at x(0) if necessary.\nThere are different stopping criteria between AT and FAT. AT employs a fixed number of iterations K, namely, the PGD-K algorithm (Madry et al., 2018), which is commonly used in many adversarial training methods such as CAT (Cai et al., 2018), DAT (Wang et al., 2019), TRADES (Zhang et al., 2019), and MART (Wang et al., 2020b). On the other hand, FAT employs the misclassification-aware criterion. For example, Zhang et al. (2020b) proposed the early-stopped PGD-K-τ algorithm (τ ≤ K; K is the fixed and maximally allowed iteration number): Once the PGD-K-τ finds the current model misclassifying the adversarial data, it stops the iterations immediately (τ = 0) or slides a few more steps (τ > 0). This misclassification-aware criterion is used in the emerging adversarial training methods such as MMA (Ding et al., 2020), FAT (Zhang et al., 2020b), ATES (Sitawarin et al., 2020), and Customized AT (Cheng et al., 2020).\nAT can enhance the robustness against adversarial data but, unfortunately, degrades the standard accuracy on the natural data significantly (Madry et al., 2018). 
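Before contrasting this with FAT below, the PGD-K update of Eq. (4) can be made concrete with a short JAX-style sketch; here loss_fn(params, x, y) is an assumed helper returning the model's scalar cross-entropy loss, and the defaults mirror the CIFAR-10 settings used later in the paper (perturbation bound 8/255, step size 2/255, K = 10) rather than any fixed prescription.

import jax
import jax.numpy as jnp

def pgd_attack(loss_fn, params, x, y, eps=8/255, alpha=2/255, num_steps=10):
    # Iterated sign-gradient ascent on the loss, projected back into the eps-ball (Eq. 4).
    grad_x = jax.grad(loss_fn, argnums=1)
    x_adv = x  # a uniform random start in [-eps, eps] is also common
    for _ in range(num_steps):
        g = grad_x(params, x_adv, y)
        x_adv = x_adv + alpha * jnp.sign(g)
        x_adv = jnp.clip(x_adv, x - eps, x + eps)  # projection onto the eps-ball around x
        x_adv = jnp.clip(x_adv, 0.0, 1.0)          # keep images in [0, 1]
    return x_adv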
On the other hand, FAT has better standard accuracy with near-zero or little degradation of robustness (Zhang et al., 2020b).\nNevertheless, both AT and FAT treat the generated adversarial data equally for updating the model parameters, which is not necessary and sometimes even pessimistic. In the next sections, we introduce our method GAIRAT, which is compatible with existing methods such as AT, FAT, and TRADES. Consequently, GAIRAT can significantly enhance robustness with little or even zero degradation of standard accuracy." }, { "heading": "3 GEOMETRY-AWARE INSTANCE-REWEIGHTED ADVERSARIAL TRAINING", "text": "In this section, we propose geometry-aware instance-reweighted adversarial training (GAIRAT) and its learning objective as well as its algorithmic realization." }, { "heading": "3.1 MOTIVATIONS OF GAIRAT", "text": "Model capacity is often insufficient in adversarial training. In the standard training, the overparameterized networks, e.g., ResNet-18 and even larger ResNet-50, have more than enough model capacity, which can easily fit the natural training data entirely (Zhang et al., 2017). However, the left panel of Figure 2 shows that the model capacity of those over-parameterized networks is not enough for fitting the adversarial data. Under the computational budget of 100 epochs, the networks hardly reach zero error on the adversarial training data. Besides, adversarial training error only decreases by a small constant factor with the significant increase of the model’s parameters. Even worse, a slightly larger perturbation bound train significantly uncovers this insufficiency of the model capacity (right panel): Adversarial training error significantly increases with slightly larger train. Surprisingly, the standard training error on natural data hardly reaches zero with train = 16/255.\nAdversarial training methods employ the adversarial data to reduce the sensitivity of the model’s output w.r.t. small changes of the natural data (Papernot et al., 2016). During the training process, adversarial data are generated on the fly and are adaptively changed based on the current model to smooth the natural data’s local neighborhoods. The volume of this surrounding is exponentially (|1 + train||X |) large w.r.t. the input dimension |X |, even if train is small. Thus, this smoothness consumes significant model capacity. In adversarial training, we should carefully leverage the limited model capacity by fitting the important data and by ignoring the unimportant data.\nMore attackable/guarded data are closer to/farther away from the class boundary. We can measure the importance of the data by their robustness against adversarial attacks. Figure 3 shows that the robustness (more attackable or more guarded) of the data is closely related to their geometric distance from the decision boundary. From the geometry perspective, more attackable data are closer to the class boundary whose adversarial variants are more important to fine-tune the decision boundary for enhancing robustness.\nAppendix A contains experimental details of Figures 2 and 3 and more motivation figures." }, { "heading": "3.2 LEARNING OBJECTIVE OF GAIRAT", "text": "Let ω(x, y) be the geometry-aware weight assignment function on the loss of adversarial variant x̃. The inner optimization for generating x̃ still follows Eq. (2) or Eq. (3). The outer minimization is\nmin fθ∈F\n1\nn n∑ i=1 ω(xi, yi)`(fθ(x̃i), yi). 
(5)\nThe constraint firstly ensures that yi = arg maxi fθ(xi) and secondly ensures that ω(xi, yi) is a non-increasing function w.r.t. the geometric distance, i.e., the distance from data xi to the decision boundary, in which ω(xi, yi) ≥ 0 and 1n ∑n i=1 ω(xi, yi) = 1.\nThere are no constraints when yi 6= arg maxi fθ(xi) : for those x significantly far away from the decision boundary, we may discard them (outliers); for those x close to the decision boundary, we may assign them large weights. In this paper, we do not consider outliers, and therefore we assign large weight to the losses of adversarial data, whose natural counterparts are misclassified. Figure 1 provides an illustrative schematic of the learning objective of GAIRAT.\nA burn-in period may be introduced, i.e., during the initial period of the training epochs, ω(xi, yi) = 1 regardless of the geometric distance of input (xi, yi), because the geometric distance is less informative initially, when the classifier is not properly learned." }, { "heading": "3.3 REALIZATION OF GAIRAT", "text": "The learning objective Eq. (5) implies the optimization of an adversarially robust network, with one step generating adversarial data and then reweighting loss on them according to the geometric distance of their natural counterparts, and one step minimizing the reweighted loss w.r.t. the model parameters θ.\nWe approximate the geometric distance of a data point (x, y) by the least iteration numbers κ(x, y) that the PGD method needs to generate a adversarial variant x̃ to fool the current network, given the\nAlgorithm 1 Geometry-aware projected gradient descent (GA-PGD) Input: data x ∈ X , label y ∈ Y , model f , loss function `, maximum PGD step K, perturbation bound , step size α Output: adversarial data x̃ and geometry value κ(x, y) x̃← x; κ(x, y)← 0 while K > 0 do\nif arg maxi f(x̃) = y then κ(x, y)← κ(x, y) + 1 end if x̃← ΠB[x, ] ( α sign(∇x̃`(f(x̃), y)) + x̃\n) K ← K − 1\nend while\nAlgorithm 2 Geometry-aware instance-dependent adversarial training (GAIRAT) Input: network fθ, training dataset S = {(xi, yi)}ni=1, learning rate η, number of epochs T , batch size m, number of batches M Output: adversarially robust network fθ for epoch = 1, . . . , T do\nfor mini-batch = 1, . . . , M do Sample a mini-batch {(xi, yi)}mi=1 from S for i = 1, . . . , m (in parallel) do\nObtain adversarial data x̃i of xi and geometry value κ(xi, yi) by Algorithm 1 Calculate ω(xi, yi) according to geometry value κ(xi, yi) by Eq. 6\nend for θ ← θ − η∇θ {∑m i=1 ω(xi,yi)∑m j=1 ω(xj ,yj) `(fθ(x̃i), yi) } end for\nend for\nmaximally allowed iteration number K and step size α. Thus, the geometric distance is approximated by κ (precisely by κ×α). Thus, the value of the weight function ω should be non-increasing w.r.t. κ. We name κ(x, y) the geometry value of data (x, y).\nHow to calculate the optimal ω is still an open question; therefore, we heuristically design different non-increasing functions ω. We give one example here and discuss more examples in Appendix C.3 and Section 4.1.\nw(x, y) = (1 + tanh(λ+ 5× (1− 2× κ(x, y)/K)))\n2 , (6)\nwhere κ/K ∈ [0, 1], K ∈ N+, and λ ∈ R. If λ = +∞, GAIRAT recovers the standard AT (Madry et al., 2018), assigning equal weights to the losses of adversarial data.\nAlgorithm 1 is a geometry-aware PGD method (GA-PGD), which returns both the most adversarial data and the geometry value of its natural counterpart. Algorithm 2 is geometry-aware instancedependent adversarial training (GAIRAT). 
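As a concrete reading of Eq. (6) and of the reweighted update in Algorithm 2, consider the following sketch; kappa is the vector of geometry values returned by GA-PGD, per_example_adv_loss holds the per-example adversarial losses for the mini-batch, and the function names are ours rather than the released code.

import jax.numpy as jnp

def gair_weight(kappa, K, lam=0.0):
    # Eq. (6): non-increasing in the geometry value kappa, with kappa in {0, ..., K};
    # lam = +inf would recover equal weights, i.e., standard AT.
    return (1.0 + jnp.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / K))) / 2.0

def gair_reweighted_loss(per_example_adv_loss, kappa, K, lam=0.0):
    # Normalize the weights over the mini-batch and reweight the adversarial
    # losses, matching the update step of Algorithm 2.
    w = gair_weight(kappa, K, lam)
    w = w / w.sum()
    return jnp.sum(w * per_example_adv_loss)

Since the weights are normalized within each mini-batch, rescaling ω by a constant has no effect; only the relative ordering induced by the geometry values κ matters.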
GAIRAT leverages Algorithms 1 for obtaining the adversarial data and the geometry value. For each mini-batch, GAIRAT reweighs the loss of adversarial data (x̃i, yi) according to the geometry value of their natural counterparts (xi, yi), and then updates the model parameters by minimizing the sum of the reweighted loss.\nGAIRAT is a general method. Indeed, FAT (Zhang et al., 2020b) and TRADES (Zhang et al., 2019) can be modified to GAIR-FAT and GAIR-TRADES (see Appendices B.1 and B.2, respectively).\nComparisons with SVM. The abstract concept of GAIRAT has appeared previously. For example, in the support vector machine (SVM), support vectors near the decision boundary are particularly useful in influencing the decision boundary (Hearst et al., 1998). For learning models, the magnitude of the loss function (e.g., the hinge loss and the logistic loss) can naturally capture different data’s geometric distance from the decision boundary. For updating the model, the loss function treats data differently by incurring large losses on important attackable (close to the decision bound-\nary) or misclassified data and incurring zero or very small losses on unimportant guarded (far away from the decision boundary) data.\nHowever, in adversarial training, it is critical to explicitly assign different weights on top of losses on different adversarial data due to the blocking effect: The model trained on the adversarial data that maximize the loss learns to prevent generating large-loss adversarial data. This blocking effect makes the magnitude of the loss less capable of distinguishing important adversarial data from unimportant ones for updating the model parameters, compared with the role of loss on measuring the natural data’s importance in standard training. Our GAIRAT breaks this blocking effect by explicitly extracting data’s geometric information to distinguish the different importance.\nComparisons with AdaBoost and focal loss. The idea of instance-dependent weighting has been studied in the literature. Besides robust estimator (e.g., M-estimator (Boos & Stefanski, 2013)) for learning under outliers (e.g., label-noised data), hard data mining is another branch where our GAIRAT belongs. Boosting algorithms such as AdaBoost (Freund & Schapire, 1997) select harder examples to train subsequent classifiers. Focal loss (Lin et al., 2017) is specially designed loss function for mining hard data and misclassified data. However, the previous hard data mining methods leverage the data’s losses for measuring the hardness; by comparison, our GAIRAT measures the hardness by how difficulty the natural data are attacked (i.e., geometry value κ). This new measurement κ sheds new lights on measuring the data’s hardness (Zhu et al., 2021).\nComparisons with related adversarial training methods. Some existing adversarial training methods also “treat adversarial data differently”, but in different ways to our GAIRAT. For example, CAT (Cai et al., 2018), MMA (Ding et al., 2020), and DAT (Wang et al., 2019) methods generate the differently adversarial data for updating model over the training process. CAT utilized the adversarial data with different PGD iterations K. DAT utilized the adversarial data with different convergence qualities. MMA leveraged adversarial data with instance-dependent perturbation bounds . 
Different from those existing methods, our GAIRAT treat adversarial data differently by explicitly assigning different weights on their losses, which can break the blocking effect.\nNote that the learning objective of MART (Wang et al., 2020b) also explicitly assigns weights, not directly on the adversarial loss but KL divergence loss (see details in Section C.7). The KL divergence loss helps to strengthen the smoothness within the norm ball of natural data, which is also used in VAT (Miyato et al., 2016) and TRADES (Zhang et al., 2019). Differently from MART, our GAIRAT explicitly assigns weights on the adversarial loss. Therefore, we can easily modify MART to GAIR-MART (see experimental comparisons in Section C.7). Besides, MART assigns weights based on the model’s prediction confidence on the natural data; GAIRAT assigns weights based on how easy the natural data can be attacked (geometry value κ).\nComparisons with the geometric studies of DNN. Researchers in adversarial robustness employed the first-order or second-order derivatives w.r.t. input data to explore the DNN’s geometric properties (Fawzi et al., 2017; Kanbak et al., 2018; Fawzi et al., 2018; Qin et al., 2019; MoosaviDezfooli et al., 2019). Instead, we have a complementary but different argument: Data points themselves are geometrically different regardless of DNN. The geometry value κ in adversarial training (AT) is an approximated measurement of data’s geometric properties due to the AT’s smoothing effect (Zhu et al., 2021)." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we empirically justify the efficacy of GAIRAT. Section 4.1 shows that GAIRAT can relieve the undesirable robust overfitting (Rice et al., 2020) of the minimax-based adversarial training (Madry et al., 2018). Note that some concurrent studies (Chen et al., 2021a;b) provided various adversarial training strategies, which can also mitigate the issue of robust overfitting. In Section 4.2, we benchmark our GAIRAT and GAIR-FAT using Wide ResNets and compare them with AT and FAT.\nIn our experiments, we consider ||x̃−x||∞ ≤ with the same in both training and evaluations. All images of CIFAR-10 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011) are normalized into [0, 1]." }, { "heading": "4.1 GAIRAT RELIEVES ROBUST OVERFITTING", "text": "In Figure 4, we conduct the standard AT (all red lines) using ResNet-18 (He et al., 2016) on CIFAR10 dataset. For generating the most adversarial data for updating the model, the perturbation bound = 8/255; the PGD steps numberK = 10 with step size α = 2/255, which keeps the same as Rice et al. (2020). We train ResNet-18 using SGD with 0.9 momentum for 100 epochs with the initial learning rate of 0.1 divided by 10 at Epoch 30 and 60, respectively. At each training epoch, we collect the training statistics, i.e., the geometry value κ(x, y) of each training data, standard/robust training and test error, the flatness of loss w.r.t. adversarial test data. The detailed descriptions of those statistics and the evaluations are in the Appendix C.1.\nBottom-left panel of Figure 4 shows geometry value κ of training data of standard AT. Over the training progression, there is an increasing number of guarded training data with a sudden leap when the learning rate decays to 0.01 at Epoch 30. After Epoch 30, the model steadily engenders a increasing number of guarded data whose adversarial variants are correctly classified. 
Learning from those correctly classified adversarial data (large portion) will reinforce the existing knowledge and spare little focus on wrongly predicted adversarial data (small portion), thus leading to the robust overfitting. The robust overfitting is manifested by red (dashed and solid) lines in upper-middle and upper-right and bottom-middle and bottom-right panels.\nTo avoid the large portion of guarded data overwhelming the learning from the rare attackable data, our GAIRAT explicitly give small weights to the losses of adversarial variants of the guarded data. Blue (ω2) and yellow (ω3) lines in upper-left panel give two types of weight assignment functions that assign instance-dependent weight on the loss based on the geometry value κ. In GAIRAT, the model is forced to give enough focus on those rare attackable data.\nIn GAIRAT, the initial 30 epochs is burn-in period, and we introduce the instance-dependent weight assignment ω from Epoch 31 onward (both blue and yellow lines in Figure 4). The rest of hyperparameters keeps the same as AT (red lines). From the upper-right panel, GAIRAT (both yellow and blue lines) achieves smaller error on adversarial test data and larger error on training adversarial data, compared with standard AT (red lines). Therefore, our GAIRAT can relieve the issue of the robust overfitting.\nBesides, Appendix C contains more experiments such as different learning rate schedules, different choices of weight assignment functions ω, different lengths of burn-in period, a different dataset (SVHN) and different networks (Small CNN and VGG), which all justify the efficacy of our GAIRAT. Notably, in Appendix C.6, we show the effects of GAIR-FAT on improving FAT." }, { "heading": "4.2 PERFORMANCE EVALUATION ON WIDE RESNETS", "text": "We employ the large-capacity network, i.e., Wide ResNet (Zagoruyko & Komodakis, 2016), on the CIFAR-10 dataset. In Table 1, we compare the performance of the standard AT (Madry et al., 2018), FAT (Zhang et al., 2020b), GAIRAT and GAIR-FAT. We use WRN-32-10 that keeps the same as Madry et al. (2018). We compare different methods on the best checkpoint model (suggested by Rice et al. (2020)) and the last checkpoint model (used by Madry et al. (2018)), respectively. Note that results in Zhang et al. (2020b) only compare the last checkpoint between AT and FAT; instead, we also include the best checkpoint comparisons. We evaluate the robust models based on the three evaluation metrics, i.e., standard test accuracy on natural data (Natural), robust test accuracy on adversarial data generated by PGD-20 and PGD+. PGD+ is PGD with five random starts, and each start has 40 steps with step size 0.01, which keeps the same as Carmon et al. (2019) (PGD+ has 40 × 5 = 200 iterations for each test data). We run AT, FAT, GAIRAT, and GAIR-FAT five repeated trials with different random seeds. Table 1 reports the medians and standard deviations of the results. Besides, we treat the results of AT as the baseline and report the difference (Diff.) of the test accuracies. The detailed training settings and evaluations are in Appendix C.8. Besides, we also compare TRADES and GAIR-TRADES using WRN-34-10, which is in the Appendix C.9.\nCompared with standard AT, our GAIRAT significantly boosts adversarial robustness with little degradation of accuracy, which challenges the inherent trade-off. Besides, FAT also challenges the inherent trade-off instead by improving accuracy with little degradation of robustness. 
Combining two directions, i.e., GAIR-FAT, we can improve both robustness and accuracy of standard AT. Therefore, Table 1 affirmatively confirms the efficacy of our geometry-aware instance-reweighted methods in significantly improving adversarial training." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "This paper has proposed a novel adversarial training method, i.e., geometry-aware instancereweighted adversarial training (GAIRAT). GAIRAT gives more (less) weights to loss of the adversarial data whose natural counterparts are closer to (farther away from) the decision boundary. Under the limited model capacity and the inherent inequality of the data, GAIRAT sheds new lights on improving the adversarial training.\nGAIRAT training under the PGD attacks can defend PGD attacks very well, but indeed, it cannot perform equally well on all existing attacks (Chen et al., 2021a). From the philosophical perspective, we cannot expect defenses under one specific attack can defend all existing attacks, which echoes the previous finding that “it is essential to include adversarial data produced by all known attacks, as the defensive training is non-adaptive (Papernot et al., 2016).” Incorporating all attacks in GAIRAT yet preserving the efficiency is an interesting future direction. Besides, it still an open question to design the optimal weight assignment function ω in Eq. 5 or to design a proper network structure suitable to adversarial training. Furthermore, there is still a large room to apply adversarial training techniques into other domains such as pre-training (Hendrycks et al., 2019; Chen et al., 2020; Jiang et al., 2020; Salman et al., 2020), noisy labels (Zhu et al., 2021) and so on." }, { "heading": "ACKNOWLEDGMENT", "text": "JZ, GN, and MS were supported by JST AIP Acceleration Research Grant Number JPMJCR20U3, Japan. MS was also supported by the Institute for AI and Beyond, UTokyo. JNZ and BH were supported by the HKBU CSD Departmental Incentive Scheme. BH was supported by the RGC Early Career Scheme No. 22200720 and NSFC Young Scientists Fund No. 62006202. MK was supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative." }, { "heading": "A MOTIVATIONS OF GAIRAT", "text": "We show that model capacity is often insufficient in adversarial training, especially when train is large; therefore, the model capacity should be carefully preserved for fitting important data.\nIn this section, we give experimental details of Figure 2 and provide complementary experiments in Figures 5 and 6. In the left panel of Figure 2 and top two panels of Figure 5, we use standard AT to train different sizes of network under the perturbation bound train = 8/255 on CIFAR10 dataset. In the right panel of Figure 2 and two bottom panels of Figure 5, we fix the size of network and use ResNet-18; we conduct standard AT under different values of perturbation bound train ∈ [1/255, 16/255]. The solid lines show the standard training error on natural data and the dash lines show the robust training error on adversarial training data.\nTraining details We train all the different networks for 100 epochs using SGD with 0.9 momentum. The initial learning rate is 0.1, reduced to 0.01, 0.001 at Epoch 30, and 60, respectively. The weight decay is 0.0005. For generating the most adversarial data for updating the model, we use the PGD-10 attack. The PGD steps number K = 10 and the step size α = /4. 
There is a random start, i.e., uniformly random perturbations ([− train,+ train]) added to natural data before PGD perturbations for generating PGD-10 training data. We report the standard training error on the natural training data and the robust training error on the adversarial training data that are generated by the PGD-10 attack.\nWe also conduct the experiments on the SVHN dataset in Figure 6. The training setting keeps the same as that of CIFAR-10 experiments except using 0.01 as the initial learning rate, reduced to 0.001, 0.0001 at Epoch 30, and 60, respectively. We find standard AT always fails when the perturbation bound is larger than = 16/255 for the SVHN dataset due to the severe cross-over mixture issue (Zhang et al., 2020b); therefore, we do not report its results.\nNext, we show that more attackable (more important) data are closer to the decision boundary; more guarded (less important) data are farther away from the decision boundary.\nIn Figures 7 and 8, we plot 2-d visualizations of the output distributions of a robust ResNet-18 on CIFAR-10 dataset. We take the robust ResNet-18 at the checkpoint of Epoch 30 (red line in Figure 9) as our base model here. For each class in the CIFAR-10 dataset, we randomly sample 1000 training datapoints for visualization. For each data point, we compute its the least number of iterations κ that PGD requires to find its misclassified adversarial variant. For PGD, we set the perturbation bound = 0.031, the step size α = 0.31/4, and the maximum PGD steps K = 10. Then, each data point has its unique robustness attribution, i.e., value κ. We take those data as the input of the robust ResNet and output 10-dimensional logits, and then, we use principal components analysis (PCA) to project 10-dimensional logits into 2-dimension for visualization. The color gradient denotes the degree of the robustness of each data point. The more attackable data have lighter colors (red or blue), and the more guarded data has darker colors (red or blue).\nFrom Figures 7 and 8, we find that the attackable data in general are geometrically close to the decision boundary while the guarded data in general are geometrically far away from the decision boundary. It is also very interesting to observe that not all classes are well separated. For example, Cat-Dog is less separable than Cat-Ship in second row of Figure 8." }, { "heading": "B ALGORITHMS", "text": "" }, { "heading": "B.1 GEOMETRY-AWARE INSTANCE-REWEIGHTED FRIENDLY ADVERSARIAL TRAINING", "text": "(GAIR-FAT)\nAlgorithm 3 Geometry-aware early stopped PGD-K-τ Input: data x ∈ X , label y ∈ Y , model f , loss function `, maximum PGD step K, step τ , perturbation bound , step size α Output: friendly adversarial data x̃ and geometry value κ(x, y) x̃← x; κ(x, y)← 0 while K > 0 do\nif arg maxi f(x̃) 6= y and τ = 0 then break else if arg maxi f(x̃) 6= y then τ ← τ − 1 else κ(x, y)← κ(x, y) + 1 end if x̃← ΠB[x, ] ( α sign(∇x̃`(f(x̃), y)) + x̃\n) K ← K − 1\nend while\nGAIRAT is a general method, and the friendly adversarial training (Zhang et al., 2020b) can be easily modified to a geometry-aware instance-reweighted version, i.e. GAIR-FAT.\nGAIR-FAT utilizes Algorithm 3 to generate friendly adversarial data (x̃, y) and the corresponding geometry value κ(x, y), and then utilizes Algorithm 2 to update the model parameters." 
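A minimal sketch of the early-stopped loop of Algorithm 3 is given below, written per example in the same JAX style as the earlier PGD sketch; predict_fn and loss_fn are assumed helpers (predicted label and scalar loss, respectively), and the defaults are illustrative.

import jax
import jax.numpy as jnp

def ga_early_stopped_pgd(loss_fn, predict_fn, params, x, y,
                         eps=8/255, alpha=2/255, K=10, tau=0):
    # Per-example sketch of Algorithm 3: early-stopped PGD-K-tau that also
    # returns the geometry value kappa, i.e., the number of steps at which
    # the current point is still correctly classified.
    grad_x = jax.grad(loss_fn, argnums=1)
    x_adv, kappa, slack = x, 0, tau
    for _ in range(K):
        if predict_fn(params, x_adv) != y:
            if slack == 0:
                break          # misclassified and the extra tau steps are used up
            slack -= 1         # misclassified, but slide a few more steps
        else:
            kappa += 1         # still correctly classified
        g = grad_x(params, x_adv, y)
        x_adv = jnp.clip(x_adv + alpha * jnp.sign(g), x - eps, x + eps)
        x_adv = jnp.clip(x_adv, 0.0, 1.0)
    return x_adv, kappa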
}, { "heading": "B.2 GEOMETRY-AWARE INSTANCE-REWEIGHTED TRADES (GAIR-TRADES)", "text": "Algorithm 4 Geometry-aware PGD for TRADES Input: data x ∈ X , label y ∈ Y , model f , loss function `KL, maximum PGD stepK, perturbation bound , step size α Output: adversarial data x̃ and geometry value κ(x, y) x̃← x+ ξN (0, I); κ(x, y)← 0 while K > 0 do\nif arg maxi f(x̃) = y then κ(x, y)← κ(x, y) + 1 end if x̃← ΠB[x, ] ( α sign(∇x̃`KL(f(x̃), f(x)) + x̃\n) K ← K − 1\nend while\nAlgorithm 5 Geometry-aware instance-reweighted TRADES (GAIR-TRADES) Input: network fθ, training dataset S = {(xi, yi)}ni=1, learning rate η, number of epochs T , batch size m, number of batches M Output: adversarially robust network fθ for epoch = 1, . . . , T do\nfor mini-batch = 1, . . . , M do Sample a mini-batch {(xi, yi)}mi=1 from S for i = 1, . . . , m (in parallel) do\nObtain adversarial data x̃i of xi and geometry value κ(xi, yi) by Algorithm 4 Calculate ω(xi, yi) according to geometry value κ(xi, yi) by Eq. (6)\nend for Calculate the normalized ωi = ω(xi,yi)∑m j=1 ω(xj ,yj) for each data\nθ ← θ − η∇θ ∑m i=1 { ωi`CE(fθ(xi), yi) + β`KL(fθ(x̃i), fθ(xi)) } end for\nend for\nWe modify TRADES (Zhang et al., 2019) to a GAIRAT version, i.e. GAIR-TRADES (Algorithms 4 and 5). Different from GAIRAT and GAIR-FAT, GAIR-TRADES employs Algorithm 4 to generate adversarial data (x̃, y) and the corresponding geometry value κ(x, y), and then utilizes both natural data and their adversarial variants to update the model parameters (Algorithm 5). Note that TRADES utilizes virtual adversarial data (Miyato et al., 2016) for updating the current model. The generated virtual adversarial data do not require any label information; therefore, their supervision signals heavily rely on their natural counterparts. Thus, in GAIR-TRADES, the instance-reweighting function ω applies to the loss of their natural data.\nIn Algorithm 4, N (0, I) generates a random unit vector. ξ is a small constant. `KL is KullbackLeibler loss. In Algorithm 5, β > 0 is a regularization parameter for TRADES. `CE is cross-entropy loss. `KL is Kullback-Leibler loss, which keeps the same as Zhang et al. (2019).\nNatural test data Natural training dataDecision boundary" }, { "heading": "C EXTENSIVE EXPERIMENTS", "text": "" }, { "heading": "C.1 GAIRAT RELIEVES ROBUST OVERFITTING", "text": "In this section, we give the detailed descriptions of Figure 4 and provide more analysis and complementary experiments using the SVHN dataset in Figure 10.\nIn Figure 4, red lines (solid and dashed lines) refer to standard adversarial training (AT) (Madry et al., 2018). Blue and yellow lines (solid and dashed) refer to our geometry-aware instance-reweighted adversarial training (GAIRAT). Blue lines represent that GAIRAT utilizes the decreasing ω for assigning instance-dependent weights (corresponding to the blue line in the bottom-left panel); yellow lines represent that GAIRAT utilizes the non-increasing piece-wise ω for assigning instancedependent weights (corresponding to the yellow line in the bottom-left panel). In the upper-left panel of Figure 4, we calculate the mean and median of geometry values κ(x, y) of all 50K training data at each epoch. Geometry value κ(x, y) of data (x, y) refers to the least number of PGD steps that PGD methods need to generate a misclassified adversarial variant. Note that when the natural data is misclassified without any adversarial perturbations, the geometry value κ(x, y) = 0. 
The bottom-left panel calculates the instance-dependent weight ω for the loss of adversarial data based on the geometry value κ.\nIn the upper-middle panel of Figure 4, the solid lines represent the standard training error on the natural training data; the dashed lines represent the standard test error on the natural test data.\nIn the upper-right panel of Figure 4, the solid lines represent the robust training error on the adversarial training data; the dashed lines represent the robust test error on the adversarial test data. The adversarial training/test data are generated by PGD-20 attack with random start. Random start refers to the uniformly random perturbation of [− , ] added to the natural data before PGD perturbations. The step size α = 2/255, which is the same as Wang et al. (2019).\nIn the bottom-middle and bottom-right panels of Figure 4, we calculate the flatness of the adversarial loss `(fθ(x̃), x̃)) w.r.t. the adversarial data x̃. In the bottom-middle panel, adversarial data refer to the friendly adversarial test data that are generated by early-stopped PGD-20-0 (Zhang et al., 2020b). The maximum PGD step number is 20; τ = 0 means the immediate stop once the wrongly predicted adversarial test data are found. We use friendly adversarial test data to approximate the points on decision boundary of the robust model fθ. The flatness of the decision boundary is approximated by average of ||∇˜̃x`|| across all 10K adversarial test data. We give the flatness value at each training epoch (higher flatness value refers to higher curved decision boundary, see Figure 9).\nFor completeness, the bottom-right panel uses the most adversarial test data that are generated by PGD-20 (Madry et al., 2018).\nThe magnitude of the norm of gradients, i.e., ||∇x̃`||, is a reasonable metric for measuring the magnitude of curvatures of the decision boundary. Moosavi-Dezfooli et al. (2019) show the magnitude of the norm of gradients upper bound the largest eigenvalues of the hessian matrix of loss w.r.t. input x, thus measuring the curvature of the decision boundary. Besides, Moosavi-Dezfooli et al. (2019) even show that the low curvatures can lead to the enhanced robustness, which echoes our results in Figure 4.\nThe flatness values (red lines) increases abruptly at smaller learning rates (0.01, 0.001) at Epoch 30 and Epoch 60. It shows that when we begin to use adversarial data to fine-tune the decision boundary of the robust model, the decision boundary becomes more tortuous around the adversarial data (see Figure 9). This leads to the severe overfitting issue.\nSimilar to Figure 4, we compare GAIRAT and AT using the SVHN dataset, which can be found in Figure 10. Experiments on the SVHN dataset corroborate the reasons for issue of the robust overfitting and justify the efficacy of our GAIRAT. The training and evaluation settings keep the same as Figure 4 except the initial rate of 0.01 divided by 10 at Epoch 30 and 60 respectively." }, { "heading": "C.2 DIFFERENT LEARNING RATE SCHEDULES", "text": "In Figure 11, we compare our GAIRAT and AT using different learning rate schedules. Under the different learning rate schedules, our GAIRAT can relieve the undesirable issue of the robust overfitting, thus enhancing the adversarial robustness. To make the fair comparisons with Rice et al. (2020), we use the pre-activation ResNet-18 (He et al., 2016). We conduct standard adversarial training (AT) using SGD with 0.9 momentum for 200 epochs on CIFAR-10 dataset. 
The different learning rate schedules are in the top panel in Figure 11. The perturbation bound = 8/255, the\nPGD steps number K = 10 and the step size α = 2/255. The training setting keeps the same as Rice et al. (2020)1.\nGAIRAT has the same training configurations (including all hyperparamter settings) including the 100 epochs burn-in period, after which, GAIRAT begins to introduce geometry-aware instancereweighted loss. We use the weight assignment function ω from Eq. (6) with λ = −1. At each training epoch, we evaluate each checkpoint using CIFAR-10 test data. In the middle panels of Figure 11, we report robust test error on the adversarial test data. The adversarial test data are generated by PGD-20 attack with the perturbation bound = 8/255 and step size α = 2/255. The PGD attack has a random start, i.e, the uniformly random perturbations of [− , ] are added to the natural data before PGD iterations, which keeps the same as Wang et al. (2019); Zhang et al. (2020b). Note that different from Rice et al. (2020) using PGD-10, we use PGD-20 because under the computational budget, PGD-20 is a more informative metric for the robustness evaluation. In the bottom panels of Figure 11, we report the standard test error on the natural data.\nFigure 11 shows that under different learning rate schedules, our GAIRAT can relieve the issue of robust overfitting, thus enhancing the adversarial robustness with little degradation of accuracy.\nC.3 DIFFERENT WEIGHT ASSIGNMENT FUNCTIONS ω\nThe weight assignment functions ω should be non-increasing w.r.t. the geometry value κ. In Figure 12, besides tanh-type Eq. (6) (blue line), we compare different types of weight assignment functions. The purple lines represent a linearly decreasing function, i.e.,\nw(x, y) = 1− κ(x, y) K + 1 . (7)\n1Robust Overfitting’s GitHub\nThe green lines represent a sigmoid-type decreasing function, i.e.,\nw(x, y) = σ(λ+ 5× (1− 2× κ(x, y)/K)), (8) where σ(x) = 11+e−x .\nFigure 12 shows that compared with AT, GAIRAT with different weight assignment functions have similar degradation of standard test accuracy on natural data, but GAIRAT with the tanh-type decreasing function (Eq. (6)) has the better robustness accuracy. Thus, we further explore the Eq. (6) with different λ in Figure 13.\nIn Figure 13, when λ = +∞, GAIRAT recovers the standard AT, assigning equal weights to the losses of the adversarial data. Smaller λ corresponds to the weight assignment function ω, assigning relatively smaller weight to the loss of the adversarial data of the guarded data and assigning relatively larger weight to the loss of the adversarial data of the attackable data, which enhance the robustness more. With the same logic, larger λ corresponds to the weight assignment function ω, assigning relatively larger weight to the loss of the adversarial data of the guarded data and assigning relatively smaller weight to the loss of the adversarial data of the attackable data, which enhances the robustness less. The guarded data need more PGD steps κ to fool the current model; the attackable data need less PGD steps κ to fool the current model.\nThe results in Figure 13 justify the above logic. GAIRAT with smaller λ (lighter blue lines) has better adversarial robustness with bigger degradation of standard test accuracy. On the other hand GAIRAT with larger λ (darker blue lines) has relatively worse adversarial robustness with minor degradation of standard test accuracy. 
Nevertheless, our GAIRAT (light and dark lines) has better robustness than AT (red lines).\nTraining and evaluation details We training ResNet-18 using SGD with 0.9 momentum for 100 epochs. The initial learning rate is 0.1 divided by 10 at Epoch 30 and 60 respectively. The weight decay=0.0005. The perturbation bound = 0.031; the PGD step size α = 0.007, and PGD step numbers K = 10. For evaluations, we obtain standard test accuracy for natural test data and robust test accuracy for adversarial test data. The adversarial test data are generated by PGD-20 attack with\nthe same perturbation bound = 0.031 and the step size α = 0.031/4, which keeps the same as Wang et al. (2019). All PGD generation have a random start, i.e, the uniformly random perturbation of [− , ] added to the natural data before PGD iterations. Note that the robustness reflected by PGD-20 test data is quite high. However, when we use other attacks such as C&W attack (Carlini & Wagner, 2017) for evaluation, both blue and red lines will degrade the robustness to around 40%. We believe this degradation is due to the mismatch between PGD-adversarial training and C&W attacks, which is the common deflect of the empirical defense (Tsuzuku et al., 2018; Wong & Kolter, 2018; Cohen et al., 2019; Balunovic & Vechev, 2020; Zhang et al., 2020a). We leave this for future work.\nIn Figure 14, we also conduct experiments of GAIRAT using Eq. (6) with different λ and AT using ResNet-18 on SVHN dataset. The training and evaluation settings keep the same as Figure 13 except the initial rate of 0.01 divided by 10 at Epoch 30 and 60 respectively.\nInterestingly, AT (red lines) on SVHN dataset has not only the issue of robust overfitting, but also the issue of natural overfitting: The standard test accuracy has slight degradation over the training epochs. By contrast, our GAIRAT (blue lines) can relieve the undesirable robust overfitting, thus enhancing both robustness and accuracy." }, { "heading": "C.4 DIFFERENT LENGTHS OF BURN-IN PERIOD", "text": "In Figure 15, we conduct experiments of GAIRAT under different lengths of the burn-in period using ResNet-18 on CIFAR-10 dataset. The training and evaluations details are the same as Appendix C.3 except the different lengths of burn-in period in the training.\nFigure 15 shows that compared with AT (red lines), GAIRAT with a shorter length of burn-in period (darker blue lines) can significantly enhance robustness but suffers a little degradation of accuracy. On the other hand, GAIRAT with a longer length of burn-in period (lighter blue lines) slightly enhance robustness with zero degradation of accuracy." }, { "heading": "C.5 DIFFERENT NETWORKS - SMALL CNN AND VGG-13", "text": "In Figure 16, besides ResNet-18, we apply our GAIRAT to Small CNN (6 convolutional layers and 2 fully-connected layers) on CIFAR-10 dataset. Training and evaluation settings keeps the same as the Appendix C.3; we use 30 epochs burn-in period and Eq. (6) as the weight assignment function.\nFigure 16 shows that larger network ResNet-18 has better performance than Small CNN in terms of both robustness and accuracy. Interestingly, Small CNN has less severe issue of the robust overfitting. Nevertheless, our GAIRAT are still quite effective in relieving the robust overfitting and thus enhancing robustness in the smaller network.\nIn Figure 16, we also compare our GAIRAT with AT using VGG-13 (Simonyan & Zisserman, 2015) on CIFAR-10 dataset. 
Under the same training and evaluation settings as Small CNN, results of VGG-13 once again demonstrate the efficacy of our GAIRAT." }, { "heading": "C.6 GEOMETRY-AWARE INSTANCE DEPENDENT FAT (GAIR-FAT)", "text": "In this section, we show that GAIR-FAT can enhance friendly adversarial training (FAT). Our geometry-aware instance-reweighted method is a general method. Besides AT, we can easily modify friendly adversarial training (FAT) (Zhang et al., 2020b) to GAIR-FAT (See Algorithm 3 in the Appendix B.1).\nIn Figure 17, we compare FAT and GAIR-FAT using ResNet-18 on CIFAR-10 dataset. The training and evaluation settings keeps the same as Appendix C.3 except that GAIR-FAT and FAT has an extra hyperparameter τ . In Figures 17, the τ begins from 0 and increases by 3 at Epoch 40 and 70 respectively. The burn-in period is 70 epochs. In Figure 17, we use Eq. (6) with different λ as GAIR-FAT’s weight assignment function.\nDifferent from AT, FAT has slower progress in enhancing the adversarial robustness over the training epochs, so FAT can naturally resist undesirable robust overfitting. However, once the robust test accuracy reaches plateau, FAT still suffers a slight robust overfitting issue (red line in the right panel). By contrast, when we introduce our instance dependent loss from Epoch 70, GAIR-FAT (light and dark blue lines) can get further enhanced robustness with near-zero degradation of accuracy.\nNote that different from the FAT used by Zhang et al. (2020b) increasing τ from 0 to 2 over the training epochs, we increase the τ from 0 to 6. As shown in Figure 18, we find out FAT with smaller τ (e.g., 1-3) does not suffer the issue of the robust overfitting, since the FAT with smaller τ has the slower progress in increasing the robustness over the training epochs. This slow progress leads to the slow increase of the portion of guarded data, which is less likely to overwhelm the learning from the attackable data. Thus, our geometry-aware instance dependent loss applied on FAT with smaller τ does not offer extra benefits, and it does not have damage as well." }, { "heading": "C.7 GEOMETRY-AWARE INSTANCE DEPENDENT MART (GAIR-MART)", "text": "In this section, we compare our method with MMA (Ding et al., 2020) and MART (Wang et al., 2020b). To be specific, we easily modify MART to a GAIRAT version, i.e., GAIR-MART. The learning objective of MART is Eq. (9); the learning objective of our GAIR-MART is Eq. (10).\nThe learning objective of MART is\n`margin(p(x̃, θ), y) + β`KL(p(x̃, θ), p(x, θ)) · (1− py(x, θ)); (9) our learning objective of of GAIR-MART is\n`GAIRmargin(p(x̃, θ), y) + β`KL(p(x̃, θ), p(x, θ)) · (1− py(x, θ)), (10) where `margin = − log(py(x̃, θ)) − log(1 − max\nk 6=y pk(x̃, θ)) and pk(x, θ) is probability (softmax\non logits) of x belonging to class k. To be specific, the first term − log(py(x̃, θ)) is commonly used CE loss and the second term − log(1 − max\nk 6=y pk(x̃, θ)) is a margin term used to improve the\ndecision margin of the classifier. More detailed analysis about the learning objective can be found in (Wang et al., 2020b). In Eq. (9) and Eq. (10), x is natural training data, x̃ is adversarial training data generated by CE loss, and β > 0 is a regularization parameter for MART. In Eq. (10), `GAIRmargin = − log(py(x̃, θ)) · ω− log(1−max\nk 6=y pk(x̃, θ)) and ω refers to our weight assignment\nfunction.\nFor MMA and MART, the training settings keep the same as the 2 and 3. 
For fair comparisons, GAIR-MART keeps the same training configurations as MART except that we use the weight assignment function ω (Eq.(6)) to introduce geometry-aware instance-reweighted loss from Epoch 75 onward. We train ResNet-18 on CIFAR-10 dataset for 120 epochs. For MMA, the learning rate is 0.3 from Iteration 0 to 20000, 0.09 from Iteration 20000 to 30000, 0.03 from Iteration 30000 to 40000, and 0.009 after Iteration 40000, where the Iteration refers to training with one mini-batch of data; For MART and GAIR-MART, the learning rate is 0.01 divided by 10 at Epoch 75, 90, and 100 respectively. For evaluations, we obtain standard test accuracy for natural test data and robust test accuracy for PGD-20 adversarial test data with the same settings as Appendix C.3.\nFigure 19 shows GAIR-MART performs better than MART and MMA. The results demonstrate the efficacy of our GAIRAT method on improving robustness without the degradation of standard accuracy.\nReweighing KL loss The learning objective of MART explicitly assigns weights, not directly on the adversarial loss but KL divergence loss. We ask what if you replace their reweighting scheme (1− py(x, θ)) with our ω. The learning objective is\n`margin(p(x̃, θ), y) + β`KL(p(x̃, θ), p(x, θ)) · ω. (11) Figure 20 reports the results: It does not have much effect on adding the geometry-aware instancedependent weight to the regularization part, i.e., KL divergence loss .\n2MMA’s GitHub 3MART’s GitHub" }, { "heading": "C.8 PERFORMANCE EVALUATION ON WIDE RESNET (WRN-32-10)", "text": "In Table 1, we compare our GAIRAT, GAIR-FAT with standard AT and FAT. CIFAR-10 dataset is normalized into [0,1]: Each pixel is scaled by 1/255. We perform the standard CIFAR-10 data augmentation: a random 4 pixel crop followed by a random horizontal flip. In AT, we train WRN32-10 for 120 epochs using SGD with 0.9 momentum. The initial learning rate is 0.1 reduced to 0.01, 0.001 and 0.0005 at epoch 60, 90 and 110. The weight decay is 0.0002. For generating the adversarial data for updating the model, the perturbation bound train = 0.031, the PGD step is fixed to 10, and the step size is fixed to 0.007. The training settings come from FAT’s Github. 4 In GAIRAT, we choose 60 epochs burn-in period and then use Eq. (6) with λ = 0 as the weight assignment function; the rest keeps the same as AT. The hyperparameter τ of FAT and begins from 0 and increases by 3 at Epoch 40 and 70 respectively; the rest keeps the same as AT. In GAIR-FAT, we choose 60 epochs burn-in period and then use Eq. (6) with λ = 0 as the weight assignment function; the rest keeps the same as FAT.\nAs suggested by results of the experiments in Section 4.1, the robust test accuracy usually gets significantly boosted when the learning rate is firstly reduced to 0.01. Thus, we save the model checkpoints at Epochs 59-100 for evaluations, among which, the best checkpoint is selected based on the PGD-20 attack since PGD+ is extremely computationally expensive. We also save the last checkpoint at Epoch 120 for evaluations. We run AT, FAT, GAIRAT and GAIR-FAT with 5 repeated times with different random seeds.\nAs for the evaluations, we test the checkpoint using three metrics: standard test accuracy on natural data (Natural), robust test accuracy on adversarial data generated by PGD-20 and PGD+. PGD20 follows the same setting of the PGD-20 used by Wang et al. (2019)5. PGD+ is the same as PGours used by Carmon et al. (2019)6. The adversarial attacks have the same perturbation bound test = 0.031. 
For PGD-20, the step number is 20, and the step size α = test/4. There is a random start, i.e., uniformly random perturbations ([− test,+ test]) added to natural data before PGD perturbations. For PGD+, the step number is 40, and the step size α = 0.01. There are 5 random starts for each natural test data. Therefore, for each natural test data, we have 40× 5 = 200 PGD iterations for the robustness evaluation.\nIn Table 1, the best checkpoint is chosen among the model checkpoints at Epochs 59-100 (selected based on the robust accuracy on PGD-20 test data). In practice, we can use a hold-out validation set to determine the best checkpoint, since (Rice et al., 2020) found the validation curve over epochs matches the test curves over epochs. The last checkpoint is the model checkpoint at Epoch 120. Our experiments find that GAIRAT reaches the best robustness at Epoch 90 (three trails) and 92 (two trails), and AT reaches the best robustness at Epoch 60 (five trails). FAT reaches the best robustness at Epoch 60 (four trails) and 61 (one trail). GAIR-FAT reaches the best robustness at around Epoch 90 (five trails). We report the median test accuracy and its standard deviation over 5 repeated trails.\nPGD attacks with different iterations In Table 1, each defense method has five trails with five different random seeds; therefore, each defense method has ten models (five last checkpoints and five best checkpoints). In Figure 21, for each defense, we randomly choose one last-checkpoint and\n4FAT’s GitHub 5DAT’s GitHub 6RST’s GitHub\none best-checkpoint and evaluate them using PGD-10, PGD-20, PGD-40, PGD-60, PGD-80, and PGD-100. All the PGD attacks use the same test = 0.031 and the step size α = (2.5 · test)/100. We ensure that we can reach the boundary of the -ball from any starting point within it and still allow for movement on the boundary, which is suggested by Madry et al. (2018). The results show the PGD attacks have converged with more iterations." }, { "heading": "C.9 PERFORMANCE EVALUATION ON WIDE RESNET (GAIR-TRADES)", "text": "In Table 2, we compare our GAIR-TRADES with TRADES. CIFAR-10 dataset normalization and augmentations keep the same as Appendix C.8. Instead, we use WRN-34-10, which keeps the same as Zhang et al. (2020b). We train WRN-34-10 for 100 epochs using SGD with 0.9 momentum. The initial learning rate is 0.1 reduced to 0.01 and 0.01 at epoch 75 and 90. The weight decay is 0.0002. For generating the adversarial data for updating the model, the perturbation bound train = 0.031, the PGD step is fixed to 10, and the step size is fixed to 0.007. Since TRADES has a trade-off parameter β, for fair comparison, our GAIR-TRADES uses the same β = 6. In GAIR-TRADES, we choose 75 epochs burn-in period and then use Eq. (6) with λ = −1 as the weight assignment function. We run TRADES and GAIR-TRADES five repeated trails with different random seeds.\nThe evaluations are the same as Appendix C.8 except the step size α = 0.003 for PGD-20 attack, which keeps the same as Zhang et al. (2020b)7.\nIn Table 2, the best checkpoint is chosen among the model checkpoints at Epochs 75-100 (w.r.t. the PGD-20 robustness). The last checkpoint is evaluated based on the model checkpoint at Epoch 100. Our experiments find that GAIR-TRADES reaches the best robustness at Epoch 90 (three trails), 96 (one trail) and 98 (one trail), and TRADES reaches the best robustness at Epoch 76 (three trail), 77 (one trails) and 79 (one trail). 
We report the median test accuracy and its standard deviation over 5 repeated trails.\nTable 2 shows that our GAIR-TRADES can have both improved accuracy and robustness." }, { "heading": "C.10 BENCHMARKING ROBUSTNESS WITH ADDITIONAL UNLABELED (U) DATA", "text": "In this section, we verify the efficacy of our GAIRAT method by utilizing additional 500K U data pre-processed by Carmon et al. (2019) for CIFAR-10 dataset.\nCarmon et al. (2019) scratched additional U data from 80 Million Tiny Images (Torralba et al., 2008); then, they used standard training to obtain a classifier to give pseudo labels to those U data.\n7TRADES’s GitHub\nAmong those U data, they selected 500K U data (with pseudo labels). Combining 50K labeled CIFAR-10’s training data and pseudo-labeled 500K U data, they propose a robust training method named RST which utilized the learning objective function of TRADES, i.e.,\n`CE(fθ(x), y) + β`KL(fθ(x̃), fθ(x)), (12)\nwhere x̃ is generated by PGD-10 attack with CE loss.\nBased on the RST method, we introduce our instance-reweighting mechanism, i.e., our GAIR-RST. To be specific, we change the learning objective function to\n`CE(fθ(x), y) + β {ω`KL(fθ(x̃), fθ(x)) + (1− ω)`KL(fθ(x̃CW ), fθ(x))} , (13) where the x̃CW refers to the adversarial data generated by C&W attack (Carlini & Wagner, 2017) and ω is the as Eq. (6).\nIn Table 3, we compare the performance of our GAIR-RST with other methods that use WRN-2810 under auto attacks (AA) (Croce & Hein, 2020). All the methods utilized the same set of U data which are from RST’s GitHub8 and the results are reported on the leaderboard of AA’s GitHub9. Our GAIR-RST use the same training settings (e.g., learning rate schedule, train = 0.031) as RST. The evaluations are on the full set of the AA in (Croce & Hein, 2020) with test = 0.031, which keeps the same as training.\nThe results show our geometry-aware instance-reweighted method can facilitate a competitive model by utilizing additional U data.\n8RST’s GitHub 9AA’s GitHub" } ]
2021
GEOMETRY-AWARE INSTANCE-REWEIGHTED ADVERSARIAL TRAINING
SP:d729aacc2cd3f97011a04360a252ca7cb0489354
[ "This paper considers adopting continual learning on the problem of causal effect estimation. The paper combines methods and algorithms for storing feature representation and representative samples (herding algorithm), avoiding drifting feature representation when new data is learned (feature representation distillation), balanced representation by regularization, etc. Consequently, the paper presents a system that makes use of existing methods as a loss function (the sum of losses and regularization terms). " ]
The era of real world evidence has witnessed an increasing availability of observational data, which greatly facilitates the development of causal effect inference. Although significant advances have been made to overcome the challenges in causal effect estimation, such as missing counterfactual outcomes and selection bias, existing methods only focus on source-specific and stationary observational data. In this paper, we investigate a new research problem of causal effect inference from incrementally available observational data, and present three new evaluation criteria accordingly, including extensibility, adaptability, and accessibility. We propose a Continual Causal Effect Representation Learning method for estimating causal effects with observational data that are incrementally available from non-stationary data distributions. Instead of having access to all seen observational data, our method only stores a limited subset of feature representations learned from previous data. Combining selective and balanced representation learning, feature representation distillation, and feature transformation, our method achieves continual causal effect estimation for new data without compromising the estimation capability for original data. Extensive experiments demonstrate the significance of continual causal effect inference and the effectiveness of our method.
[]
[ { "authors": [ "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Bayesian inference of individualized treatment effects using multi-task gaussian processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hugh A Chipman", "Edward I George", "Robert E McCulloch" ], "title": "Bart: Bayesian additive regression trees", "venue": "The Annals of Applied Statistics,", "year": 2010 }, { "authors": [ "Zhixuan Chu", "Stephen L Rathbun", "Sheng Li" ], "title": "Matching in selective and balanced representation space for treatment effects estimation", "venue": "arXiv preprint arXiv:2009.06828,", "year": 2020 }, { "authors": [ "Prithviraj Dhar", "Rajat Vikram Singh", "Kuan-Chuan Peng", "Ziyan Wu", "Rama Chellappa" ], "title": "Learning without memorizing", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Robert M French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Ruocheng Guo", "Jundong Li", "Huan Liu" ], "title": "Learning individual causal effects from networked observational data", "venue": "In Proceedings of the 13th International Conference on Web Search and Data Mining,", "year": 2020 }, { "authors": [ "Johanna Hardin", "Stephan Ramon Garcia", "David Golan" ], "title": "A method for generating realistic correlation matrices", "venue": "The Annals of Applied Statistics,", "year": 2013 }, { "authors": [ "Jennifer L Hill" ], "title": "Bayesian nonparametric modeling for causal inference", "venue": "Journal of Computational and Graphical Statistics,", "year": 2011 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Saihui Hou", "Xinyu Pan", "Chen Change Loy", "Zilei Wang", "Dahua Lin" ], "title": "Learning a unified classifier incrementally via rebalancing", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Guido W Imbens", "Donald B Rubin" ], "title": "Causal inference in statistics, social, and biomedical sciences", "venue": null, "year": 2015 }, { "authors": [ "Ahmet Iscen", "Jeffrey Zhang", "Svetlana Lazebnik", "Cordelia Schmid" ], "title": "Memory-efficient incremental learning through feature adaptation", "venue": "arXiv preprint arXiv:2004.00713,", "year": 2020 }, { "authors": [ "Daniel Jacob", "Wolfgang Karl Härdle", "Stefan Lessmann" ], "title": "Group average treatment effects for observational studies", "venue": "arXiv preprint arXiv:1911.02688,", "year": 2019 }, { "authors": [ "Fredrik Johansson", "Uri Shalit", "David Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Peter Langfelder", "Bin Zhang", "Steve Horvath" ], "title": "Defining clusters from a hierarchical cluster tree: the dynamic tree cut package for", "venue": "r. 
Bioinformatics,", "year": 2008 }, { "authors": [ "Sheng Li", "Yun Fu" ], "title": "Matching on balanced nonlinear representations for treatment effects estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Christos Louizos", "Uri Shalit", "Joris M Mooij", "David Sontag", "Richard Zemel", "Max Welling" ], "title": "Causal effect inference with deep latent-variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chunjie Luo", "Jianfeng Zhan", "Xiaohe Xue", "Lei Wang", "Rui Ren", "Qiang Yang" ], "title": "Cosine normalization: Using cosine similarity instead of dot product in neural networks", "venue": "In International Conference on Artificial Neural Networks,", "year": 2018 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Peter M Robinson" ], "title": "Root-n-consistent semiparametric regression", "venue": "Econometrica: Journal of the Econometric Society, pp", "year": 1988 }, { "authors": [ "Donald B Rubin" ], "title": "Estimating causal effects of treatments in randomized and nonrandomized studies", "venue": "Journal of educational Psychology,", "year": 1974 }, { "authors": [ "Saeed Samet", "Ali Miri", "Eric Granger" ], "title": "Incremental learning of privacy-preserving bayesian networks", "venue": "Applied Soft Computing,", "year": 2013 }, { "authors": [ "Patrick Schwab", "Lorenz Linhardt", "Walter Karlen" ], "title": "Perfect match: A simple method for learning representations for counterfactual inference with neural networks", "venue": "arXiv preprint arXiv:1810.00656,", "year": 2018 }, { "authors": [ "Uri Shalit", "Fredrik D Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jerzy Splawa-Neyman", "Dorota M Dabrowska", "TP Speed" ], "title": "On the application of probability theory to agricultural experiments", "venue": "essay on principles", "year": 1990 }, { "authors": [ "Bharath K Sriperumbudur", "Kenji Fukumizu", "Arthur Gretton", "Bernhard Schölkopf", "Gert RG Lanckriet" ], "title": "On the empirical estimation of integral probability metrics", "venue": "Electronic Journal of Statistics,", "year": 2012 }, { "authors": [ "Stefan Wager", "Susan Athey" ], "title": "Estimation and inference of heterogeneous treatment effects using random forests", "venue": "Journal of the American Statistical Association,", "year": 2017 }, { "authors": [ "Max Welling" ], "title": "Herding dynamical weights to learn", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Liuyi Yao", "Zhixuan Chu", "Sheng Li", "Yaliang Li", "Jing Gao", "Aidong Zhang" ], "title": "A survey on causal inference", "venue": "arXiv 
preprint arXiv:2002.02770,", "year": 2020 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "Ganite: Estimation of individualized treatment effects using generative adversarial nets", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Bin Zhang", "Steve Horvath" ], "title": "A general framework for weighted gene co-expression network analysis", "venue": "Statistical applications in genetics and molecular biology,", "year": 2005 }, { "authors": [ "Junting Zhang", "Jie Zhang", "Shalini Ghosh", "Dawei Li", "Serafettin Tasci", "Larry Heck", "Heming Zhang", "C-C Jay Kuo" ], "title": "Class-incremental learning via deep model consolidation", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Hui Zou", "Trevor Hastie" ], "title": "Regularization and variable selection via the elastic net", "venue": "Journal of the royal statistical society: series B (statistical methodology),", "year": 2005 }, { "authors": [ "Jacob" ], "title": "C, 35 adjustment variables in A, 10 instrumental variables in Z, and 20 irrelevant variables in I", "venue": null, "year": 2020 }, { "authors": [ "Hardin" ], "title": "to simulate positive definite correlation matrices consisting of different types of variables. Our correlation matrices are based on the hub correlation structure which has a known correlation between a hub variable and each of the remaining variables (Zhang", "venue": "Langfelder et al.,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Causal effect inference is a critical research topic across many domains, such as statistics, computer science, public policy, and economics. Randomized controlled trials (RCT) are usually considered as the gold-standard for causal effect inference, which randomly assigns participants into a treatment or control group. As the RCT is conducted, the only expected difference between the treatment and control groups is the outcome variable being studied. However, in reality, randomized controlled trials are always time-consuming and expensive, and thus the study cannot involve many subjects, which may be not representative of the real-world population the intervention would eventually target. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to a large amount of available data and low budget requirements, compared with RCT (Yao et al., 2020). Researchers have developed various strategies for causal effect inference with observational data, such as tree-based methods (Chipman et al., 2010; Wager & Athey, 2018), representation learning methods (Johansson et al., 2016; Li & Fu, 2017; Shalit et al., 2017; Chu et al., 2020), adapting Bayesian algorithms (Alaa & van der Schaar, 2017), generative adversarial nets (Yoon et al., 2018), variational autoencoders (Louizos et al., 2017) and so on.\nAlthough significant advances have been made to overcome the challenges in causal effect estimation with observational data, such as missing counterfactual outcomes and selection bias between treatment and control groups, the existing methods only focus on source-specific and stationary observational data. Such learning strategies assume that all observational data are already available during the training phase and from the only one source. This assumption is unsubstantial in practice due to two reasons. The first one is based on the characteristics of observational data, which are incrementally available from non-stationary data distributions. For instance, the number of electronic medical records in one hospital is growing every day, or the electronic medical records for one disease may be from different hospitals or even different countries. This characteristic implies that one cannot have access to all observational data at one time point and from one single source. The second reason is based on the realistic consideration of accessibility. For example, when the new observational are available, if we want to refine the model previously trained by original data, maybe\nthe original training data are no longer accessible due to a variety of reasons, e.g., legacy data may be unrecorded, proprietary, too large to store, or subject to privacy constraint (Zhang et al., 2020). This practical concern of accessibility is ubiquitous in various academic and industrial applications. That’s what it boiled down to: in the era of big data, we face the new challenges in causal inference with observational data: the extensibility for incrementally available observational data, the adaptability for extra domain adaptation problem except for the imbalance between treatment and control groups in one source, and the accessibility for a huge amount of data.\nExisting causal effect inference methods, however, are unable to deal with the aforementioned new challenges, i.e., extensibility, adaptability, and accessibility. 
Although it is possible to adapt existing causal inference methods to address the new challenges, these adapted methods still have inevitable defects. Three straightforward adaptation strategies are described as follows. (1) If we directly apply the model previously trained based on original data to new observational data, the performance on new task will be very poor due to the domain shift issues among different data sources; (2) If we utilize newly available data to re-train the previously learned model, adapting changes in the data distribution, old knowledge will be completely or partially overwritten by the new one, which can result in severe performance degradation on old tasks. This is the well-known catastrophic forgetting problem (McCloskey & Cohen, 1989; French, 1999); (3) To overcome the catastrophic forgetting problem, we may rely on the storage of old data and combine the old and new data together, and then re-train the model from scratch. However, this strategy is memory inefficient and time-consuming, and it brings practical concerns such as copyright or privacy issues when storing data for a long time (Samet et al., 2013). Our empirical evaluations in Section 4 demonstrate that any of these three strategies in combination with the existing causal effect inference methods is deficient.\nTo address the above issues, we propose a Continual Causal Effect Representation Learning method (CERL) for estimating causal effect with incrementally available observational data. Instead of having access to all previous observational data, we only store a limited subset of feature representations learned from previous data. Combining the selective and balanced representation learning, feature representation distillation, and feature transformation, our method preserves the knowledge learned from previous data and update the knowledge by leveraging new data, so that it can achieve the continual causal effect estimation for new data without compromising the estimation capability for previous data. To summarize, our main contributions include:\n• Our work is the first to introduce the continual lifelong causal effect inference problem for the incrementally available observational data and three corresponding evaluation criteria, i.e., extensibility, adaptability, and accessibility.\n• We propose a new framework for continual lifelong causal effect inference based on deep representation learning and continual learning.\n• Extensive experiments demonstrate the deficiency of existing methods when facing the incrementally available observational data and our model’s outstanding performance." }, { "heading": "2 BACKGROUND AND PROBLEM STATEMENT", "text": "Suppose that the observational data contain n units collected from d different domains and the d-th dataset Dd contains the data {(x, y, t)|x ∈ X, y ∈ Y, t ∈ T} collected from d-th domain, which contains nd units. Let X denote all observed variables, Y denote the outcomes in the observational data, and T is a binary variable. Let D1:d = {D1, D2, ..., Dd} be the set of combination of d dataset, separately collected from d different domains. For d datasets {D1, D2, ..., Dd}, they have the common observed variables but due to the fact that they are collected from different domains, they have different distributions with respect to X , Y , and T in each dataset. Each unit in the observational data received one of two treatments. Let ti denote the treatment assignment for unit i; i = 1, ..., n. 
For binary treatments, ti = 1 is for the treatment group, and ti = 0 for the control group. The outcome for unit i is denoted by yit when treatment t is applied to unit i; that is, y i 1 is the potential outcome of unit i in the treatment group and yi0 is the potential outcome of unit i in the control group. For observational data, only one of the potential outcomes is observed. The observed outcome is called the factual outcome and the remaining unobserved potential outcomes are called counterfactual outcomes.\nIn this paper, we follow the potential outcome framework for estimating treatment effects (Rubin, 1974; Splawa-Neyman et al., 1990). The individual treatment effect (ITE) for unit i is the difference\nbetween the potential treated and control outcomes, and is defined as ITEi = yi1 − yi0. The average treatment effect (ATE) is the difference between the mean potential treated and control outcomes, which is defined as ATE = 1n ∑n i=1(y i 1 − yi0).\nThe success of the potential outcome framework is based on the following assumptions (Imbens & Rubin, 2015), which ensure that the treatment effect can be identified. Stable Unit Treatment Value Assumption (SUTVA): The potential outcomes for any units do not vary with the treatments assigned to other units, and, for each unit, there are no different forms or versions of each treatment level, which lead to different potential outcomes. Consistency: The potential outcome of treatment t is equal to the observed outcome if the actual treatment received is t. Positivity: For any value of x, treatment assignment is not deterministic, i.e.,P (T = t|X = x) > 0, for all t and x. Ignorability: Given covariates, treatment assignment is independent to the potential outcomes, i.e., (y1, y0) ⊥ t|x. The goal of our work is to develop a novel continual causal inference framework, given new available observational data Dd, to estimate the causal effect for newly available data Dd as well as the previous data D1:(d−1) without having access to previous training data in D1:(d−1)." }, { "heading": "3 THE PROPOSED FRAMEWORK", "text": "The availability of “real world evidence” is expected to facilitate the development of causal effect inference models for various academic and industrial applications. How to achieve continual learning from incrementally available observational data from non-stationary data domains is a new direction in causal effect inference. Rather than only focusing on handling the selection bias problem, we also need to take into comprehensive consideration three aspects of the model, i.e., the extensibility for incrementally available observational data, the adaptability for various data sources, and the accessibility for a huge amount of data.\nWe propose the Continual Causal Effect Representation Learning method (CERL) for estimating causal effect with incrementally available observational data. Based on selective and balanced representation learning for treatment effect estimation, CERL incorporates feature representation distillation to preserve the knowledge learned from previous observational data. Besides, aiming at adapting the updated model to original and new data without having access to the original data, and solving the selection bias between treatment and control groups, we propose one representation transformation function, which maps partial original feature representations into new feature representation space and makes the global feature representation space balanced with respect to treatment and control groups. 
Therefore, CERL can achieve the continual causal effect estimation for new data and meanwhile preserve the estimation capability for previous data, without the aid of original data." }, { "heading": "3.1 MODEL ARCHITECTURE", "text": "To estimate the incrementally available observational data, the framework of CERL is mainly composed of two components: (1) the baseline causal effect learning model is only for the first available observational data, and thus we don’t need to consider the domain shift issue among different data sources. This component is equivalent to the traditional causal effect estimation problem; (2) the continual causal effect learning model is for the sequentially available observational data, where we need to handle more complex issues, such as knowledge transfer, catastrophic forgetting, global representation balance, and memory constraint. We present the details of each component as follows." }, { "heading": "3.1.1 THE BASELINE CAUSAL EFFECT LEARNING MODEL", "text": "We first describe the baseline causal effect learning model for the initial observational dataset and then bring in subsequent datasets. For causal effect estimation in the initial dataset, it can be transformed into the traditional causal effect estimation problem. Motivated by the empirical success of deep representation learning for counterfactual inference (Shalit et al., 2017; Chu et al., 2020), we propose to learn the selective and balanced feature representations for treated and control units, and then infer the potential outcomes based on learned representation space.\nLearning Selective and Balanced Representation. Firstly, we adopt a deep feature selection model that enables variable selection in one deep neural network, i.e., gw1 : X → R, where X denotes the original covariate space, R denotes the representation space, and w1 are the learnable parameters in\nfunction g. The elastic net regularization term (Zou & Hastie, 2005) is adopted in our model, i.e., Lw1 = ‖w1‖22 + ‖w1‖1. Elastic net throughout the fully connected representation layers assigns larger weights to important features. This strategy can effectively filter out the irrelevant variables and highlight the important variables.\nDue to the selection bias between treatment and control groups and among the sequential different data sources, the magnitudes of confounders may be significantly different. To effectively eliminate the imbalance caused by the significant difference in magnitudes between treatment and control groups and among different data sources, we propose to use cosine normalization in the last representation layer. In the multi-layer neural networks, we traditionally use dot products between the output vector of the previous layer and the incoming weight vector, and then input the products to the activation function. The result of dot product is unbounded. Cosine normalization uses cosine similarity instead of simple dot products in neural networks, which can bound the pre-activation between −1 and 1. The result could be even smaller when the dimension is high. As a result, the variance can be controlled within a very narrow range (Luo et al., 2018). Cosine normalization is defined as r = σ(rnorm) = σ ( cos(w, x) ) = σ( w·x|w||x| ), where rnorm is the normalized pre-activation, w is the incoming weight vector, x is the input vector, and σ is nonlinear activation function.\nMotivated by Shalit et al. 
(2017), we adopt integral probability metrics (IPM) when learning the representation space to balance the treatment and control groups. The IPM measures the divergence between the representation distributions of treatment and control groups, so we want to minimize the IPM to make two distributions more similar. Let P (g(x)|t = 1) and Q(g(x)|t = 0) denote the empirical distributions of the representation vectors for the treatment and control groups, respectively. We adopt the IPM defined in the family of 1-Lipschitz functions, which leads to IPM being the Wasserstein distance (Sriperumbudur et al., 2012; Shalit et al., 2017). In particular, the IPM term with Wasserstein distance is defined as Wass(P,Q) = infk∈K ∫ g(x) ‖k(g(x)) − g(x)‖P (g(x))d(g(x)), where γ denotes the hyper-parameter controlling the trade-off between Wass(P,Q) and other terms in the final objective function. K = {k|Q(k(g(x))) = P (g(x))} defines the set of push-forward functions that transform the representation distribution of the treatment distribution P to that of the control Q and g(x) ∈ {g(x)i}i:ti=1. Inferring Potential Outcomes. We aim to learn a function hθ1 : R × T → Y that maps the representation vectors and treatment assignment to the corresponding observed outcomes, and it can be parameterized by deep neural networks. To overcome the risk of losing the influence of T on R, hθ1(gw1(x), t) is partitioned into two separate tasks for treatment and control groups, respectively. Each unit is only updated in the task corresponding to its observed treatment. Let ŷi = hθ1(gw1(x), t) denote the inferred observed outcome of unit i corresponding to factual treatment ti. We minimize the mean squared error in predicting factual outcomes: LY = 1n1 ∑n1 i=1(ŷi − yi)2.\nPutting all the above together, the objective function of our baseline causal effect learning model is: L = LY + αWass(P,Q) + λLw1 , where α and λ denote the hyper-parameters controlling the trade-off among Wass(P,Q), Lw, and LY in the objective function." }, { "heading": "3.1.2 THE SUSTAINABILITY OF MODEL LEARNING", "text": "By far, we have built the baseline model for causal effect estimation with observational data from a single source. To avoid catastrophic forgetting when learning new data, we propose to preserve a subset of lower-dimensional feature representations rather than all original covariates. We also can adjust the number of preserved feature representations according to the memory constraint.\nAfter the completion of baseline model training, we store a subset of feature representations R1 = {gw1(x)|x ∈ D1} and the corresponding {Y, T} ∈ D1 as memory M1. The size of stored representation vectors can be reduced to satisfy the pre-specified memory constraint by a herding algorithm (Welling, 2009; Rebuffi et al., 2017). The herding algorithm can create a representative set of samples from distribution and requires fewer samples to achieve a high approximation quality than random subsampling. We run the herding algorithm separately for treatment and control groups to store the same number of feature representations from treatment and control groups. At this point, we only store the memory set M1 and model gw1 , without the original data (D1)." }, { "heading": "3.1.3 THE CONTINUAL CAUSAL EFFECT LEARNING MODEL", "text": "For now, we have stored memory M1 and baseline model. 
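As an illustration of how such a fixed-size memory can be assembled under a budget, below is a minimal sketch of the greedy, iCaRL-style herding selection mentioned in Section 3.1.2 (Welling, 2009; Rebuffi et al., 2017); it would be run once on the treatment-group representations and once on the control-group representations. The concrete mean-matching variant and the function name are assumptions rather than a prescribed implementation.

import torch

def herding_select(reps, m):
    # Greedily choose m rows of `reps` (an [n, d] matrix of feature representations)
    # whose running mean best approximates the mean of all representations.
    n, _ = reps.shape
    mu = reps.mean(dim=0)
    selected = []
    running_sum = torch.zeros_like(mu)
    picked = torch.zeros(n, dtype=torch.bool)
    for k in range(1, min(m, n) + 1):
        candidates = (running_sum.unsqueeze(0) + reps) / k          # mean if each row were added next
        dists = ((mu.unsqueeze(0) - candidates) ** 2).sum(dim=1)    # squared distance to the true mean
        dists[picked] = float("inf")                                 # never pick the same row twice
        idx = int(torch.argmin(dists))
        selected.append(idx)
        picked[idx] = True
        running_sum = running_sum + reps[idx]
    return selected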
To continually estimate the causal effect for incrementally available observational data, we incorporate feature representation distillation and feature representation transformation to estimate causal effect for all seen data based on balanced global feature representation space. The framework of CERL is shown in Fig. 1.\nFeature Representation Distillation. For next available dataset D2 = {(x, y, t)|x ∈ X, y ∈ Y, t ∈ T} collected from second domain, we adopt the same selective representation learning gw2 : X → R2 with elastic net regularization (Lw2 ) on new parameters w2. Because we expect our model can estimate causal effect for both previous and new data, we want the new model to inherit some knowledge from previous model. In continual learning, knowledge distillation (Hinton et al., 2015; Li & Hoiem, 2017) is commonly adopted to alleviate the catastrophic forgetting, where knowledge is transferred from one network to another network by encouraging the outputs of the original and new network to be similar. However, for the continual causal effect estimation problem, we focus more on the feature representations, which are required to be balanced between treatment and control, and among different data domains. Inspired by Hou et al. (2019); Dhar et al. (2019); Iscen et al. (2020), we propose feature representation distillation to encourage the representation vector {gw1(x)|x ∈ D2} based on baseline model to be similar to the representation vector {gw2(x)|x ∈ D2} based on new model by Euclidean distance. This feature distillation can help prevent the learned representations from drifting too much in the new feature representation space. Because we apply the cosine normalization to feature representations and ‖A−B‖2 = (A − B)ᵀ(A − B) = ‖A‖2 + ‖B‖2 − 2AᵀB = 2 ( 1 − cos(A,B) ) , the feature\nrepresentation distillation is defined as LFD(x) = 1− cos ( gw1(x), gw2(x) ) ,where x ∈ D2.\nFeature Representation Transformation. We have previous feature representations R1 stored in M1 and new feature representations R2 extracted from newly available data. R1 and R2 lie in different feature representation space and they are not compatible with each other because they are learned from different models. In addition, we cannot learn the feature representations of previous data from the new model gw2 , as we no longer have access to previous data. Therefore, to balance the global feature representation space including previous and new representations between treatment and control groups, a feature transformation function is needed from previous feature representations R1 to transformed feature representations R̃1 compatible with new feature representations space R2. We define a feature transformation function as φ1→2 : R1 → R̃1. We also input the feature representations of new data D2 learned from old model, i.e., gw1(x), to get the transformed feature representations of new data, i.e., φ1→2(gw1(x)). To keep the transformed space compatible with the new feature representation space, we train the transformation function φ1→2 by making the φ1→2(gw1(x)) and gw2(x) similar, where x ∈ D2. The loss function is defined as LFT (x) = 1 − cos ( φ1→2(gw1(x)), gw2(x) ) , which is used to train the function φ1→2 to transform feature representations between different feature spaces. Then, we can attain the transformed old feature representations R̃1 = φ1→2(R1), which is in the same space as R2.\nBalancing Global Feature Representation Space. 
We have obtained a global feature representation space including the transformed representations of stored old data and new representations of new available data. We adopt the same integral probability metrics as baseline model to make sure that the representation distributions are balanced for treatment and control groups in the global fea-\nture representation space. In addition, we define a potential outcome function hθ2 : (R̃1, R2)×T → Y . Let ŷMi = hθ2 ( φ1→2(ri), t ) , where ri ∈M1, and ŷDj = hθ2 ( gw2(xj), t ) , where xj ∈ D2 denote the inferred observed outcomes. We aim to minimize the mean squared error in predicting factual outcomes for global feature representations including transformed old feature representations and new feature representations: LG = 1ñ1 ∑ñ1 i=1(ŷ M i − yMi )2 + 1n2 ∑n2 j=1(ŷ D j − yDj )2, where ñ1 is the number of units stored in M1 by herding algorithm, yMi ∈M1, and yDj ∈ D2. In summary, the objective function of our continual causal effect learning model is L = LG + αWass(P,Q) + λLw2 + βLFD + δLFT , where α, λ, β, and δ denote the hyper-parameters controlling the trade-off among Wass(P,Q), Lw2 , LFD, LFT , and LG in the final objective function." }, { "heading": "3.2 OVERVIEW OF CERL", "text": "In the above sections, we have provided the baseline and continual causal effect learning models. When the continual causal effect learning model for the second data is trained, we can extract the R2 = {gw2(x)|x ∈ D2} and R̃1 = {φ1→2(r)|r ∈ M1}. We define a new memory set as M2 = {R2, Y2, T2} ∪ φ1→2(M1), where φ1→2(M1) includes R̃1 and the corresponding {Y, T} stored in M1. Similarly, to satisfy the pre-specified memory constraint, M2 can be reduced by conducting the herding algorithm to store the same number of feature representations from treatment and control groups. We only store the new memory set M2 and new model gw2 , which are used to train the following model and balance the global feature representation space. It is unnecessary to store the original data (D1 and D2) any longer.\nWe follow the same procedure for the subsequently available observational data. When we obtain the new observational data Dd, we can train hθd(gwd) and φd−1→d : Rd−1 → R̃d−1 based on the continual causal effect learning model. Besides, the new memory set is defined as: Md = {Rd, Yd, Td} ∪ φd−1→d(Md−1). So far, our model hθd(gwd) can estimate causal effect for all seen observational data regardless of the data source and it doesn’t require access to previous data. The detailed procedures of our CERL method are summarized in Algorithm 1 in Section B of Appendix." }, { "heading": "4 EXPERIMENTS", "text": "We adapt the traditional benchmarks, i.e., News (Johansson et al., 2016; Schwab et al., 2018) and BlogCatalog (Guo et al., 2020) to continual causal effect estimation. Specifically, we consider three scenarios to represent the different degrees of domain shifts among the incrementally available observational data, including the substantial shift, moderate shift, and no shift. Besides, we generate a series of synthetic datasets and also conduct ablation studies to demonstrate the effectiveness of our model on multiple sequential datasets. The model performance with different numbers of preserved feature representations, and the robustness to hyperparameters are also evaluated." 
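Reusing the hypothetical modules from the baseline sketch above, the continual-step objective of Section 3.1.3 can be summarized as follows. Here g_old, g_new, phi, and h_new stand for the previous encoder, the new encoder, the transformation function, and the new outcome head, respectively; sinkhorn_wasserstein is the same illustrative approximation as before, and the elastic-net term on g_new is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def cosine_loss(a, b):
    # 1 - cos(a, b), averaged over the batch; used for both L_FD and L_FT.
    return (1.0 - F.cosine_similarity(a, b, dim=1)).mean()

def continual_step_loss(g_old, g_new, phi, h_new, batch_new, memory,
                        alpha=1.0, beta=1.0, delta=1.0):
    x, y, t = batch_new           # newly available observational data D_d
    r_mem, y_mem, t_mem = memory  # stored old representations and their outcomes

    r_new = F.normalize(g_new(x), dim=1)       # new representations of new data
    with torch.no_grad():
        r_old = F.normalize(g_old(x), dim=1)   # old-model representations of new data

    l_fd = cosine_loss(r_old, r_new)           # feature representation distillation L_FD
    l_ft = cosine_loss(phi(r_old), r_new)      # feature representation transformation L_FT

    r_mem_mapped = phi(r_mem)                  # map stored memory into the new space
    y_hat_mem = h_new(r_mem_mapped, t_mem)     # factual predictions for memory units
    y_hat_new = h_new(r_new, t)                # factual predictions for new units
    l_g = F.mse_loss(y_hat_mem, y_mem) + F.mse_loss(y_hat_new, y)  # global loss L_G

    # Balance treated vs. control over the global representation space.
    r_all = torch.cat([r_mem_mapped, r_new], dim=0)
    t_all = torch.cat([t_mem, t], dim=0)
    ipm = sinkhorn_wasserstein(r_all[t_all == 1], r_all[t_all == 0])

    return l_g + alpha * ipm + beta * l_fd + delta * l_ft
```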
}, { "heading": "4.1 DATASET DESCRIPTION", "text": "We utilize two semi-synthetic benchmarks for the task of continual causal effect estimation, which are based on real-world features, synthesized treatments and outcomes.\nNews. The News dataset consists of 5000 randomly sampled news articles from the NY Times corpus1. It simulates the opinions of media consumers on news items. The units are different news items represented by word counts xi ∈ NV and outcome y(xi) ∈ R is the news item. The intervention t ∈ {0, 1} represents the viewing device, desktop (t = 0) or mobile (t = 1). We extend the original dataset specification in Johansson et al. (2016); Schwab et al. (2018) to enable the simulation of incrementally available observational data with different degrees of domain shifts. Assuming consumers prefer to read certain media items on specific viewing devices, we train a topic model on a large set of documents and define z(x) as the topic distribution of news item x. We define one topic distribution of a randomly sampled document as centroid zc1 for mobile and the average topic representation of all document as centroid zc0 for desktop. Therefore, the reader’s opinion of news item x on device t is determined by the similarity between z(x) and zct , i.e., y(xi) = C(z(x)ᵀzc0 + ti · z(x)ᵀzc1) + , where C = 60 is a scaling factor and ∼ N(0, 1).\n1https://archive.ics.uci.edu/ml/datasets/bag+of+words\nBesides, the intervention t is defined by p(t = 1|x) = e k·z(x)ᵀzc1\nek·z(x) ᵀzc0+ek·z(x) ᵀzc1 , where k = 10 indicates\nan expected selection bias. In the experiments, 50 LDA topics are learned from the training corpus and 3477 bag-of-words features are in the dataset. To generate two sequential datasets with different domain shifts, we combine the news items belonging to LDA topics from 1 to 25 into first dataset and the news items belonging to LDA topics from 26 to 50 into second dataset. There is no overlap of the LDA topics between the first dataset and second dataset, which is considered as substantial domain shift. In addition, the news items belonging to LDA topics from 1 to 35 and items belonging to from 16 to 50 are used to construct the first dataset and second dataset, respectively, which is regarded as moderate domain shift. Finally, randomly sampled items from 50 LDA topics compose the first and second dataset, resulting in no domain shift, because they are from the same distribution. Under each domain shift scenario and each dataset, we randomly sample 60% and 20% of the units as the training set and validation set and let the remaining be the test set.\nBlogCatalog. BlogCatalog (Guo et al., 2020) is a blog directory that manages the bloggers and their blogs. In this semi-synthetic dataset, each unit is a blogger and the features are bag-of-words representations of keywords in bloggers’ descriptions collected from real-world source. We adopt the same settings and assumptions to simulate the treatment options and outcomes as we do for the News dataset. 50 LDA topics are learned from the training corpus. 5196 units and 2160 bag-ofwords features are in the dataset. Similar to the generation procedure of News datasets with domain shifts, we create two datasets for each of the three domain shift scenarios. Under each domain shift scenario and each dataset, we randomly sample 60% and 20% of the units as the training set and validation set and let the remaining be the test set." }, { "heading": "4.2 RESULTS AND ANALYSIS", "text": "Evaluation Metrics. 
We adopt two commonly used evaluation metrics. The first one is the error of ATE estimation, which is defined as ATE = |ATE− ÂTE|, where ATE is the true value and ÂTE is an estimated ATE. The second one is the error of expected precision in estimation of heterogeneous effect (PEHE) Hill (2011), which is defined as PEHE = 1n ∑n i=1(ITEi − ÎTEi)2, where ITEi is the true ITE for unit i and ÎTEi is an estimated ITE for unit i.\nWe employ three strategies to adapt traditional causal effect estimation models to incrementally available observational data: (A) directly apply the model previously trained based on original data to new observational data; (B) utilize newly available data to fine-tune the previously learned model;\n(C) store all previous data and combine with new data to re-train the model from scratch. Among these three strategies, (C) is expected to be the best performer and get the ideal performance with respect to ATE and PEHE, although it needs to take up the most resources (all the data from previous and new dataset). We implement the three strategies based on the counterfactual regression model (CFR) (Shalit et al., 2017), which is a representative causal effect estimation method.\nAs shown in Table 1, under no domain shift scenario, the three strategies and our model have the similar performance on the News and BlogCatalog datasets, because the previous and new data are from the same distribution. CFR-A, CFR-B, and CERL need less resources than CFR-C. Under substantial shift and moderate shift scenarios, we find strategy CFR-A performs well on previous data, but significantly declines on new dataset; strategy CFR-B shows the catastrophic forgetting problem where the performance on previous dataset is poor; strategy CFR-C performs well on both previous and new data, but it re-trains the whole model using both previous and new data. However, if there is a memory constraint or a barrier to accessing previous data, the strategy CFR-C cannot be conducted. Our CERL has a similar performance to strategy CFR-C, while CERL does not require access to previous data. Besides, by comparing the performance under substantial and moderate shift scenarios, the larger domain shift leads to worse performance of CFR-A and CFR-B. However, no matter what the domain shift is, the performance of our model CERL is consistent with the ideal strategy CFR-C." }, { "heading": "4.3 MODEL EVALUATION", "text": "Synthetic Dataset. Our synthetic data include confounders, instrumental, adjustment, and irrelevant variables. The interrelations among these variables, treatments, and outcomes are illustrated in Figure 2. We totally simulate five different data sources with five different multivariate normal distributions to represent the incrementally available observational data. In each data source, we randomly draw 10000 samples including treatment units and control units. Therefore, for five datasets, they have different selection bias, magnitude of covariates, covariance matrices for variables, and number of treatment and control units. To ensure a robust estimation of model performance, for each data source, we repeat the simulation procedure 10 times and obtain 10 synthetic datasets. The details of data simulation are provided in Section A of Appendix.\nResults. Similar to the experiments for News and BlogCatalog benchmarks, we still utilize two sequential datasets to compare our model with CFR under three strategies on the more complex\nsynthetic data. 
As shown in Table 2, the result is consistent with the conclusions on News and BlogCatalog. Our model’s performance demonstrates its superiority over CFR-A and CFR-B. CERL is comparable with CFR-C, while it does not need to have access to the raw data from previous dataset. Besides, we also conduct three ablation studies to test the effectiveness of the important components in CERL, i.e., CERL (w/o FRT), CERL (w/o herding), and CERL (w/o cosine norm). CERL (w/o FRT) is the simplified CERL without the feature representation transformation, which is based on traditional continual learning with knowledge distillation and integral probability metrics. In CERL (w/o FRT), we do not store and transform the previous feature representation into new feature space, and only utilize the knowledge distillation to realize the continual learning task and balance the bias between treatment and control groups with each new data. CERL (w/o herding) adopts random subsampling strategy to select samples into memory, instead of herding algorithm. CERL (w/o cosine norm) removes the cosine normalization in the last representation layer. Table 2 shows that the performance becomes poor after removing anyone in the feature representation transformation, herding, or cosine normalization modules compared to the original CERL. More specifically, after removing the feature representation transformation, √ PEHE and ATE increase dramatically, which demonstrates that the knowledge distillation always used in continual learning task is not enough for the continual causal effect estimation. Also, using herding to select a representative set of samples from treatment and control distributions is crucial for the feature representation transformation.\nCERL Performance Evaluation. As illustrated in Figure 3, the five observational data are incrementally available in sequence, and the model will continue to estimate the causal effect without having access to previous data. We further evaluate the performance of CERL from three perspectives, i.e., the impact of memory constraint, effeteness of cosine normalization, and its robustness to hyper-parameters. As shown in Figure 4 (a) and (b), as the model continually learns a new dataset, every time when finishing training one new dataset, we report the √ PEHE and ATE on test sets composed of previous data and new data. Our model with memory constraints has a similar performance to the ideal situation, where all data are available to train the model from scratch. However, our model can effectively save memory space, e.g., when facing the fifth dataset, our model only stores 1000, 5000, or 10000 feature representations, but the ideal situation needs to store 5 × 10000 = 50000 observations with all covariates. For the cosine normalization, we perform an ablation study of CERL (M=5000, 5 datasets), where we remove cosine normalization in the representation learning procedure. We find the √ PEHE increases from 1.80 and 1.92 and ATE from 0.55 to 0.61. Next, we explore the model’s sensitivity to the most important parameter α and δ, which controls the representation balance and representation transformation. From Fig. 4 (c) and (d), we observe that the performance is stable over a large parameter range. In addition, the parameter β for feature representation distillation is set to 1 (Rebuffi et al., 2017; Iscen et al., 2020)." 
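For reference, the two evaluation metrics used throughout this section can be computed as in the short sketch below; the individual treatment effects in the example are hypothetical values included only to illustrate the calculation.

```python
import numpy as np

def eps_ate(ite_true, ite_pred):
    # Absolute error of the average treatment effect, |ATE - ATE_hat|.
    return np.abs(ite_true.mean() - ite_pred.mean())

def sqrt_pehe(ite_true, ite_pred):
    # Square root of the expected precision in estimating heterogeneous effects.
    return np.sqrt(np.mean((ite_true - ite_pred) ** 2))

# Hypothetical individual treatment effects for five units.
ite_true = np.array([1.2, 0.8, 2.0, 1.5, 0.3])
ite_pred = np.array([1.0, 1.1, 1.8, 1.4, 0.5])
print(eps_ate(ite_true, ite_pred), sqrt_pehe(ite_true, ite_pred))
```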
}, { "heading": "5 CONCLUSION", "text": "It is the first time to propose the continual lifelong causal effect inference problem and the corresponding evaluation criteria. As the real world evidence is becoming more prominent, how to integrate and utilize these powerful data for causal effect estimation becomes a new research challenge. To address this challenge, we propose the Continual Causal Effect Representation Learning method for estimating causal effect with observational data, which are incrementally available from non-stationary data distributions. Extensive experiments demonstrate the superiority of our method over baselines for continual causal effect estimation." }, { "heading": "A SIMULATION PROCEDURE", "text": "Our synthetic data include confounders, instrumental, adjustment, and irrelevant variables. The interrelations among these variables, treatments, and outcomes are illustrated in Figure 2. The number of observed variables in the vector X = (Cᵀ, Zᵀ, Iᵀ, Aᵀ)ᵀ is set to 100, including 35 confounders in C, 35 adjustment variables in A, 10 instrumental variables in Z, and 20 irrelevant variables in I . The model used to generate the continuous outcome variable Y in this simulation is the partially linear regression model, extending the ideas described in Robinson (1988); Jacob et al. (2019); Chu et al. (2020):\nY = τ((Cᵀ, Aᵀ)ᵀ)T + g((Cᵀ, Aᵀ)ᵀ) + , (1)\nwhere are unobserved covariates, which follow a standard normal distribution N(0, 1) and E[ |C,A, T ] = 0. T ind.∼ Bernoulli(e0((Cᵀ, Zᵀ)ᵀ)) and e0((Cᵀ, Zᵀ)ᵀ) is the propensity score, which represents the treatment selection bias based on their own confounders C and instrumental variables Z. Because we aim to simulate multiple data sources {Dd; d = 1, ..., D}, the vector of all observed covariates X = (Cᵀ, Zᵀ, Iᵀ, Aᵀ)ᵀ is sampled from different multivariate normal distribution with mean vector µdC , µ d Z , µ d I , and µ d A and different random positive definite covariance matrices Σd.\nFor each data source, except for the different magnitude of mean vector and structure of covariance matrix, the simulation procedure is the same. Let D be the diagonal matrix with the square roots of the diagonal entries of Σ on its diagonal, i.e., D = √ diag(σ), then the correlation matrix is given as: R = D−1ΣD−1. (2)\nWe use algorithm 3 in Hardin et al. (2013) to simulate positive definite correlation matrices consisting of different types of variables. Our correlation matrices are based on the hub correlation structure which has a known correlation between a hub variable and each of the remaining variables (Zhang & Horvath, 2005; Langfelder et al., 2008). Each variable in one type of variables is correlated to the hub-variable with decreasing strength from specified maximum correlation to minimum correlation, and different types of variables are generated independently or with weaker correlation among variable types. Defining the first variable as the hub, for the ith variable (i = 2, 3, ..., n), the correlation between it and the hub-variable in one type of variables is given as:\nRi,1 = ρmax − ( i− 2 d− 2 )γ (ρmax − ρmin), (3)\nwhere ρmax and ρmin are specified maximum and minimum correlations, and the rate γ controls rate at which correlations decay.\nAfter specifying the relationship between the hub variable and the remaining variables in the same type of variables, we use Toeplitz structure to fill out the remainder of the hub correlation matrix and get the hub-Toeplitz correlation matrix Rtype for other type of variables. 
Here, R is the n × n matrix having the blocksRZ , RC , RA, andRI along the diagonal and zeros at off-diagonal elements. This yields a correlation matrix with nonzero correlations within the same type and zero correlation among other types. The amount of correlations among types which can be added to the positivedefinite correlation matrix R is determined by its smallest eigenvalue.\nThe function τ((Cᵀ, Aᵀ)ᵀ) describes the true treatment effect as a function of the values of adjustment variables A and confounders C; namely τ((Cᵀ, Aᵀ)ᵀ) = (sin ((Cᵀ, Aᵀ)ᵀ × bτ ))2 where bτ represents weights for every covariate in the function, which is generated by uniform(0, 1). The variable treatment effect implies that its strength differs among the units and is therefore conditioned on C and A. The function g((Cᵀ, Aᵀ)ᵀ) can have an influence on outcome regardless of treatment assignment. It is calculated via a trigonometric function to make the covariates nonlinear, which is defined as g((Cᵀ, Aᵀ)ᵀ) = (cos ((Cᵀ, Aᵀ)ᵀ × bg))2. Here, bg represents a weight for each covariate in this function, which is generated by uniform(0, 1). The bias is attributed to unobserved covariates which follow a random normal distribution N(0, 1). The treatment assignment T follows the Bernoulli distribution, i.e., T ind.∼ Bernoulli(e0((Cᵀ, Zᵀ)ᵀ)) with probability\ne0((C ᵀ, Zᵀ)ᵀ) = Φ(a−µ(a)σ(a) ), where e0((C ᵀ, Zᵀ)ᵀ) represents the propensity score, which is the cumulative distribution function for a standard normal random variable based on confounders C and instrumental variables Z, i.e., a = sin ((Cᵀ, Zᵀ)ᵀ × ba), where ba is generated by uniform(0, 1). We totally simulate five different data sources with five different multivariate normal distributions to represent the incrementally available observational data. In each data source, we randomly draw 10000 samples including treatment units and control units. Therefore, for five datasets, they have different selection bias, magnitude of covariates, covariance matrices for variables, and number of treatment and control units. To ensure a robust estimation of model performance, for each data source, we repeat the simulation procedure 10 times and obtain 10 synthetic datasets." }, { "heading": "B ALGORITHM 1", "text": "Algorithm 1 Continual Causal Effect Representation Learning Data: Given d incrementally available observational data from D1 to Dd if {x, y, t} ∈ D1 then\n*** Train baseline causal effect model hθ1(gw1) *** w1, θ1 = OPTIMIZE(LY + αWass(P,Q) + λLw1) R1 = {gw1(x)|x ∈ D1} M1 = HERDING{R1, Y1, T1}\nelse for {x, y, t} ∈ D2, ..., Dd do\n*** Train continual causal effect model hθd(gwd) *** wd, θd, φd−1→d = OPTIMIZE(LG + αWass(P,Q) + λLw2 + βLFD + δLFT ) R̃d−1 = φd−1→d(Rd−1) Rd = {gwd(x)|x ∈ Dd} Md = HERDING ( {Rd, Yd, Td} ∪ {R̃d−1, Yd−1 ∈Md−1, Td−1 ∈Md−1}\n) end\nend" } ]
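As a complement to Algorithm 1, the following is a compact sketch of the outcome and treatment generator described in Appendix A. It keeps the sin^2 / cos^2 effect functions and the probit-style propensity score, but for brevity replaces the hub-Toeplitz covariance structure with independent standard-normal covariates, so it should be read as an illustration of the generating process rather than the exact simulator.

```python
import numpy as np
from scipy.stats import norm

def simulate_source(n=10000, seed=0):
    # One synthetic data source with 35 confounders C, 10 instruments Z,
    # 20 irrelevant variables I, and 35 adjustment variables A.
    rng = np.random.default_rng(seed)
    C = rng.normal(size=(n, 35))
    Z = rng.normal(size=(n, 10))
    I = rng.normal(size=(n, 20))        # irrelevant to treatment and outcome
    A = rng.normal(size=(n, 35))

    b_tau = rng.uniform(0, 1, size=70)  # weights of the treatment-effect function
    b_g = rng.uniform(0, 1, size=70)    # weights of the baseline outcome function
    b_a = rng.uniform(0, 1, size=45)    # weights of the propensity function

    CA = np.hstack([C, A])
    CZ = np.hstack([C, Z])
    tau = np.sin(CA @ b_tau) ** 2       # heterogeneous treatment effect tau(C, A)
    g0 = np.cos(CA @ b_g) ** 2          # outcome component independent of treatment

    a = np.sin(CZ @ b_a)
    e0 = norm.cdf((a - a.mean()) / a.std())  # propensity score e_0(C, Z)
    T = rng.binomial(1, e0)

    Y = tau * T + g0 + rng.normal(size=n)    # partially linear outcome model
    X = np.hstack([C, Z, I, A])
    return X, Y, T
```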
2020
CONTINUAL LIFELONG CAUSAL EFFECT INFERENCE
SP:864d98472c237daf2b227692c4765af9a89886cd
[ "In this paper, the authors study the problem of GCN for disassortative graphs. The authors proposed the GNAN method to allow attention on distant nodes indeed of limiting to local neighbors. The authors generalized the idea of graph wavelet with MLP to generate the attention score and utilized it to generate multiple attention heads. The authors carried out experiments on several real-world networks (4 assortative and 3 disassortative) with comparison to several state-of-art GCN methods." ]
Graph neural networks (GNNs) have been extensively studied for prediction tasks on graphs. Most GNNs assume local homophily, i.e., strong similarities in local neighborhoods. This assumption limits the generalizability of GNNs, which has been demonstrated by recent work on disassortative graphs with weak local homophily. In this paper, we argue that GNN’s feature aggregation scheme can be made flexible and adaptive to data without the assumption of local homophily. To demonstrate, we propose a GNN model with a global self-attention mechanism defined using learnable spectral filters, which can attend to any node, regardless of distance. We evaluated the proposed model on node classification tasks over seven benchmark datasets. The proposed model has been shown to generalize well to both assortative and disassortative graphs. Further, it outperforms all state-of-the-art baselines on disassortative graphs and performs comparably with them on assortative graphs.
[]
[ { "authors": [ "Filippo Maria Bianchi", "Daniele Grattarola", "Lorenzo Livi", "Cesare Alippi" ], "title": "Graph neural networks with convolutional ARMA filters", "venue": null, "year": 1901 }, { "authors": [ "Heng Chang", "Yu Rong", "Tingyang Xu", "Wenbing Huang", "Somayeh Sojoudi", "Junzhou Huang", "Wenwu Zhu" ], "title": "Spectral graph attention", "venue": null, "year": 2003 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: Fast learning with graph convolutional networks via importance sampling", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Claire Donnat", "Marinka Zitnik", "David Hallac", "Jure Leskovec" ], "title": "Learning structural node embeddings via diffusion wavelets", "venue": "In Proceedings of the 24th ACM International Conference of Knowledge Discovery & Data Mining (KDD),", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen" ], "title": "Fast graph representation learning with pytorch geometric", "venue": null, "year": 1903 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data", "year": 2016 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "David K. Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Wen-bing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In Proceedings of the 5th International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In Proceedings of the 7th International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Johannes Klicpera", "Stefan Weißenberger", "Stephan Günnemann" ], "title": "Diffusion improves graph learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "John Boaz Lee", "Ryan A. 
Rossi", "Xiangnan Kong" ], "title": "Graph classification using structural attention", "venue": "In Proceedings of the 24th ACM International Conference on Knowledge Discovery & Data Mining (KDD),", "year": 2018 }, { "authors": [ "Omer Levy", "Yoav Goldberg" ], "title": "Neural word embedding as implicit matrix factorization", "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semisupervised learning", "venue": "In Proceedings of the 32nd Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Meng Liu", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Non-local graph neural networks", "venue": "CoRR, abs/2005.14612,", "year": 2020 }, { "authors": [ "Paul Michel", "Omer Levy", "Graham Neubig" ], "title": "Are sixteen heads really better than one", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "B.A. Miller", "M.S. Beard", "N.T. Bliss" ], "title": "Matched filtering for subgraph detection in dynamic networks", "venue": "IEEE Statistical Signal Processing Workshop (SSP),", "year": 2011 }, { "authors": [ "Edoardo Di Napoli", "Eric Polizzi", "Yousef Saad" ], "title": "Efficient estimation of eigenvalue counts in an interval", "venue": "Numer. Linear Algebra Appl.,", "year": 2016 }, { "authors": [ "Maximilian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hongbin Pei", "Bingzhe Wei", "Kevin Chen-Chuan Chang", "Yu Lei", "Bo Yang" ], "title": "Geom-gcn: Geometric graph convolutional networks", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: online learning of social representations", "venue": "The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14,", "year": 2014 }, { "authors": [ "Jiezhong Qiu", "Yuxiao Dong", "Hao Ma", "Jian Li", "Kuansan Wang", "Jie Tang" ], "title": "Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec", "venue": "Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining,", "year": 2018 }, { "authors": [ "Leonardo F.R. Ribeiro", "Pedro H.P. Saverese", "Daniel R. Figueiredo" ], "title": "Struc2vec: Learning node representations from structural identity", "venue": "In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Benedek Rozemberczki", "Carl Allen", "Rik Sarkar" ], "title": "Multi-scale attributed node embedding", "venue": "CoRR, abs/1909.13021,", "year": 2019 }, { "authors": [ "Akie Sakiyama", "Kana Watanabe", "Yuichi Tanaka" ], "title": "Spectral Graph Wavelets and Filter Banks with Low Approximation Error", "venue": "IEEE Transactions on Signal and Information Processing over Networks,", "year": 2016 }, { "authors": [ "Michael T. 
Schaub", "Santiago Segarra" ], "title": "Flow smoothing and denoising: Graph signal processing in the edgespace", "venue": "IEEE Global Conference on Signal and Information Processing (GlobalSIP),", "year": 2018 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Gallagher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI Magazine,", "year": 2008 }, { "authors": [ "David I. Shuman", "Sunil K. Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "venue": "IEEE Signal Process. Mag.,", "year": 2013 }, { "authors": [ "Jian Tang", "Meng Qu", "Qiaozhu Mei" ], "title": "PTE: predictive text embedding through large-scale heterogeneous text networks", "venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data", "year": 2015 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "LINE: large-scale information network embedding", "venue": "Proceedings of the 24th International Conference on World Wide Web, WWW 2015, Florence,", "year": 2015 }, { "authors": [ "J.B. Tenenbaum", "V. De Silva", "J.C. Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": "Science,", "year": 2000 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In Proceedings of the 6th International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "H. Wai", "S. Segarra", "A.E. Ozdaglar", "A. Scaglione", "A. Jadbabaie" ], "title": "Community detection from low-rank excitations of a graph filter", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Felix Wu", "Amauri H. Souza Jr.", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Q. 
Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Bingbing Xu", "Huawei Shen", "Qi Cao", "Yunqi Qiu", "Xueqi Cheng" ], "title": "Graph wavelet neural network", "venue": "In Proceedings of the 7th International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Kai Zhang", "Yaokang Zhu", "Jun Wang", "Jie Zhang" ], "title": "Adaptive structural fingerprints for graph attention networks", "venue": "In Proceedings of the 8th International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Muhan Zhang", "Yixin Chen" ], "title": "Link prediction based on graph neural networks", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Lingxiao Zhao", "Leman Akoglu" ], "title": "Pairnorm: Tackling oversmoothing in gnns", "venue": "In Proceedings of the 8th International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Hammond" ], "title": "Under review as a conference paper at ICLR 2021 APPENDIX A GRAPH SPECTRAL FILTERING WITHOUT EIGEN-DECOMPOSITION Chebyshev polynomials approximation has been the de facto approximation method for avoiding eigendecomposition in spectral graph filters", "venue": null, "year": 2011 }, { "authors": [ "Sakiyama" ], "title": "We hereby use it to approximate Equation 3. In fact, other approximation methods can also be used for the purpose, such as the Jackson-Chebychev polynomials (Napoli et al., 2016) but we will leave it for future study. Briefly, in Chebyshev polynomial approximation, the graph signal filtered by a filter g(L) is approximated as g̃(L), and represented as a sum of recursive polynomials (Sakiyama", "venue": null, "year": 2019 }, { "authors": [ "Qiu" ], "title": "2015a), and node2vec (Grover & Leskovec, 2016), are essentially factorizing implicit matrices closely related to the normalized graph Laplacian. The implicit matrices can be presented as graph wavelet transforms on the graph Laplacian. For simplicity, we hereby use DeepWalk, a generalized form of LINE and PTE", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph neural networks (GNNs) have recently demonstrated great power in graph-related learning tasks, such as node classification (Kipf & Welling, 2017), link prediction (Zhang & Chen, 2018) and graph classification (Lee et al., 2018). Most GNNs follow a message-passing architecture where, in each GNN layer, a node aggregates information from its direct neighbors indifferently. In this architecture, information from long-distance nodes is propagated and aggregated by stacking multiple GNN layers together (Kipf & Welling, 2017; Velickovic et al., 2018; Defferrard et al., 2016). However, this architecture underlies the assumption of local homophily, i.e. proximity of similar nodes. While this assumption seems reasonable and helps to achieve good prediction results on graphs with strong local homophily, such as citation networks and community networks (Pei et al., 2020), it limits GNNs’ generalizability. Particularly, determining whether a graph has strong local homophily or not is a challenge by itself. Furthermore, strong and weak local homophily can both exhibit in different parts of a graph, which makes a learning task more challenging.\nPei et al. (2020) proposed a metric to measure local node homophily based on how many neighbors of a node are from the same class. Using this metric, they categorized graphs as assortative (strong local homophily) or disassortative (weak local homophily), and showed that classical GNNs such as GCN (Kipf & Welling, 2017) and GAT (Velickovic et al., 2018) perform poorly on disassortative graphs. Liu et al. (2020) further showed that GCN and GAT are outperformed by a simple multilayer perceptron (MLP) in node classification tasks on disassortative graphs. This is because the naive local aggregation of homophilic models brings in more noise than useful information for such graphs. These findings indicate that these GNN models perform sub-optimally when the fundamental assumption of local homophily does not hold.\nBased on the above observation, we argue that a well-generalized GNN should perform well on graphs, regardless of their local homophily. Furthermore, since a real-world graph can exhibit both strong and weak homophily in different node neighborhoods, a powerful GNN model should be able to aggregate node features using different strategies accordingly. For instance, in disassortative graphs where a node shares no similarity with any of its direct neighbors, such a GNN model should be able to ignore direct neighbors and reach farther to find similar nodes, or at least, resort to the node’s attributes to make a prediction. Since the validity of the assumption about local homophily is often unknown, such aggregation strategies should be learned from data rather than decided upfront.\nTo circumvent this issue, in this paper, we propose a novel GNN model with global self-attention mechanism, called GNAN. Most existing attention-based aggregation architectures perform selfattention to the local neighborhood of a node (Velickovic et al., 2018), which may add local noises in aggregation. Unlike these works, we aim to design an aggregation method that can gather informative features from both close and far-distant nodes. To achieve this, we employ graph wavelets under a relaxed condition of localization, which enables us to learn attention weights for nodes in the spectral domain. 
In doing so, the model can effectively capture not only local information but also global structure into node representations.\nTo further improve the generalizability of our model, instead of using predefined spectral kernels, we propose to use multi-layer perceptrons (MLP) to learn the desired spectral filters without limiting their shapes. Existing works on graph wavelet transform choose wavelet filters heuristically, such as heat kernel, wave kernel and personalized page rank kernel (Klicpera et al., 2019b; Xu et al., 2019; Klicpera et al., 2019a). They are mostly low-pass filters, which means that these models implicitly treat high-frequency components as “noises” and have them discarded (Shuman et al., 2013; Hammond et al., 2011; Chang et al., 2020). However, this may hinder the generalizability of models since high-frequency components can carry meaningful information about local discontinuities, as analyzed in (Shuman et al., 2013). Our model overcomes these limitations by incorporating fully learnable spectral filters into the proposed global self-attention mechanism.\nFrom a computational perspective, learning global self-attention may impose high computational overhead, particularly when graphs are large. We alleviate this problem from two aspects. First, we sparsify nodes according to their wavelet coefficients, which enables attention weights to be distributed across the graph sparsely. Second, we observed that spectral filters learned by different MLPs tend to converge to be of similar shapes. Thus, we use a single MLP to reduce redundancy among filters, where each dimension in the output corresponds to one learnable spectral filter. In addition to these, following (Xu et al., 2019; Klicpera et al., 2019b), we use a fast algorithm to efficiently approximate graph wavelet transform, which has computational complexity O(p× |E|), where p is the order of Chebyshev polynomials and |E| is the number of edges in a graph. To summarize, the main contributions of this work are as follows:\n1. We propose a generalized GNN model which performs well on both assortative and disassortative graphs, regardless of local node homophily.\n2. We exhibit that GNN’s aggregation strategy can be trained via a fully learnable spectral filter, thereby enabling feature aggregation from both close and far nodes.\n3. We show that, unlike commonly understood, higher-frequency on disassortative graphs provides meaningful information that helps improving prediction performance.\nWe conduct extensive experiments to compare GNAN with well-known baselines on node classification tasks. The experimental results show that GNAN significantly outperforms the state-of-the-art methods on disassortative graphs where local node homophily is weak, and performs comparably with the state-of-the-art methods on assortative graphs where local node homophily is strong. This empirically verifies that GNAN is a general model for learning on different types of graphs." }, { "heading": "2 PRELIMINARIES", "text": "Let G = (V,E,A,x) be an undirected graph with N nodes, where V , E, and A are the node set, edge set, and adjacency matrix of G, respectively, and x : V 7→ Rm is a graph signal function that associates each node with a feature vector. The normalized Laplacian matrix of G is defined as L = I −D−1/2AD−1/2, whereD ∈ RN×N is the diagonal degree matrix of G. 
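For concreteness, the normalized Laplacian defined above can be constructed as in the following short sketch; the four-node path graph is an arbitrary toy example.

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2} for an undirected graph with adjacency matrix A.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

# Toy example: a path graph on four nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
```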
In spectral graph theory, the eigenvalues Λ = diag(λ1, ..., λN ) and eigenvectors U of L = UΛUH are known as the graph’s spectrum and spectral basis, respectively, where UH is the Hermitian transpose of U . The graph Fourier transform of x is x̂ = UHx and its inverse is x = Ux̂.\nThe spectrum and spectral basis carry important information on the connectivity of a graph (Shuman et al., 2013). Intuitively, lower frequencies correspond to global and smooth information on the graph, while higher frequencies correspond to local information, discontinuities and possible noise (Shuman et al., 2013). One can apply a spectral filter g as in Equation 1 and use graph Fourier transform to manipulate signals on a graph in various ways, such as smoothing and denoising (Schaub &\nSegarra, 2018), abnormally detection (Miller et al., 2011) and clustering (Wai et al., 2018). Spectral convolutions on graphs is defined as the multiplication of a signal x with a filter g(Λ) in the Fourier domain, i.e. g(L)x = g(UΛUH)x = Ug(Λ)UHx = Ug(Λ)x̂. (1) When a spectral filter is parameterized by a scale factor, which controls the radius of neighbourhood aggregation, Equation 1 is also known as the Spectral Graph Wavelet Transform (SGWT) (Hammond et al., 2011; Shuman et al., 2013). For example, Xu et al. (2019) uses a small scale parameter s < 2 for a heat kernel, g(sλ) = e−λs, to localize the wavelet at a node." }, { "heading": "3 PROPOSED APPROACH", "text": "Graph neural networks (GNNs) learn lower-dimensional embeddings of nodes from graph structured data. In general, given a node, GNNs iteratively aggregate information from its neighbor nodes, and then combine the aggregated information with its own information. An embedding of node v at the kth layer of GNN is typically formulated as\nmv = aggregate({h(k−1)u |u ∈ Nv}) h(k)v = combine(h (k−1) v ,mv),\nwhereNv is the set of neighbor nodes of node v,mv is the aggregated information from the neighbors, and h(k)v is the embedding of the node v at the kth layer (h (0) v = xv). The embedding hnv of the node v at the final layer is then used for some prediction tasks. In most GNNs, Nv is restricted to a set of one-hop neighbors of node v. Therefore, one needs to stack multiple aggregation layers in order to collect the information from more than one-hop neighborhood within this architecture.\nAdaptive spectral filters. Instead of stacking multiple aggregation layers, we introduce a spectral attention layer that rewires a graph based on spectral graph wavelets. A spectral graph wavelet ψv at node v is a modulation in the spectral domain of signals centered around the node v, given by an N -dimensional vector\nψv = Ug(Λ)U Hδv, (2)\nwhere g(·) is a spectral filter and δv is a one-hot vector for node v. The common choice of a spectral filter is heat kernel. A wavelet coefficient ψvu computed from a heat kernel can be interpreted as the amount of energy that node v has received from node u in its local neighborhood. In this work, instead of using pre-defined localized kernels, we use multilayer perceptrons (MLP) to learn spectral filters. With learnable spectral kernels, we obtain wavelet coefficients\nψv = Udiag(MLP(Λ))UHδv. (3)\nSimilar to that of a heat kernel, the wavelet coefficient with a learnable spectral filter ψvu can be understood as the amount of energy that is distributed from node v to node u, under the conditions regulated by the spectral filter. 
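A minimal PyTorch-style sketch of Equation 3, using an explicit eigendecomposition for clarity, is given below; the hidden width of the MLP and the class name LearnableSpectralFilter are illustrative assumptions, and the model itself avoids the eigendecomposition via the Chebyshev approximation discussed next.

```python
import torch
import torch.nn as nn

class LearnableSpectralFilter(nn.Module):
    # Computes Psi = U diag(MLP(lambda)) U^T, so that column v of Psi equals the
    # wavelet coefficients psi_v of node v under the learned spectral filter.
    def __init__(self, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, L):
        lam, U = torch.linalg.eigh(L)                # spectrum and spectral basis
        g = self.mlp(lam.unsqueeze(-1)).squeeze(-1)  # learned filter response g(lambda)
        return U @ torch.diag(g) @ U.t()

# Usage: Psi = LearnableSpectralFilter()(L) for an N x N normalized Laplacian L;
# entry Psi[u, v] is the energy node v distributes to node u under the learned filter.
```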
Note that we use the terminology wavelet and spectral filter interchangeably as we have relaxed the wavelet definition from (Hammond et al., 2011) so that learnable spectral filters in our work are not necessarily localized in the spectral and spatial domains. Equation 3 requires the eigen-decomposition of a Laplacian matrix, which is expensive and infeasible for large graphs. We follow Xu et al. (2019); Klicpera et al. (2019b) to approximate graph wavelet transform using Chebyshev polynomials (Shuman et al., 2013) (see Appendix A for details).\nGlobal self-attention. Unlike the previous work (Xu et al., 2019) where wavelet coefficients are directly used to compute node embeddings, we normalize wavelet coefficients through a softmax layer\nav = softmax(ψv),\nwhere av ∈ RN is an attention weight vector. With attention weights, an update layer is then formalized as\nh(k)v = σ ( N∑ u=1 avuh (k−1) u W (k) ) , (4)\nwhere W (k) is a weight matrix shared across all nodes in the kth layer and σ is ELU nonlinear activation. Unlike heat kernel, the wavelet coefficient with a learnable spectral kernel is not localized. Hence, our work can actively aggregate information from far-distant nodes. Note that the update layer is not divided into aggregation and combine steps in our work. Instead, we compute the attention avv directly from a spectral filter.\nSparsified node attentions. With predefined localized spectral filters such as a heat kernel, most of wavelet coefficients are zero due to their locality. In our work, spectral filters are fully learned from data, and consequently attention weights obtained from learnable spectral filters do not impose any sparsity. This means that to perform an aggregation operation we need to retrieve all possible nodes in a graph, which is time consuming with large graphs. From our experiments, we observe that most of attention weights are negligible after softmax. Thus, we consider two sparsification techniques:\n1. Discard the entries of wavelet bases that are below a threshold t, i.e.\nψ̄vu = { ψvu if ψvu > t −∞ otherwise. (5)\nThe threshold t can be easily applied on all entries of wavelet bases. However, it offers little guarantee on attention sparsity since attention weights may vary, depending on the learning process of spectral filters and the characteristics of different datasets, as will be further discussed in Section 4.2.\n2. Keep only the largest k entries of wavelet bases for each node, i.e.\nψ̄vu = { ψvu if ψvu ∈ topK({ψv0, ..., ψvN}, k) −∞ otherwise, (6)\nwhere topK is a partial sorting function that returns the largest k entries from a set of wavelet bases {ψv0, ..., ψvN}. This technique guarantees attention sparsity such that the embedding of each node can be aggregated from at most k other nodes. However, it takes more computational overhead to sort entries since topK has a time complexity of O(N +k logN).\nThe resulting ψ̄ from either of the above techniques is then fed into the softmax layer to compute attention weights. The experiments for comparing these techniques will be discussed in Section 4.2.\nWe adopt multi-head attention to model multiple spectral filters. Each attention head aggregates node information with a different spectral filter, and the aggregated embedding is concatenated before being sent to the next layer. We can allocate an independent MLP for each of attention heads; however, we found independent MLPs tend to learn spectral filters of similar shapes. 
Hence, we adopt a single MLP: RN → RN×M , where M is the number of attention heads, and each column of the output corresponds to one adaptive spectral filter.\nWe name the multi-head spectral attention architecture as a global node attention network (GNAN). The design of GNAN is easily generalizable, and many existing GNNs can be expressed as special cases of GNAN (see Appendix D). Figure 1 illustrates how GNAN works with two attention heads learned from the CITESEER dataset. As shown in the illustration, the MLP learns adaptive filters such as low-band pass and high-band pass filters. A low-band pass filter assigns high attention weights in local neighborhoods, while a high-band pass filter assigns high attention weights on far-distant nodes, which cannot be captured by a one-hop aggregation scheme." }, { "heading": "4 EXPERIMENTS", "text": "To evaluate the performance of our proposed model, we conduct experiments on node classification tasks with assortative graph datasets, where the labels of nodes exhibit strong homophily, and disassortative graph datasets, where the local homophily is weak and labels of nodes represent their structural roles. To quantify the assortativeness of graphs, we use the metric β introduced by Pei et al. (2020),\nβ = 1 |V | ∑ v∈V βv and βv = |{u ∈ Nv|`(u) = `(v)}| |Nv| , (7)\nwhere `(v) refers to the label of node v. β measures the homophily of a graph, and βv measures the homophily of node v in the graph. A graph has strong local homophily if β is large and vice versa." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Baseline methods. We evaluate two variants of GNAN which only differ in the method used for sparsification: one adopts Equation 5 called GNAN-T, and the other adopts Equation 6 called GNAN-K. We compare both variants against 11 benchmark methods: vanilla GCN (Kipf & Welling, 2017) and its simplified version SGC (Wu et al., 2019); two spectral methods, one using the Chebyshev polynomial spectral filters (Defferrard et al., 2016) and the other using the auto-regressive moving average (ARMA) filters (Bianchi et al., 2019); the graph attention model GAT (Velickovic et al., 2018); APPNP which allows adaptive neighbourhood aggregation using personalized page rank (Klicpera et al., 2019a); three sampling-based approaches, GraphSage (Hamilton et al., 2017), FastGCN Chen et al. (2018) and ASGCN (Huang et al., 2018); and Geom-GCN which targets prediction on disassortative graphs (Pei et al., 2020). We also include MLP in the baselines since it performs better than many GNN-based methods on disassortative graphs (Liu et al., 2020).\nDatasets. We evaluate our model and the baseline methods on node classification tasks over three citation networks: CORA, CITESEER and PUBMED (Sen et al., 2008), three webgraphs from the WebKB dataset1: WISCONSIN, TEXAS and CORNELL, and another webgraph from Wikipedia called CHAMELEON (Rozemberczki et al., 2019). We divide these datasets into two groups, assortative and disassortative, based on their β. The details of these datasets are summarized in Table 1.\nHyper-parameter settings. For the citation networks, we follow the experimental setup for node classification from (Hamilton et al., 2017; Huang et al., 2018; Chen et al., 2018) and report the results averaged on 10 runs. For the webgraphs, we run each model on the 10 splits provided by (Pei et al., 2020) and take the average, where each split uses 60%, 20%, and 20% nodes of each class for training, validation and testing, respectively. 
The results we report on GCN and GAT are better than Pei et al. (2020) due to converting the graphs to undirected before training 2. Geom-GCN uses node embeddings pre-trained from different methods such as Isomap (Tenenbaum et al., 2000), Poincare (Nickel & Kiela, 2017) and struc2vec (Ribeiro et al., 2017). We hereby report the best micro-F1 results among all variants for Geom-GCN.\nWe use the best-performing hyperparameters specified in the original papers of baseline methods. For hyperparameters not specified in the original papers, we use the parameters from (Fey & Lenssen, 2019). We report the test accuracy results from epochs with the smallest validation loss and highest validation accuracy. Early termination is adopted for both validation loss and accuracy, and the training is thus stopped when neither of validation loss and accuracy improve for 100 consecutive epochs. We use a two-layer GNAN where multi-head’s filters are learned using a MLP of 2 hidden layers and then approximated by Chebyshev polynomials. Each layer of the MLP consists of a linear function and a ReLU activation. To avoid overfitting, dropout is applied in each GNAN layer on both attention weights and inputs equally." }, { "heading": "4.2 RESULTS AND DISCUSSION", "text": "We use two evaluation metrics to evaluate the performance of node classification tasks: micro-F1 and macro-F1. The results with micro-F1 are summarized in Table 2, and the results with macro-F1 are provided in Table 3 in the appendix. Overall, on assortative citation networks, GNAN performs comparably with the state-of-the-art methods, ranking first on PUBMED and second on CORA and CITESEER in terms of micro-F1 scores. On disassortative graphs, GNAN outperforms all the stateof-the-art methods by a margin of at least 2.4% and MLP by a margin of at least 1.3%. These results indicate that GNAN can learn spectral filters adaptively based on different characteristics of graphs.\nAlthough our model GNAN performs well on both assortative and disassortative graphs, it is unclear how GNAN performs on disassortative nodes whose neighbors are mostly of different classes in an assortative graph. Thus, we report an average classification accuracy on disassortative nodes at different levels of βv in Figure 2 for the assortative graph datasets CITESEER and PUBMED. The\n1http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb/ 2https://openreview.net/forum?id=S1e2agrFvS\nnodes are binned into five groups based on βv. For example, all nodes with 0.3 < βv ≤ 0.4 belong to the bin at 0.4. We have excluded CORA from the report since it has very few nodes with low βv .\nThe results in Figure 2 show that all GNNs based on local aggregation schemes perform poorly when βv is low. One may argue that the performance on disassortative graphs might improve by stacking multiple GNN layers together to obtain information from far-distant nodes. However, it turns out that this approach introduces an oversmoothing problem in local aggregation-based GNNs (Li et al., 2018). On the other hand, GNAN outperforms the other GNN-based methods on disassortative nodes, suggesting that adaptive spectral filters reduce local noise in aggregation while allowing far-distant nodes to be attended to.\nAttention sparsification. The two variants of GNAN use slightly different sparsification techniques to speed up computation. For each node v, GNAN-T uses a threshold t to eliminate low ψvu (Equation 5), thereby sparsifying the resulting attention matrix. 
However, t cannot control the level of sparsification precisely. In comparison, GNAN-K keeps the k largest φvu (Equation 6); therefore it guarantees a certain level of sparsification. Nonetheless, GNAN-K requires a partial sorting which adds an overhead of O(n+ k logN). To further analyze the impact of attention sparsity on runtime, we plot the density of an attention matrix with respect to both k (Figure 3.a and 3.c) and t (Figure 3.b and 3.d). The results are drawn from two datasets: the disassortative dataset CHAMELEON and the assortative dataset CORA. As expected, GNAN-K shows a stable growth in the attention density as the value of k increases. GNAN-T, on the other hand, demonstrates fluctuation in density with t and reaches the lowest density at t = 1e− 9 and t = 1e− 6 for CORAand CHAMELEON, respectively. We observe that the attention weights tend to converge to similar small values on all nodes when t goes beyond 0.001 in both datasets. To study how efficiency is improved via sparsification, we also plot the training time averaged over 500 epochs in Figure 3. It shows that the model GNAN runs much faster when attention weights are well-sparsified. In our experiments, we find the best results are achieved on k < 20 for GNAN-K and t < 1e− 5 for GNAN-T. Thus, the model GNAN not only runs faster, but also performs better when attention weights are well-sparsified.\nFrequency range ablation. To understand how adaptive spectral filters contribute to GNAN’s performance on disassortative graphs, we conduct an ablation study on spectral frequency ranges. We first divide the entire frequency range (0 ∼ 2) into a set of predefined sub-ranges exclusively, and then manually set the filter frequency responses to zero for each sub-range at a time in order to check the impact of each sub-range on the performance of classification. By doing so, the frequencies within a selected sub-range do not contribute to neither node attention nor feature aggregation, therefore helping to reveal the importance of the sub-range. We consider three different lengths of sub-ranges, i.e., step=1.0, step=0.5, and step=0.25. The results of frequency ablation on the three assortative graphs are summarized in Figure 4. The results for step=1.0 reveal the importance of high-frequency range (1 ∼ 2) on node classification of disassortative graphs. The performances are significantly dropped by ablating high-frequency range on all datasets. Further investigation at the finer-level sub-ranges (step=0.5) shows that sub-range 0.5 ∼ 1.5 has the most negative impact on performance, whereas the most important sub-range varies across different datasets at the finest level (step=0.25). This finding matches our intuition that low-pass filters used in GNNs underlie the local node homophily assumption in a similar way as naive local aggregation. We suspect the choice of\nlow-pass filters also relates to oversmoothing issues in spectral methods (Li et al., 2018), but we leave it for future work.\nAttention head ablation. In GNAN, each head uses a spectral filter to produce attention weights. To delve the importance of a spectral filter, we further follow the ablation method used by (Michel et al., 2019). Specifically, we ablate one or more filters by manually setting their attention weights to zeros. We then measure the impact on performance using micro-F1. If the ablation results in a large decrease in performance, the ablated filters are considered important. 
We observe that all attention heads (spectral filters) in GNAN are of similar importance, and only all attention heads combined produce the best performance. Please check Appendix C for the detailed results." }, { "heading": "5 RELATED WORK", "text": "Graph neural networks have been extensively studied recently. We categorize work relevant to ours into three perspectives and summarize the key ideas.\nAttention on graphs. Graph attention networks (GAT) (Velickovic et al., 2018) was the first to introduce attention mechanisms on graphs. GAT assigns different importance scores to local neighbors via attention mechanism. Similar to other GNN variants, long-distance information propagation in GAT is realized by stacking multiple layers together. Therefore, GAT suffers from the oversmoothing issue (Zhao & Akoglu, 2020). Zhang et al. (2020) improve GAT by incorporating both structural and feature similarities while computing attention scores.\nSpectral graph filters and wavelets. Some GNNs also use graph wavelets to extract information from graphs. Xu et al. (2019) applied graph wavelet transform defined by Shuman et al. (2013) in GNNs. Klicpera et al. (2019b) proposed a general GNN argumentation using graph diffusion kernels to rewire the nodes. Donnat et al. (2018) used heat wavelet to learn node embeddings in unsupervised ways and showed that the learned embeddings closely capture structural similarities between nodes. Other spectral filters used in GNNs can also be viewed as special forms of graph wavelets (Kipf & Welling, 2017; Defferrard et al., 2016; Bianchi et al., 2019). Coincidentally, Chang et al. (2020) also noticed useful information carried by high-frequency components from a graph Laplacian. Similarly, they attempted to utilize such components using node attentions. However, they resorted to the traditional choice of heat kernels and applied such kernels separately to low-frequency\nand high-frequency components divided by a hyperparameter. In addition to this, their work did not link high-frequency components to disassortative graphs.\nPrediction on disassortative graphs. Pei et al. (2020) have drawn attention to GCN and GAT’s poor performance on disassortative graphs very recently. They tried to address the issue by essentially pivoting feature aggregation to structural neighborhoods from a continuous latent space learned by unsupervised methods. Another attempt to address the issue was proposed by Liu et al. (2020). They proposed to sort locally aggregated node embeddings along a one-dimensional space and used a one-dimensional convolution layer to aggregate embeddings a second time. By doing so, non-local but similar nodes can be attended to.\nAlthough our method shares some similarities in motivation with the aforementioned work, it is fundamentally different in several aspects. To the best of our knowledge, our method is the first to learn spectral filters as part of supervised training on graphs. It is also the first architecture we know that computes node attention weights purely from learned spectral filters. As a result, in contrast to commonly used heat kernel, our method utilizes high-frequency components of a graph, which helps prediction on disassortative graphs." }, { "heading": "6 CONCLUSION", "text": "In this paper, we study the node classification tasks on graphs where local node homophily is weak. We argue the assumption of local homophily is the cause of poor performance on disassortative graphs. 
In order to design more generalizable GNNs, we suggest that a more flexible and adaptive feature aggregation scheme is needed. To demonstrate, we have introduced the global node attention network (GNAN) which achieves flexible feature aggregation using learnable spectral graph filters. By utilizing the full graph spectrum adaptively via the learned filters, GNAN is able to aggregate features from nodes that are close and far. For node classification tasks, GNAN outperforms all benchmarks on disassortative graphs, and performs comparably on assortative graphs. On assortative graphs, GNAN also performs better for nodes with weak local homophily. Through our analysis, we find the performance gain is closely linked to the higher end of the frequency spectrum." }, { "heading": "A GRAPH SPECTRAL FILTERING WITHOUT EIGEN-DECOMPOSITION", "text": "Chebyshev polynomials approximation has been the de facto approximation method for avoiding eigendecomposition in spectral graph filters. It has been commonly used in previous works Hammond et al. (2011); Sakiyama et al. (2016); Xu et al. (2019). We hereby use it to approximate Equation 3. In fact, other approximation methods can also be used for the purpose, such as the Jackson-Chebychev polynomials (Napoli et al., 2016) but we will leave it for future study. Briefly, in Chebyshev polynomial approximation, the graph signal filtered by a filter g(L) is approximated as g̃(L), and represented as a sum of recursive polynomials (Sakiyama et al., 2016):\ng̃(L)x = {1\n2 c0 + p∑ i=1 ciT̄i(L) } x (8)\nwhere T̄0 = 1, T̄1(L) = 2(L− 1)/λmax, T̄i(L) = 4(L− 1)T̄i−1/λmax − T̄i−2(L), and\nci = 2\nS S∑ m=1 cos (πi(m− 1 2 ) S ) × g (λmax 2 ( cos (π(m− 1 2 ) S + 1 )))\n(9)\nfor i = 0, ..., p, where p is the approximation order, S is the number of sampling points and is normally set to S = p+ 1.\nIn Equation 3, MLP is used to produce the filter responses so we have g = MLP in Equation 9. The above equation is differentiable so the parameters in MLP can be learned by gradient decent from the loss function. The above approximation has a time complexity of O(p× |E|), so that the complexity for Equation 3 is also O(p × |E|). Please note, while Chebyshev polynomials are mentioned in both our method and ChevNet, however they are used in fundamentally different ways: ChevNet uses the simplified Chebyshev polynomials as the polynomial filter directly, while we use it as a method to approximate the filtering operation. Naturally, approximation error reduces while a larger p is used, which is also why we have p > 12 in our model." }, { "heading": "B FURTHER EXPERIMENT RESULTS", "text": "We provide the macro-F1 scores on the classification task in Table 3. The proposed model outperforms the other models on disassortative graphs and performs comparable on the assortative graphs." }, { "heading": "C ABLATION STUDY ON FILTERS", "text": "Ablating all but one spectral filter. In GNAN, each head uses a filter to produce spectral attention weights. To delve the importance of a filter, we follow the ablation method used by (Michel et al., 2019). Specifically, we ablate one or more filters by manually setting the attention scores to zeros. We then measure the impact on performance using micro-F1. If the ablation results in a large decrease in performance, the ablated fitlerbank(s)\nis considered important. The results are summarized in Table 4a. 
All attention head (filters) in GNAN are of similar importance, and only all heads combined produces the best performance.\nAblating only one spectral filter. We then examine performance differences by ablating one filter only and keeping all other fitlerbanks Table 4b. Different with above, ablating just one fitlerbank only decreases performance by a small margin. Moreover, ablating some fitlerbanks does not impact prediction performance at all. This is an indicator of potential redundancies in the filters. We leave the redundancy reduction in the model for future work." }, { "heading": "D CONNECTIONS TO OTHER METHODS", "text": "D.1 CONNECTION TO GCN\nA GCN (Kipf & Welling, 2017) layer can be expressed as\nh(k)v = ReLU( N∑ u=1 âvuh (k−1) u W (k))\nwhere âvu is the elements from the vth row of the symmetric adjacency matrix\n = D̃−1/2ÃD̃−1/2 where à = A+ IN , D̃vv = N∑ u=1 Ãvu\nSo that\nâvu = { 1 if evu ∈ E 0 if evu /∈ E\nTherefore, GCN can be viewed as a case of Equation 4 with σ = ReLU and avu = âvu\nD.2 CONNECTION TO POLYNOMIAL FILTERS\nPolynomial filters localize in a node’s K-hop neighbors utilizing K-order polynomials (Defferrard et al., 2016), most of them takes the following form:\ngθ(Λ) = K−1∑ k=0 θkΛ k\nwhere θk is a learnable polynomial coefficient for each order. Thus a GNN layer using a polynomial filter becomes\nh(k)v = ReLU( N∑ u=1 Ugθ(Λ)U Thu)\nwhich can be expressed using Equation 4 with W (k) = IN , σ = ReLU and avu = (Ugθ(Λ)UT )vu. In comparison, our method uses a MLP to learn the spectral filters instead of using a polynomial filter. Also, in our method, coefficients after sparsification and normalization are used as directly as attentions.\nD.3 CONNECTION TO GAT\nOur method is inspired by and closely related to GAT (Velickovic et al., 2018). To demonstrate the connection, we firstly define a matrix Φ where each column φv is the transformed feature vector of node v concatenated with feature vector of another node (including node v itself) in the graph.\nφv = ||Nj=0[Whv||Whu] (10)\nGAT multiplies each column of Φ with a learnable weight vector α and masks the result with the adjacencyA before feeding it to the nonlinear function LeakyRelu and softmax to calculate attention scores. The masking can be expressed as a Hadamard product with the adjacency matrixA which is the congruent of a graph wavelet transform with the filter g(Λ) = I − Λ:\nΨ = A = D 1 2U(I − Λ)UTD 1 2 (11)\nAnd the GAT attention vector for node i become\nav = softmax(LeakyReLU(α Tφv ψ̄v)) (12)\nwhere ψ̄v is the vth row of Ψ after applying Equation 5 with t = 0, denotes the Hadamard product, as of (Velickovic et al., 2018).\nIn comparison with our method, GAT incorporate node features in the attention score calculation, while node attentions in our methods are purely computed from the graph wavelet transform. Also, attentions in GAT are masked byA, which means the attentions are restricted to node v’s 1-hop neighbours only.\nD.4 CONNECTION TO SKIP-GRAM METHODS\nSkip-gram models in natural language processing are shown to be equivalent to a form of matrix factorization (Levy & Goldberg, 2014). Recently Qiu et al. (2018) proved that many Skip-Gram Negative Sampling (SGNS) models used in node embedding, including DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015b), PTE (Tang et al., 2015a), and node2vec (Grover & Leskovec, 2016), are essentially factorizing implicit matrices closely related to the normalized graph Laplacian. 
The implicit matrices can be presented as graph wavelet transforms on the graph Laplacian. For simplicity, we hereby use DeepWalk, a generalized form of LINE and PTE, as an example. Qiu et al. (2018) shows DeepWalk effectively factorizes the matrix\nlog ( vol(G) T ( T∑ r=1 P r)D−1 ) − log(b) (13)\nwhere vol(G) = ∑ vDvv is the sum of node degrees, P = D\n−1A is the random walk matrix, T is the skip-gram window size and b is the parameter for negative sampling. We know\nP = I −D− 1 2LD 1 2 = D− 1 2U(I − Λ)UTD 1 2\nSo Equation 13 can be written using graph Laplacian as:\nlog ( vol(G) T D− 1 2 T∑ r=1 (I −L)rD 1 2 ) − log(b)\nOr, after eigen-decomposition, as:\nM = log ( vol(G) Tb D− 1 2U T∑ r=1 (I − Λ)rUTD 1 2 ) (14)\nwhere U ∑T r=1(I − Λ) rUT , denoted as ψsg , is a wavelet transform with the filter gsg(λ) = ∑T r=1(1− λ)\nr . Therefore, DeepWalk can be seen a special case of Equation 4 where:\nav = { ψv if v = k 0 if v 6= u\nAssigningH = W = I , K = 1 and σ(X) = log( vol(G) Tb D− 1 2XD 1 2 ). We have\nh′i = FACTORIZE(σ(ai)) (15)\nwhere FACTORIZE is a matrix factorization operator of choice. Qiu et al. (2018) uses SVD in a generalized SGNS model, where the decomposed matrix Ud and Σd from M = UdΣVd is used to obtain the node embedding Ud √ Σd." } ]
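To make the Chebyshev approximation of Appendix A above concrete, the following is a minimal NumPy sketch, not the authors' implementation: the function and variable names are ours, the learnable filter g (an MLP in the actual model) is stood in by any elementwise callable, and the partially garbled sampling formula in Eq. (9) is interpreted in its standard form, g(λmax/2 · (cos(π(m − 1/2)/S) + 1)).

```python
import numpy as np

def chebyshev_filter(L, x, g, p=12, lam_max=2.0):
    """Approximate g(L) x without eigendecomposition (cf. Eq. 8-9 of the appendix).
    L: (N, N) normalized graph Laplacian, x: (N,) or (N, d) graph signal,
    g: elementwise callable giving the filter response for eigenvalues in [0, lam_max]."""
    N = L.shape[0]
    S = p + 1                                   # number of sampling points
    # Chebyshev coefficients c_i from sampled filter responses (Eq. 9).
    i = np.arange(p + 1)[:, None]               # (p+1, 1)
    m = np.arange(1, S + 1)[None, :]            # (1, S)
    theta = np.pi * (m - 0.5) / S
    samples = g(lam_max / 2.0 * (np.cos(theta) + 1.0))
    c = (2.0 / S) * (np.cos(i * theta) * samples).sum(axis=1)

    # Recursive Chebyshev terms applied to the signal (Eq. 8).
    L_shift = 2.0 * (L - np.eye(N)) / lam_max   # shifted/scaled Laplacian
    T_prev, T_curr = x, L_shift @ x
    out = 0.5 * c[0] * T_prev + c[1] * T_curr
    for k in range(2, p + 1):
        T_next = 2.0 * L_shift @ T_curr - T_prev
        out += c[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

# Example with a heat-kernel-like stand-in for the learned filter:
# y = chebyshev_filter(L, x, g=lambda lam: np.exp(-lam), p=16)
```

The cost is p sparse (or dense) matrix-vector products, i.e. O(p × |E|) for a sparse Laplacian, matching the complexity stated in the appendix.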
2,020
null
SP:28a5570540fa769396ee73c14c25ada9669dd95f
[ "The paper presents a post-hoc calibration method for deep neural net classification. The method proposes to first reduces the well-known ECE score to a special case of the Kolmogorov-Smirnov (KS) test, and this way solves the dependency of ECE on the limiting binning assumption. The method proposes next to recalibrate the classification probabilities by fitting a cubic spline to the KS test score." ]
Calibrating neural networks is of utmost importance when employing them in safety-critical applications where the downstream decision making depends on the predicted probabilities. Measuring calibration error amounts to comparing two empirical distributions. In this work, we introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test in which the main idea is to compare the respective cumulative probability distributions. From this, by approximating the empirical cumulative distribution using a differentiable function via splines, we obtain a recalibration function, which maps the network outputs to actual (calibrated) class assignment probabilities. The spline-fitting is performed using a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set. We tested our method against existing calibration approaches on various image classification datasets and our spline-based recalibration approach consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
[ { "affiliations": [], "name": "Kartik Gupta" }, { "affiliations": [], "name": "Amir Rahimi" }, { "affiliations": [], "name": "Thalaiyasingam Ajanthan" }, { "affiliations": [], "name": "Thomas Mensink" }, { "affiliations": [], "name": "Cristian Sminchisescu" }, { "affiliations": [], "name": "Richard Hartley" } ]
[ { "authors": [ "Glenn W Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly weather review,", "year": 1950 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "A Kolmogorov" ], "title": "Sulla determinazione empírica di uma legge di distribuzione", "venue": null, "year": 1933 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Meelis Kull", "Telmo Silva Filho", "Peter Flach" ], "title": "Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Meelis Kull", "Miquel Perello Nieto", "Markus Kängsepp", "Telmo Silva Filho", "Hao Song", "Peter Flach" ], "title": "Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Sunita Sarawagi", "Ujjwal Jain" ], "title": "Trainable calibration measures for neural networks from kernel mean embeddings", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Sky McKinley", "Megan Levine" ], "title": "Cubic spline interpolation", "venue": "College of the Redwoods,", "year": 1998 }, { "authors": [ "Jishnu Mukhoti", "Viveka Kulharia", "Amartya Sanyal", "Stuart Golodetz", "Philip HS Torr", "Puneet K Dokania" ], "title": "Calibrating deep neural networks using focal loss", "venue": "arXiv preprint arXiv:2002.09437,", "year": 2020 }, { "authors": [ "Rafael Müller", "Simon Kornblith", "Geoffrey E Hinton" ], "title": "When does label smoothing help", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well 
calibrated probabilities using bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Alexandru Niculescu-Mizil", "Rich Caruana" ], "title": "Predicting good probabilities with supervised learning", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Jeremy Nixon", "Michael W Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring calibration in deep learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Gabriel Pereyra", "George Tucker", "Jan Chorowski", "Łukasz Kaiser", "Geoffrey Hinton" ], "title": "Regularizing neural networks by penalizing confident output distributions", "venue": "arXiv preprint arXiv:1701.06548,", "year": 2017 }, { "authors": [ "John Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "Advances in large margin classifiers,", "year": 1999 }, { "authors": [ "Seonguk Seo", "Paul Hongsuck Seo", "Bohyung Han" ], "title": "Learning for single-shot confidence calibration in deep neural networks through stochastic inferences", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Nikolai Smirnov" ], "title": "On the estimation of the discrepancy between empirical curves of distribution for two independent samples", "venue": null, "year": 1939 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Juozas Vaicenavicius", "David Widmann", "Carl Andersson", "Fredrik Lindsten", "Jacob Roll", "Thomas B Schön" ], "title": "Evaluating model calibration in classification", "venue": null, "year": 2019 }, { "authors": [ "David Widmann", "Fredrik Lindsten", "Dave Zachariah" ], "title": "Calibration tests in multi-class classification: A unifying framework", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers", "venue": "In Icml,", "year": 2001 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Transforming classifier scores into accurate multiclass probability estimates", "venue": "In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2002 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", 
"Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jize Zhang", "Bhavya Kailkhura", "T Han" ], "title": "Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the success of modern neural networks they are shown to be poorly calibrated (Guo et al. (2017)), which has led to a growing interest in the calibration of neural networks over the past few years (Kull et al. (2019); Kumar et al. (2019; 2018); Müller et al. (2019)). Considering classification problems, a classifier is said to be calibrated if the probability values it associates with the class labels match the true probabilities of correct class assignments. For instance, if an image classifier outputs 0.2 probability for the “horse” label for 100 test images, then out of those 100 images approximately 20 images should be classified as horse. It is important to ensure calibration when using classifiers for safety-critical applications such as medical image analysis and autonomous driving where the downstream decision making depends on the predicted probabilities.\nOne of the important aspects of machine learning research is the measure used to evaluate the performance of a model and in the context of calibration, this amounts to measuring the difference between two empirical probability distributions. To this end, the popular metric, Expected Calibration Error (ECE) (Naeini et al. (2015)), approximates the classwise probability distributions using histograms and takes an expected difference. This histogram approximation has a weakness that the resulting calibration error depends on the binning scheme (number of bins and bin divisions). Even though the drawbacks of ECE have been pointed out and some improvements have been proposed (Kumar et al. (2019); Nixon et al. (2019)), the histogram approximation has not been eliminated.1\nIn this paper, we first introduce a simple, binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test (Kolmogorov (1933); Smirnov (1939)), which also provides an effective visualization of the degree of miscalibration similar to the reliability diagram (NiculescuMizil & Caruana (2005)). To this end, the main idea of the KS-test is to compare the respective classwise cumulative (empirical) distributions. Furthermore, by approximating the empirical cumulative distribution using a differentiable function via splines (McKinley & Levine (1998)), we\n1We consider metrics that measure classwise (top-r) calibration error (Kull et al. (2019)). Refer to section 2 for details.\nobtain an analytical recalibration function2 which maps the given network outputs to the actual class assignment probabilities. Such a direct mapping was previously unavailable and the problem has been approached indirectly via learning, for example, by optimizing the (modified) cross-entropy loss (Guo et al. (2017); Mukhoti et al. (2020); Müller et al. (2019)). Similar to the existing methods (Guo et al. (2017); Kull et al. (2019)) the spline-fitting is performed using a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set.\nWe evaluated our method against existing calibration approaches on various image classification datasets and our spline-based recalibration approach consistently outperforms existing methods on KS error, ECE as well as other commonly used calibration measures. Our approach to calibration does not update the model parameters, which allows it to be applied on any trained network and it retains the original classification accuracy in all the tested cases." 
}, { "heading": "2 NOTATION AND PRELIMINARIES", "text": "We abstract the network as a function fθ : D → [0, 1]K , where D ⊂ IRd, and write fθ(x) = z. Here, x may be an image, or other input datum, and z is a vector, sometimes known as the vector of logits. In this paper, the parameters θ will not be considered, and we write simply f to represent the network function. We often refer to this function as a classifier, and in theory this could be of some other type than a neural network.\nIn a classification problem, K is the number of classes to be distinguished, and we call the value zk (the k-th component of vector z) the score for the class k. If the final layer of a network is a softmax layer, then the values zk satisfy ∑K k=1 zk = 1, and zk ≥ 0. Hence, the zk are pseudoprobabilities, though they do not necessarily have anything to do with real probabilities of correct class assignments. Typically, the value y∗ = argmaxk zk is taken as the (top-1) prediction of the network, and the corresponding score, maxk zk is called the confidence of the prediction. However, the term confidence does not have any mathematical meaning in this context and we deprecate its use.\nWe assume we are given a set of training data (xi, yi)ni=1, where xi ∈ D is an input data element, which for simplicity we call an image, and yi ∈ K = {1, . . . ,K} is the so-called ground-truth label. Our method also uses two other sets of data, called calibration data and test data.\nIt would be desirable if the numbers zk output by a network represented true probabilities. For this to make sense, we posit the existence of joint random variables (X,Y ), where X takes values in a domain D ⊂ IRd, and Y takes values in K. Further, let Z = f(X), another random variable, and Zk = fk(X) be its k-th component. Note that in this formulation X and Y are joint random variables, and the probability P (Y | X) is not assumed to be 1 for single class, and 0 for the others. A network is said to be calibrated if for every class k,\nP (Y = k | Z = z) = zk . (1)\nThis can be written briefly as P (k | f(x)) = fk(x) = zk. Thus, if the network takes input x and outputs z = f(x), then zk represents the probability (given f(x)) that image x belongs to class k.\nThe probability P (k | z) is difficult to evaluate, even empirically, and most metrics (such as ECE) use or measure a different notion called classwise calibration (Kull et al. (2019); Zadrozny & Elkan (2002)), defined as,\nP (Y = k | Zk = zk) = zk . (2)\nThis paper uses this definition (2) of calibration in the proposed KS metric.\nCalibration and accuracy of a network are different concepts. For instance, one may consider a classifier that simply outputs the class probabilities for the data, ignoring the input x. Thus, if fk(x) = zk = P (Y = k), this classifier f is calibrated but the accuracy is no better than the random predictor. Therefore, in calibration of a classifier, it is important that this is not done while sacrificing classification (for instance top-1) accuracy.\n2Open-source implementation available at https://github.com/kartikgupta-at-anu/ spline-calibration\nThe top-r prediction. The classifier f being calibrated means that fk(x) is calibrated for each class k, not only for the top class. This means that scores zk for all classes k give a meaningful estimate of the probability of the sample belonging to class k. 
This is particularly important in medical diagnosis where one may wish to have a reliable estimate of the probability of certain unlikely diagnoses.\nFrequently, however, one is most interested in the probability of the top scoring class, the top-1 prediction, or in general the top-r prediction. Suppose a classifier f is given with values in [0, 1]K and let y be the ground truth label. Let us use f (−r) to denote the r-th top score (so f (−1) would denote the top score; the notation follows python semantics in which A[−1] represents the last element in array A). Similarly we define max(−r) for the r-th largest value. Let f (−r) : D → [0, 1] be defined as\nf (−r)(x) = max(−r)k fk(x) , and y (−r) = { 1 if y = argmax(−r)k fk(x) 0 otherwise .\n(3)\nIn words, y(−r) is 1 if the r-th top predicted class is the correct (ground-truth) choice. The network is calibrated for the top-r predictor if for all scores σ,\nP (y(−r) = 1 | f (−r)(x) = σ) = σ . (4)\nIn words, the conditional probability that the top-r-th choice of the network is the correct choice, is equal to the r-th top score.\nSimilarly, one may consider probabilities that a datum belongs to one of the top-r scoring classes. The classifier is calibrated for being within-the-top-r classes if\nP (∑r\ns=1 y (−s) = 1 ∣∣ ∑r s=1 f (−s)(x) = σ ) = σ . (5)\nHere, the sum on the left is 1 if the ground-truth label is among the top r choices, 0 otherwise, and the sum on the right is the sum of the top r scores." }, { "heading": "3 KOLMOGOROV-SMIRNOV CALIBRATION ERROR", "text": "We now consider a way to measure if a classifier is classwise calibrated, including top-r and withintop-r calibration. This test is closely related to the Kolmogorov-Smirnov test (Kolmogorov (1933); Smirnov (1939)) for the equality of two probability distributions. This may be applied when the probability distributions are represented by samples.\nWe start with the definition of classwise calibration:\nP (Y = k | fk(X) = zk) = zk . (6) P (Y = k, fk(X) = zk) = zk P (fk(X) = zk) , Bayes’ rule .\nThis may be written more simply but with a less precise notation as\nP (zk, k) = zk P (zk) .\nMotivation of the KS test. One is motivated to test the equality (or difference between) two distributions, defined on the interval [0, 1]. However, instead of having a functional form of these distributions, one has only samples from them. Given samples (xi, yi), it is not straight-forward to estimate P (zk) or P (zk | k), since a given value zk is likely to occur only once, or not at all, since the sample set is finite. One possibility is to use histograms of these distributions. However, this requires selection of the bin size and the division between bins, and the result depends on these parameters. For this reason, we believe this is an inadequate solution.\nThe approach suggested by the Kolmogorov-Smirnov test is to compare the cumulative distributions. Thus, with k given, one tests the equality∫ σ\n0\nP (zk, k) dzk = ∫ σ 0 zk P (zk) dzk . (7)\nWriting φ1(σ) and φ2(σ) to be the two sides of this equation, the KS-distance between these two distributions is defined as KS = maxσ |φ1(σ)− φ2(σ)|. 
The fact that simply the maximum is used\nhere may suggest a lack of robustness, but this is a maximum difference between two integrals, so it reflects an accumulated difference between the two distributions.\nTo provide more insights into the KS-distance, let us a consider a case where zk consistently over or under-estimates P (k | zk) (which is usually the case, at least for top-1 classification (Guo et al. (2017))), then P (k | zk)−zk has constant sign for all values of zk. It follows that P (zk, k)−zkP (zk) has constant sign and so the maximum value in the KS-distance is achieved when σ = 1. In this case,\nKS = ∫ 1 0 ∣∣P (zk, k)− zkP (zk)∣∣ dzk = ∫ 1 0 ∣∣P (k | zk)− zk∣∣P (zk) dzk , (8) which is the expected difference between zk and P (k | zk). This can be equivalently referred to as the expected calibration error for the class k.\nSampled distributions. Given samples (xi, yi)Ni=1, and a fixed k, one can estimate these cumulative distributions by ∫ σ\n0\nP (zk, k) dzk ≈ 1\nN N∑ i=1 1(fk(xi) ≤ σ)× 1(yi = k) , (9)\nwhere 1 : B → {0, 1} is the function that returns 1 if the Boolean expression is true and otherwise 0. Thus, the sum is simply a count of the number of samples for which yi = k and fk(xi) ≤ σ, and so the integral represents the proportion of the data satisfying this condition. Similarly,∫ σ\n0\nzk P (zk) dzk ≈ 1\nN N∑ i=1 1(fk(xi) ≤ σ)fk(xi) . (10)\nThese sums can be computed quickly by sorting the data according to the values fk(xi), then defining two sequences as follows.\nh̃0 = h0 = 0 ,\nhi = hi−1 + 1(yi = k)/N ,\nh̃i = h̃i−1 + fk(xi)/N .\n(11)\nThe two sequences should be the same, and the metric\nKS(fk) = max i |hi − h̃i| , (12)\ngives a numerical estimate of the similarity, and hence a measure of the degree of calibration of fk. This is essentially a version of the Kolmogorov-Smirnov test for equality of two distributions.\nRemark. All this discussion holds also when k < 0, for top-r and within-top-r predictions as discussed in section 2. In (11), for instance, f−1(xi) means the top score, f−1(xi) = maxk(fk(xi)), or more generally, f−r(xi) means the r-th top score. Similarly, the expression yi = −r means that yi is the class that has the r-th top score. Note when calibrating the top-1 score, our method is applied after identifying the top-1 score, hence, it does not alter the classification accuracy." }, { "heading": "4 RECALIBRATION USING SPLINES", "text": "The function hi defined in (11) computes an empirical approximation\nhi ≈ P (Y = k, fk(X) ≤ fk(xi)) . (13)\nFor convenience, the value of fk will be referred to as the score. We now define a continuous function h(t) for t ∈ [0, 1] by h(t) = P (Y = k, fk(X) ≤ s(t)) , (14) where s(t) is the t-th fractile score, namely the value that a proportion t of the scores fk(X) lie below. For instance s(0.5) is the median score. So, hi is an empirical approximation to h(t) where t = i/N . We now provide the basic observation that allows us to compute probabilities given the scores." }, { "heading": "Test Class[-1] | Uncalibrated", "text": "KS-error = 5.493%, Probability=92.420%\nProposition 4.1. If h(t) = P (Y = k, fk(X) ≤ s(t)) as in (14) where s(t) is the t-th fractile score, then h′(t) = P (Y = k | fk(X) = s(t)), where h′(t) = dh/dt.\nProof. The proof relies on the equality P (fk(X) ≤ s(t)) = t. In words, since s(t) is the value that a fraction t of the scores are less than or equal, the probability that a score is less than or equal to s(t), is (obviously) equal to t. 
See the supplementary material for a detailed proof.\nNotice h′(t) allows direct conversion from score to probability. Therefore, our idea is to approximate hi using a differentiable function and take the derivative which would be our recalibration function." }, { "heading": "4.1 SPLINE FITTING", "text": "The function hi (shown in fig 1a) is obtained through sampling only. Nevertheless, the sampled graph is smooth and increasing. There are various ways to fit a smooth curve to it, so as to take derivatives. We choose to fit the sampled points hi to a cubic spline and take its derivative.\nGiven sample points (ui, vi)Ni=1 in IR× IR, easily available references show how to fit a smooth spline curve that passes directly through the points (ui, vi). A very clear description is given in McKinley & Levine (1998), for the case where the points ui are equally spaced. We wish, however, to fit a spline curve with a small number of knot points to do a least-squares fit to the points. For convenience, this is briefly described here.\nA cubic spline v(u) is defined by its values at certain knot points (ûk, v̂k)Kk=1. In fact, the value of the curve at any point u can be written as a linear function v(u) = ∑K k=1 ak(u)v̂k = a\n>(u) v̂, where the coefficients ak depend on u.3 Therefore, given a set of further points (ui, vi)Ni=1, which may be different from the knot points, and typically more in number, least-squares spline fitting of the points (ui, vi) can be written as a least-squares problem minv̂ ‖A(u)v̂ − v‖2, which is solved by standard linear least-squares techniques. Here, the matrix A has dimension N ×K with N > K. Once v̂ is found, the value of the spline at any further points u is equal to v(u) = a(u)>v̂, a linear combination of the knot-point values v̂k.\nSince the function is piecewise cubic, with continuous second derivatives, the first derivative of the spline is computed analytically. Furthermore, the derivative v′(u) can also be written as a linear combination v′(u) = a′(u)>v̂, where the coefficients a′(u) can be written explicitly.\nOur goal is to fit a spline to a set of data points (ui, vi) = (i/N, hi) defined in (11), in other words, the values hi plotted against fractile score. Then according to Proposition 4.1, the derivative of the spline is equal to P (k | fk(X) = s(t)). This allows a direct computation of the conditional probability that the sample belongs to class k.\n3Here and elsewhere, notation such as v and a denotes the vector of values vi or ak, as appropriate.\n0 20 40 60 80 100\nPercentile\n0.0\n0.2\n0.4\n0.6 0.8 C um ul at iv e S co re / P ro ba bi li\nty\n(a)\nCumulative Score Cumulative Probability\n0.0 0.2 0.4 0.6 0.8\nCumulative Score\n0.0\n0.2\n0.4\n0.6\n0.8\n(b)\nCumulative Score Cumulative Probability\n0 20 40 60 80 100\nPercentile\n0.2\n0.4\n0.6\n0.8\n1.0\nS co\nre /\nP ro\nba bi\nli ty\n(c)\nScore Probability\n0.2 0.4 0.6 0.8 1.0\nScore\n0.2\n0.4\n0.6\n0.8\n1.0\n(d)\nScore Probability" }, { "heading": "Calib Class[-1] | Calibrated", "text": "KS-error = 0.371%, Probability=93.820%" }, { "heading": "Test Class[-1] | Calibrated", "text": "KS-error = 0.716%, Probability=92.420%\nSince the derivative of hi is a probability, one might constrain the derivative to be in the range [0, 1] while fitting splines. This can be easily incorporated because the derivative of the spline is a linear expression in v̂i. The spline fitting problem thereby becomes a linearly-constrained quadratic program (QP). 
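As noted next, a plain least-squares fit (without the derivative constraints) is what is used in the reported experiments. The following sketch illustrates the resulting recalibration recipe; it is our own illustrative code and uses SciPy's least-squares spline as a convenient stand-in for the natural (linear-runout) cubic splines with six knots described in the paper.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_recalibration(cal_scores, cal_correct, n_knots=6):
    """Fit the recalibration map for one class (or the top-1 score) on a held-out
    calibration set. cal_scores: (N,) scores, cal_correct: (N,) 0/1 correctness."""
    cal_correct = np.asarray(cal_correct, dtype=float)
    order = np.argsort(cal_scores)
    s_sorted = cal_scores[order]
    n = len(s_sorted)
    h = np.cumsum(cal_correct[order]) / n          # h_i of Eq. (11)
    t = (np.arange(n) + 1) / n                     # fractile positions i/N

    # Least-squares cubic spline fit of h(t) with a small number of interior knots.
    knots = np.linspace(0.0, 1.0, n_knots)[1:-1]
    spline = LSQUnivariateSpline(t, h, knots, k=3)
    dspline = spline.derivative()                  # h'(t) = P(k | score = s(t))

    def recalibrate(score):
        # Map a test score to its fractile via the sorted calibration scores,
        # then through the spline derivative.
        frac = np.searchsorted(s_sorted, score) / n
        return float(np.clip(dspline(frac), 0.0, 1.0))
    return recalibrate
```

The returned function implements γ(σ) = h′(s⁻¹(σ)) of Section 4.2; clipping the output to [0, 1] is a pragmatic safeguard used here in place of the optional constrained-QP formulation.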
However, although we tested this, in all the reported experiments, a simple least-squares solver is used without the constraints." }, { "heading": "4.2 RECALIBRATION", "text": "We suppose that the classifier f = fθ is fixed, through training on the training set. Typically, if the classifier is tested on the training set, it is very close to being calibrated. However, if a classifier f is then tested on a different set of data, it may be substantially mis-calibrated. See fig 1.\nOur method of calibration is to find a further mapping γ : [0, 1]→ [0, 1], such that γ ◦fk is calibrated. This is easily obtained from the direct mapping from score fk(x) to P (k | fk(x)) (refer to fig 1d). In equations, γ(σ) = h′(s−1(σ)). The function h′ is known analytically, from fitting a spline to h(t) and taking its derivative. The function s−1 is a mapping from the given score σ to its fractile s−1(σ). Note that, a held out calibration set is used to fit the splines and the obtained recalibration function γ is evaluated on an unseen test set.\nTo this end, given a sample x from the test set with fk(x) = σ, one can compute h′(s−1(σ)) directly in one step by interpolating its value between the values of h′(fk(xi)) and h′(fk(xi+1)) where xi and xi+1 are two samples from the calibration set, with closest scores on either side of σ. Assuming the samples in the calibration set are ordered, the samples xi and xi+1 can be quickly located using binary search. Given a reasonable number of samples in the calibration set, (usually in the order of thousands), this can be very accurate. In our experiments, improvement in calibration is observed in the test set with no difference to the accuracy of the network (refer to fig 2d). In practice, spline fitting is much faster than one forward pass through the network and it is highly scalable compared to learning based calibration methods." }, { "heading": "5 RELATED WORK", "text": "Modern calibration methods. In recent years, neural networks are shown to overfit to the Negative Log-Likelihood (NLL) loss and in turn produce overconfident predictions which is cited as the main reason for miscalibration (Guo et al. (2017)). To this end, modern calibration methods can be broadly categorized into 1) methods that adapt the training procedure of the classifier, and 2) methods that learn a recalibration function post training. Among the former, the main idea is to increase the entropy of the classifier to avoid overconfident predictions, which is accomplished via modifying the training loss (Kumar et al. (2018); Mukhoti et al. (2020); Seo et al. (2019)), label smoothing (Müller et al. (2019); Pereyra et al. (2017)), and data augmentation techniques (Thulasidasan et al. (2019); Yun et al. (2019); Zhang et al. (2018)).\nOn the other hand, we are interested in calibrating an already trained classifier that eliminates the need for training from scratch. In this regard, a popular approach is Platt scaling (Platt et al. (1999)) which transforms the outputs of a binary classifier into probabilities by fitting a scaled logistic function on a held out calibration set. Similar approaches on binary classifiers include Isotonic Regression (Zadrozny & Elkan (2001)), histogram and Bayesian binning (Naeini et al. (2015); Zadrozny & Elkan (2001)), and Beta calibration (Kull et al. (2017)), which are later extended to the multiclass setting (Guo et al. (2017); Kull et al. (2019); Zadrozny & Elkan (2002)). Among these, the most popular method is temperature scaling (Guo et al. 
(2017)), which learns a single scalar on a held out set to calibrate the network predictions. Despite being simple and one of the early works, temperature scaling is the method to beat in calibrating modern networks. Our approach falls into this category, however, as opposed to minimizing a loss function, we obtain a recalibration function via spline-fitting, which directly maps the classifier outputs to the calibrated probabilities.\nCalibration measures. Expected Calibration Error (ECE) (Naeini et al. (2015)) is the most popular measure in the literature, however, it has a weakness that the resulting calibration error depends on the histogram binning scheme such as the bin endpoints and the number of bins. Even though, some improvements have been proposed (Nixon et al. (2019); Vaicenavicius et al. (2019)), the binning scheme has not been eliminated and it is recently shown that any binning scheme leads to underestimated calibration errors (Kumar et al. (2019); Widmann et al. (2019)). Note that, there are binning-free metrics exist such as Brier score (Brier (1950)), NLL, and kernel based metrics for the multiclass setting (Kumar et al. (2018); Widmann et al. (2019)). Nevertheless, the Brier score and NLL measure a combination of calibration error and classification error (not just the calibration which is the focus). Whereas kernel based metrics, besides being computationally expensive, measure the calibration of the predicted probability vector rather than the classwise calibration error (Kull et al. (2019)) (or top-r prediction) which is typically the quantity of interest. To this end, we introduce a binning-free calibration measure based on the classical KS-test, which has the same benefits as ECE and provides effective visualizations similar to reliability diagrams. Furthermore, KS error can be shown to be a special case of kernel based measures (Gretton et al. (2012))." }, { "heading": "6 EXPERIMENTS", "text": "Experimental setup. We evaluate our proposed calibration method on four different imageclassification datasets namely CIFAR-10/100 (Krizhevsky et al. (2009)), SVHN (Netzer et al. (2011)) and ImageNet (Deng et al. (2009)) using LeNet (LeCun et al. (1998)), ResNet (He et al. (2016)), ResNet with stochastic depth (Huang et al. (2017)), Wide ResNet (Zagoruyko & Komodakis (2016)) and DenseNet (Huang et al. (2017)) network architectures against state-of-the-art methods that calibrate post training. We use the pretrained network logits4 for spline fitting where we choose validation set as the calibration set, similar to the standard practice. Our final results for calibration are then reported on the test set of all datasets. Since ImageNet does not comprise the validation set, test set is divided into two halves: calibration set and test set. We use the natural cubic spline fitting method (that is, cubic splines with linear run-out) with 6 knots for all our experiments. Further experimental details are provided in the supplementary. For baseline methods namely: Temperature scaling, Vector scaling, Matrix scaling with ODIR (Off-diagonal and Intercept Regularisation), and Dirichlet calibration, we use the implementation of Kull et al. (Kull et al. (2019)).\n4Pre-trained network logits are obtained from https://github.com/markus93/NN_ calibration.\nResults. We provide comparisons of our method using proposed KS error for the top most prediction against state-of-the-art calibration methods namely temperature scaling (Guo et al. 
(2017)), vector scaling, MS-ODIR, and Dirichlet Calibration (Dir-ODIR) (Kull et al. (2019)) in Table 1. Our method reduces calibration error to 1% in almost all experiments performed on different datasets without any loss in accuracy. It clearly reflects the efficacy of our method irrespective of the scale of the dataset as well as the depth of the network architecture. It consistently performs better than the recently introduced Dirichlet calibration and Matrix scaling with ODIR (Kull et al. (2019)) in all the experiments. Note this is consistent with the top-1 calibration results reported in Table 15 of (Kull et al. (2019)). The closest competitor to our method is temperature scaling, against which our method performs better in 9 out of 13 experiments. Note, in the cases where temperature scaling outperforms our method, the gap in KS error between the two methods is marginal (< 0.3%) and our method is the second best. We provide comparisons using other calibration metrics in the supplementary.\nFrom the practical point of view, it is also important for a network to be calibrated for top second/third predictions and so on. We thus show comparisons for top-2 prediction KS error in Table 2. An observation similar to the one noted in Table 1 can be made for the top-2 predictions as well. Our method achieves< 1% calibration error in all the experiments. It consistently performs well especially for experiments performed on large scale ImageNet dataset where it sets new state-of-the-art for calibration. We would like to emphasize here, though for some cases Kull et al. (Kull et al. (2019)) and Vector Scaling perform better than our method in terms of top-2 KS calibration error, overall (considering both top-1 and top-2 predictions) our method performs better." }, { "heading": "7 CONCLUSION", "text": "In this work, we have introduced a binning-free calibration metric based on the Kolmogorov-Smirnov test to measure classwise or (within)-top-r calibration errors. Our KS error eliminates the shortcomings of the popular ECE measure and its variants while accurately measuring the expected calibration error and provides effective visualizations similar to reliability diagrams. Furthermore, we introduced a simple and effective calibration method based on spline-fitting which does not involve any learning and yet consistently yields the lowest calibration error in the majority of our experiments. We believe, the KS metric would be of wide-spread use to measure classwise calibration and our spline method would inspire learning-free approaches to neural network calibration. We intend to focus on calibration beyond classification problems as future work." }, { "heading": "8 ACKNOWLEDGEMENTS", "text": "The work is supported by the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016). We would also like to thank Google Research and Data61, CSIRO for their support." }, { "heading": "Appendices", "text": "Here, we first provide the proof of our main result, discuss more about top-r calibration and splinefitting, and then turn to additional experiments." }, { "heading": "A PROOF OF PROPOSITION 4.1", "text": "We first restate our proposition below.\nProposition A.2. If h(t) = P (Y = k, fk(X) ≤ s(t)) as in (14) of the main paper where s(t) is the t-th fractile score. Then h′(t) = P (Y = k | fk(X) = s(t)), where h′(t) = dh/dt.\nProof. 
The proof is using the fundamental relationship between the Probability Distribution Function (PDF) and the Cumulative Distribution Function (CDF) and it is provided here for completeness. Taking derivatives, we see (writing P (k) instead of P (Y = k)):\nh′(t) = P (k, fk(X) = s(t)) . s ′(t)\n= P (k | fk(X) = s(t)) . P (fk(X) = s(t)) . s′(t)\n= P (k | fk(X) = s(t)) . d\ndt\n( P (fk(X) ≤ s(t)) ) = P (k | fk(X) = s(t)) . d\ndt (t)\n= P (k | fk(X) = s(t)) .\n(15)\nThe proof relies on the equality P (fk(X) ≤ s(t)) = t. In words: s(t) is the value that a fraction t of the scores are less than or equal. This equality then says: the probability that a score is less than or equal to the value that a fraction t of the scores lie below, is (obviously) equal to t.\nB MORE ON TOP-r AND WITHIN-TOP-r CALIBRATION\nIn the main paper, definitions of top-r and within-top-r calibration are given in equations (4) and (5). Here, a few more details are given of how to calibrate the classifier f for top-r and within-top-r calibration.\nThe method of calibration using splines described in this paper consists of fitting a spline to the cumulative accuracy, defined as hi in equation (11) in the main paper. For top-r classification, the method is much the same as for the classification for class k. Equation (11) is replaced by sorting the data according to the r-th top score, then defining\nh̃0 = h0 = 0 ,\nhi = hi−1 + 1(y (−r) = 1)/N ,\nh̃i = h̃i−1 + f (−r)(xi)/N ,\n(16)\nwhere y(−r) and f (−r)(xi) are defined in the main paper, equation (3). These sequences may then be used both as a metric for the correct top-r calibration and for calibration using spline-fitting as described.\nFor within-top-r calibration, one sorts the data according to the sum of the top r scores, namely∑r s=1 f (−s)(xi), then computes\nh̃0 = h0 = 0 , hi = hi−1 + 1 ( r∑ s=1 y(−s) = 1 )/ N ,\nh̃i = h̃i−1 + r∑ s=1 f (−s)(xi)/N ,\n(17)\nAs before, this can be used as a metric, or as the starting point for within-top-r calibration by our method. Examples of this type of calibration (graphs for uncalibrated networks in fig 7 and fig 9) is given in the graphs provided in fig 8 and fig 10 for within-top-2 predictions and within-top-3 predictions respectively.\nIt is notable that if a classifier is calibrated in the sense of equation (1) in the main paper (also called multi-class-calibrated), then it is also calibrated for top-r and within-top-r classification." }, { "heading": "C LEAST SQUARE SPLINE FITTING", "text": "Least-square fitting using cubic splines is a known technique. However, details are given here for the convenience of the reader. Our primary reference is (McKinley & Levine (1998)), which we adapt to least-squares fitting. We consider the case where the knot-points are evenly spaced.\nWe change notation from that used in the main paper by denoting points by (x, y) instead of (u, v). Thus, given knot points (x̂i, ŷi)Kk=1 one is required to fit some points (xi, yi) N i=1. Given a point x, the corresponding spline value is given by y = a(x)>Mŷ, where ŷ is the vector of values ŷi. The form of the vector a(x) and the matrix M are given in the following.\nThe form of the matrix M is derived from equation (25) in McKinley & Levine (1998). Define the matrices\nA = 4 1 1 4 1 1 4 1 . . .\n1 4 1 1 4\n ; B = 6 h2 1 −2 1 1 −2 1 . . .\n1 −2 1 , where h is the distance between the knot points. These matrices are of dimensions K − 2×K − 2 and K − 2×K respectively. Finally, let M be the matrix\nM = 0K >\nA−1B 0K >\nIK×K . 
Here, 0K is a vector of zeros of length K, and IK×K is the identity matrix. The matrix M has dimension 2K ×K. Next, let the point x lie between the knots j and j + 1 and let u = x− x̂j . Then define the vector v = a(x) by values\nvj = −u3/(6h) + u2/2− hu/3 , vj+1 = u\n3/(6h)− hu/6 , vj+K = −u/h+ 1 ,\nvj+1+K = u/h ,\nwith other entries equal to 0.\nThen the value of the spline is given by\ny = a(x)>Mŷ ,\nas required. This allows us to fit the spline (varying the values of ŷ) to points (xi, yi) by least-squares fit, as described in the main paper.\nThe above description is for so-called natural (linear-runout) splines. For quadratic-runout or cubicrunout splines the only difference is that the first and last rows of matrix A are changed – see McKinley & Levine (1998) for details.\nAs described in the main paper, it is also possible to add linear constraints to this least-squares problem, such as constraints on derivatives of the spline. This results in a linearly-constrained quadratic programming problem." }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "We first provide the experimental setup for different datasets in Table 3. Note, the calibration set is used for spline fitting in our method and then final evaluation is based on an unseen test set.\nWe also provide comparisons of our method against baseline methods for within-top-2 predictions (equation 5 of the main paper) in Table 4 using KS error. Our method achieves comparable or better results for within-top-2 predictions. It should be noted that the scores for top-3 (f (−3)(x)) or even top-4, top-5, etc., are very close to zero for majority of the samples (due to overconfidence of top-1 predictions). Therefore the calibration error for top-r with r > 2 predictions is very close to zero and comparing different methods with respect to it is of little value. Furthermore, for visual illustration, we provide calibration graphs of top-2 predictions in fig 3 and fig 4 for uncalibrated and calibrated network respectively. Similar graphs for top-3, within-top-2, and within-top-3 predictions are presented in figures 5 – 10.\nWe also provide classification accuracy comparisons for different post-hoc calibration methods against our method if we apply calibration for all top-1, 2, 3, . . . ,K predictions for K-class classification problem in Table 5. We would like to point out that there is negligible change in accuracy between the calibrated networks (using our method) and the uncalibrated ones." 
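For completeness, a small sketch (again with our own naming) of how the within-top-r sequences of Eq. (17) can be computed; the resulting h and h̃ can be fed to the same KS metric or spline fitting used for the classwise case.

```python
import numpy as np

def within_top_r_sequences(scores, labels, r):
    """Cumulative sequences of Eq. (17). scores: (N, K) softmax outputs, labels: (N,)."""
    topr_scores = np.sort(scores, axis=1)[:, -r:].sum(axis=1)          # sum of the top-r scores
    topr_classes = np.argsort(scores, axis=1)[:, -r:]                  # indices of the top-r classes
    hit = (topr_classes == labels[:, None]).any(axis=1).astype(float)  # label within top r?

    order = np.argsort(topr_scores)                                    # sort by summed top-r score
    n = len(labels)
    h = np.cumsum(hit[order]) / n
    h_tilde = np.cumsum(topr_scores[order]) / n
    return h, h_tilde   # max |h - h_tilde| gives the within-top-r KS error
```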
}, { "heading": "Test Class[-2] | Uncalibrated", "text": "KS-error = 2.256%, Probability=97.500%\n0 20 40 60 80 100\nPercentile\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nC um\nul at\niv e\nS co\nre /\nP ro\nba bi\nli ty\n(a)\nCumulative Score Cumulative Probability\n0.0 0.2 0.4 0.6 0.8 1.0\nCumulative Score\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\n(b)\nCumulative Score Cumulative Probability\n0 20 40 60 80 100\nPercentile\n0.6\n0.7\n0.8\n0.9\n1.0\nS co\nre /\nP ro\nba bi\nli ty\n(c)\nScore Probability\n0.6 0.7 0.8 0.9 1.0\nScore\n0.6\n0.7\n0.8\n0.9\n1.0\n(d)\nScore Probability" }, { "heading": "Calib Class[-2] | Calibrated", "text": "KS-error = 0.144%, Probability=98.040%" }, { "heading": "Test Class[-2] | Calibrated", "text": "KS-error = 0.571%, Probability=97.500%" }, { "heading": "Test Class[-3] | Uncalibrated", "text": "KS-error = 0.983%, Probability=98.970%\n0 20 40 60 80 100\nPercentile\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nC um\nul at\niv e\nS co\nre /\nP ro\nba bi\nli ty\n(a)\nCumulative Score Cumulative Probability\n0.0 0.2 0.4 0.6 0.8 1.0\nCumulative Score\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\n(b)\nCumulative Score Cumulative Probability\n0 20 40 60 80 100\nPercentile\n0.825\n0.850\n0.875\n0.900\n0.925\n0.950\n0.975\n1.000\nS co\nre /\nP ro\nba bi\nli ty\n(c)\nScore Probability\n0.85 0.90 0.95 1.00\nScore\n0.825\n0.850\n0.875\n0.900\n0.925\n0.950\n0.975\n1.000\n(d)\nScore Probability" }, { "heading": "Calib Class[-3] | Calibrated", "text": "KS-error = 0.176%, Probability=99.280%" }, { "heading": "Test Class[-3] | Calibrated", "text": "KS-error = 0.368%, Probability=98.970%\nFor the sake of completeness, we present calibration results using the existing calibration metric, Expected Calibration Error (ECE) (Naeini et al. (2015)) in Table 6. We would like to reiterate the fact that ECE metric is highly dependent on the chosen number of bins and thus does not really reflect true calibration performance. To reflect the efficacy of our proposed calibration method, we also present calibration results using other calibration metrics such as recently proposed binning free measure KDE-ECE (Zhang et al. (2020)), MCE (Maximum Calibration Error) (Guo et al. (2017)) and Brier Scores for top-1 predictions on ImageNet dataset in Table 7. Since, the original formulation of Brier Score for multi-class predictions is highly biased on the accuracy and is approximately similar for all calibration methods, we hereby use top-1 Brier Score which is the mean squared error between top-1 scores and ground truths for the top-1 predictions (1 if the prediction is correct and 0 otherwise). It can be clearly observed that our approach consistently outperforms all the baselines on different calibration measures." } ]
2,021
null
SP:cdc407d403e1008ced29c7cda727db0d631cc966
[ "This paper proposes ProxylessKD method from a novel perspective of knowledge distillation. Instead of minimizing the outputs of teacher and student models, ProxylessKD adopts a shared classifier for two models. The shared classifier yields better aligned embedding space, so the embeddings from teacher and student models are comparable. Since the optimization objective for student model is learning discriminative embeddings, the face recognition performance is improved compared to the vanilla KL counterpart." ]
Knowledge Distillation (KD) refers to transferring knowledge from a large model to a smaller one, which is widely used to enhance model performance in machine learning. It tries to align embedding spaces generated from the teacher and the student model (i.e. to make images corresponding to the same semantics share the same embedding across different models). In this work, we focus on its application in face recognition. We observe that existing knowledge distillation models optimize the proxy tasks that force the student to mimic the teacher’s behavior, instead of directly optimizing the face recognition accuracy. Consequently, the obtained student models are not guaranteed to be optimal on the target task or able to benefit from advanced constraints, such as large margin constraint (e.g. margin-based softmax). We then propose a novel method named ProxylessKD that directly optimizes face recognition accuracy by inheriting the teacher’s classifier as the student’s classifier to guide the student to learn discriminative embeddings in the teacher’s embedding space. The proposed ProxylessKD is very easy to implement and sufficiently generic to be extended to other tasks beyond face recognition. We conduct extensive experiments on standard face recognition benchmarks, and the results demonstrate that ProxylessKD achieves superior performance over existing knowledge distillation methods.
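A minimal PyTorch-style sketch of the idea described above, i.e. inheriting the teacher's classifier to supervise the student's embeddings in the teacher's embedding space. The class and function names, the cosine-logit formulation, and the plain scaled-softmax loss are our own illustrative assumptions; the actual method may use a margin-based softmax (e.g. ArcFace-style) and different training details.

```python
import torch
import torch.nn.functional as F

class ProxylessStudent(torch.nn.Module):
    """Student backbone whose embeddings are classified by the (frozen) teacher classifier."""
    def __init__(self, student_backbone, teacher_classifier_weight):
        super().__init__()
        self.backbone = student_backbone                       # maps images -> (B, D) embeddings
        # Inherit the teacher's classifier weights and keep them fixed,
        # so the student is pushed into the teacher's embedding space.
        self.classifier = torch.nn.Parameter(
            teacher_classifier_weight.detach().clone(), requires_grad=False)

    def forward(self, images):
        emb = F.normalize(self.backbone(images), dim=1)        # L2-normalized embeddings
        w = F.normalize(self.classifier, dim=1)                # (num_identities, D)
        return emb @ w.t()                                     # cosine logits

def train_step(model, images, labels, optimizer, scale=64.0):
    # Plain scaled-softmax loss on the inherited classifier; a margin-based
    # softmax could be substituted here without changing the overall recipe.
    logits = scale * model(images)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the student backbone is updated while the classifier stays fixed, the student is trained directly on the target recognition objective rather than on a proxy imitation loss.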
[ { "affiliations": [], "name": "FACE RECOGNI" } ]
[ { "authors": [ "Umar Asif", "Jianbin Tang", "Stefan Harrer" ], "title": "Ensemble knowledge distillation for learning improved and efficient networks", "venue": "arXiv preprint arXiv:1909.08097,", "year": 2019 }, { "authors": [ "Guobin Chen", "Wongun Choi", "Xiang Yu", "Tony Han", "Manmohan Chandraker" ], "title": "Learning efficient object detection models with knowledge distillation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Guobin Chen", "Wongun Choi", "Xiang Yu", "Tony Han", "Manmohan Chandraker" ], "title": "Learning efficient object detection models with knowledge distillation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Wei-Chun Chen", "Chia-Che Chang", "Che-Rung Lee" ], "title": "Knowledge distillation with feature maps for image classification", "venue": "In Asian Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Stefanos Zafeiriou Arcface" ], "title": "Additive angular margin loss for deep face recognition", "venue": null, "year": 2018 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Niannan Xue", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Debing Zhang", "Yafeng Deng", "Xiangju Lu", "Song Shi" ], "title": "Lightweight face recognition challenge", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Yandong Guo", "Lei Zhang", "Yuxiao Hu", "Xiaodong He", "Jianfeng Gao" ], "title": "Ms-celeb-1m: A dataset and benchmark for large-scale face recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Gary B Huang", "Marwan Mattar", "Tamara Berg", "Eric Learned-Miller" ], "title": "Labeled faces in the wild: A database forstudying face recognition in unconstrained environments", "venue": null, "year": 2008 }, { "authors": [ "Ira Kemelmacher-Shlizerman", "Steven M Seitz", "Daniel Miller", "Evan Brossard" ], "title": "The megaface benchmark: 1 million faces for recognition at scale", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "title": "Sphereface: Deep hypersphere embedding for face recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yifan Liu", "Ke Chen", "Chris Liu", "Zengchang Qin", "Zhenbo Luo", "Jingdong Wang" ], "title": "Structured knowledge distillation for semantic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yu Liu" ], "title": "Towards flops-constrained face recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Yuchen Liu", "Hao Xiong", "Zhongjun He", "Jiajun Zhang", "Hua Wu", "Haifeng Wang", "Chengqing Zong" ], "title": "End-to-end speech translation with knowledge distillation", "venue": "arXiv 
preprint arXiv:1904.08075,", "year": 2019 }, { "authors": [ "Brianna Maze", "Jocelyn Adams", "James A Duncan", "Nathan Kalka", "Tim Miller", "Charles Otto", "Anil K Jain", "W Tyler Niggel", "Janet Anderson", "Jordan Cheney" ], "title": "Iarpa janus benchmark-c: Face dataset and protocol", "venue": "In 2018 International Conference on Biometrics (ICB),", "year": 2018 }, { "authors": [ "Yurii Nesterov" ], "title": "A method for unconstrained convex minimization problem with the rate of convergence o (1/kˆ 2)", "venue": "In Doklady an ussr,", "year": 1983 }, { "authors": [ "Hong-Wei Ng", "Stefan Winkler" ], "title": "A data-driven approach to cleaning large face datasets", "venue": "In 2014 IEEE international conference on image processing (ICIP),", "year": 2014 }, { "authors": [ "Sangyong Park", "Yong Seok Heo" ], "title": "Knowledge distillation for semantic segmentation using channel and spatial correlations and adaptive cross", "venue": "entropy. Sensors,", "year": 2020 }, { "authors": [ "Wonpyo Park", "Dongju Kim", "Yan Lu", "Minsu Cho" ], "title": "Relational knowledge distillation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Baoyun Peng", "Xiao Jin", "Jiaheng Liu", "Dongsheng Li", "Yichao Wu", "Yu Liu", "Shunfeng Zhou", "Zhaoning Zhang" ], "title": "Correlation congruence for knowledge distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Rajeev Ranjan", "Carlos D Castillo", "Rama Chellappa" ], "title": "L2-constrained softmax loss for discriminative face verification", "venue": "arXiv preprint arXiv:1703.09507,", "year": 2017 }, { "authors": [ "Adriana Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "Fitnets: Hints for thin deep nets", "venue": "arXiv preprint arXiv:1412.6550,", "year": 2014 }, { "authors": [ "Soumyadip Sengupta", "Jun-Cheng Chen", "Carlos Castillo", "Vishal M Patel", "Rama Chellappa", "David W Jacobs" ], "title": "Frontal to profile face verification in the wild", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2016 }, { "authors": [ "Yifan Sun", "Changmao Cheng", "Yuhan Zhang", "Chi Zhang", "Liang Zheng", "Zhongdao Wang", "Yichen Wei" ], "title": "Circle loss: A unified perspective of pair similarity optimization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Feng Wang", "Xiang Xiang", "Jian Cheng", "Alan Loddon Yuille" ], "title": "Normface: L2 hypersphere embedding for face verification", "venue": "In Proceedings of the 25th ACM international conference on Multimedia,", "year": 2017 }, { "authors": [ "Feng Wang", "Jian Cheng", "Weiyang Liu", "Haijun Liu" ], "title": "Additive margin softmax for face verification", "venue": "IEEE Signal Processing Letters,", "year": 2018 }, { "authors": [ "Xiaojie Wang", "Rui Zhang", "Yu Sun", "Jianzhong Qi" ], "title": "Kdgan: Knowledge distillation with generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Cameron Whitelam", "Emma Taborsky", "Austin Blanton", "Brianna Maze", "Jocelyn Adams", "Tim Miller", "Nathan Kalka", "Anil K Jain", "James A Duncan", "Kristen Allen" ], "title": "Iarpa janus benchmark-b face dataset", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 
Workshops,", "year": 2017 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and intelligent laboratory systems,", "year": 1987 }, { "authors": [ "Junho Yim", "Donggyu Joo", "Jihoon Bae", "Junmo Kim" ], "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "arXiv preprint arXiv:1612.03928,", "year": 2016 }, { "authors": [ "Tianyue Zheng", "Weihong Deng" ], "title": "Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments", "venue": "Beijing University of Posts and Telecommunications, Tech. Rep,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Knowledge Distillation (KD) is a process of transferring knowledge from a large model to a smaller one. This technique is widely used to enhance model performance in many machine learning tasks such as image classification (Hinton et al., 2015), object detection (Chen et al., 2017b) and speech translation (Liu et al., 2019c). When applied to face recognition, the embeddings of a gallery are usually extracted by a larger teacher model while the embeddings of the query images are extracted by a smaller student model. The student model is encouraged to align its embedding space with that of the teacher, so as to improve its recognition capability.\nPrevious KD works promote the consistency in final predictions (Hinton et al., 2015), or in the activations of the hidden layer between student and teacher (Romero et al., 2014; Zagoruyko & Komodakis, 2016). Such an idea of only optimizing the consistency in predictions or activations brings limited performance boost since the student is often a small model with weaker capacity compared with the teacher. Later, Park et al. (2019); Peng et al. (2019) propose to exploit the correlation between instances to guide the student to mimic feature relationships of the teacher over a batch of input data, which achieves better performance. However, the above works all aim at guiding the student to mimic the behavior of the teacher, which is not suitable for practical face recognition. In reality, it is very important to directly align embedding spaces between student and teacher, which can enable models across different devices to share the same embedding space for feasible similarity comparison. To solve this, a simple and direct method is to directly minimize the L2 distance of embeddings extracted by student and teacher. However, this method (we call it L2KD) only considers minimizing the intra-class distance and ignores maximizing the inter-class distance, and is unable to benefit from some powerful loss functions with large margin (e.g. Cosface loss (Wang et al., 2018a), Arcface loss (Deng et al., 2019a)) constraint to further improve the performance.\n(a) L2KD (b) ProxylessKD\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\nFigure 1: The embedding distributions extracted by (a) L2KD, and (b) ProxylessKD\nIn this work, we propose an effective knowledge distillation method named ProxylessKD. According to Ranjan et al. (2017), the classifier neurons in a recognition model can be viewed as the approximate embedding centers of each class. This can be used to guide the embedding learning as in this way, the classifier can encourage the embedding to align with the approximate embedding centers corresponding to the label of the image. Inspired by this, we propose to initialize the weight of the student’s classifier with the weight of the teacher’s clas-\nsifier and fix it during the distillation process, which forces the student to produce an embedding space as consistent with that of the teacher as possible. Different from previous knowledge distillation works (Hinton et al., 2015; Zagoruyko & Komodakis, 2016; Romero et al., 2014; Park et al., 2019; Peng et al., 2019) and L2KD, the proposed ProxylessKD not only directly optimizes the target task but also considers minimizing the intra-class distance and maximizing the inter-class distance. Meanwhile it can benefit from large margin constraints (e.g. Cosface loss (Wang et al., 2018a) and Arcface loss (Deng et al., 2019a)). 
As shown in Figure 1, the intra-class distance in ProxylessKD combined with Arcface loss is much closer than L2KD, and the inter-class distance in ProxylessKD combined with Arcface loss is much larger than L2KD. Thus it can be expected that our ProxylessKD is able to improve the performance of face recognition, which will be experimentally validated.\nThe main contributions in this paper are summarized as follows:\n• We analyze the shortcomings of existing knowledge distillation methods: they only optimize the proxy task rather than the target task; and they cannot conveniently integrate with advanced large margin constraints to further lift performance.\n• We propose a simple yet effective KD method named ProxylessKD, which directly boosts embedding space alignment and can be easily combined with existing loss functions to achieve better performance.\n• We conduct extensive experiments on standard face recognition benchmarks, and the results well demonstrate the effectiveness of the proposed ProxylessKD." }, { "heading": "2 RELATED WORK", "text": "Knowledge distillation. Knowledge distillation aims to transfer the knowledge from the teacher model to a small model. The pioneer work is Buciluǎ et al. (2006), and Hinton et al. (2015) popularizes this idea by defining the concept of knowledge distillation (KD) as training the small model (the student) by exploiting the soft targets provided by a cumbersome model (the teacher). Unlike the one-hot label, the soft targets from the teacher contain rich related information among classes, which can guide the student to better learn the fine-grained distribution of data and thus lift performance. Lots of variants of model distillation strategies have been proposed and widely adopted in the fields like image classification (Chen et al., 2018), object detection (Chen et al., 2017a), semantic segmentation (Liu et al., 2019a; Park & Heo, 2020), etc. Concretely, Zagoruyko & Komodakis (2016) proposed a response-based KD model, Attention Transfer (AT), which aims to teach the student to activate the same region as the teacher model. Some relation-based distillation methods have also been developed, which encourage the student to mimic the relation of the output in different stages (Yim et al., 2017) and the samples in a batch (Park et al., 2019). The previous works mostly optimize the proxy tasks rather than the target task. In this work, we directly optimize face recognition accuracy by inheriting the teacher’s classifier as the student’s classifier to guide the student to learn discriminative embeddings in the teacher’s embedding space. In (Deng et al., 2019b), they also directly copy and fix the weights of the margin inner-product layer of the teacher model to the student model to train the student model and the motivation of (Deng et al., 2019b) is the student model can be trained with better pre-defined inter-class information from the teacher model. However, different from (Deng et al., 2019b), we firstly analyze the shortcomings of existing knowledge distillation methods. Specifically, the existing methods target optimizing the proxy task rather than\nthe target task; and they cannot conveniently integrate with advanced large margin constraints to further lift performance. These valuable analyses and observations are not found in (Deng et al., 2019b) and other existing works. Secondly strong motivation and the physical explanation of the proposed ProxylessKD is well explained in our work. 
Figure 1 and corresponding analysis explained why ProxylessKD can achieve better performance than the existing methods that optimize the proxy task. Such in-depth analysis and strong physical explanation are novel and cannot be found in (Deng et al., 2019b) and other existing works. We believe these novel findings and the proposed solution are valuable to the face recognition community and will inspire researchers in related fields. Finally, solid experiments are designed and conducted to justify the importance of directly optimize the final task rather than the proxy task when doing knowledge distillation. And the properties of ProxylessKD about using different margin-based loss function and hyper-parameters are well examined. These detailed analyses about ProxylessKD cannot be found in (Deng et al., 2019b) and other existing works. We believe the above important differences and novel contributions make our work differs from (Deng et al., 2019b) and existing works.\nLoss functions used in face recognition. Softmax loss is defined as the pipeline combination of the last fully connected layer, softmax function, and cross-entropy loss. Although it can help the network separate categories in a high-dimensional space, for fine-grained classification problems like face recognition, it offers limited accuracy due to the considerable inter-class similarity. Liu et al. (2017) proposed Sphereface to achieve smaller maximal intra-class distance than minimal inter-class distance, which can directly enhance feature discrimination. Compared with SphereFace in which the margin m is multiplied on the angle, Wang et al. (2018a); Whitelam et al. (2017) proposed CosFace, where the margin is directly subtracted from cosine, achieving better performance than SphereFace and relieving the need for joint supervision from the softmax loss. To further improve feature discrimination, Deng et al. (2018) proposed the ArcFace that utilizes the arc-cosine function to calculate the angle, i.e. adding an additive angular margin and back again by the cosine function. In this paper, we combine our ProxylessKD with the above loss functions to further lift performance, e.g. Arcface loss function." }, { "heading": "3 METHODOLOGY", "text": "We first revisit popular loss functions in face recognition in Sec. 3.1, and elaborate on our ProxylessKD in Sec. 3.2. Then we introduce how to combine our method with existing loss functions in Sec. 3.3." }, { "heading": "3.1 REVISIT LOSS FUNCTION IN FACE RECOGNITION", "text": "The most classical loss function in classification is the Softmax loss, which is represented as follows:\nL1 = − 1\nN N∑ i=1 log es·cos(θwy,xi ) es·cos(θwy,xi ) + ∑K k 6=y e s·cos(θwk,xi ) . (1)\nHere, wk denotes the weight of the model classifier, where k ∈ {1, 2, ...,K} and K denotes the number of classes. xi is the embedding of i-th sample and usually normalized with magnitude replaced with a scale parameter of s. θwk,xi denotes the angle between wk and xi. y is the ground truth label for the input embedding xi. N is the batch size. In recent years, several margin-based softmax loss functions (Liu et al., 2017; Wang et al., 2017; 2018a; Deng et al., 2019a) have been proposed to boost the embedding discrimination, which is represented as follows:\nL2 = − 1\nN N∑ i=1 log es·f(m, θwy,xi ) es·f(m, θwy,xi ) + ∑K k 6=y e s·cos(θwk,xi ) . (2)\nIn the above equation, f(m, θwy,xi) is a margin function. 
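Before the concrete choices of f listed next, the following minimal PyTorch-style sketch illustrates the margin-softmax template of Eq. (1)–(2). It is an illustration only: the names `margin_softmax_loss` and `margin_fn` are ours rather than the authors', and production implementations additionally guard the case θ + m > π for the additive-angle margin.

```python
import torch
import torch.nn.functional as F

def margin_softmax_loss(embeddings, weights, labels, margin_fn, s=64.0):
    """Generic margin-based softmax as in Eq. (2).

    embeddings: (N, d) batch of features x_i
    weights:    (K, d) classifier weights w_k
    labels:     (N,)   ground-truth classes y
    margin_fn:  callable applied to the target-class angle theta_{w_y, x_i}
    s:          scale parameter replacing the feature magnitude
    """
    # Cosine similarity between L2-normalized features and class weights.
    cos = F.linear(F.normalize(embeddings), F.normalize(weights)).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)                                   # angles theta_{w_k, x_i}
    target = F.one_hot(labels, weights.size(0)).bool()
    # Apply the margin only to the ground-truth class; keep cos(theta) elsewhere.
    logits = torch.where(target, margin_fn(theta), torch.cos(theta)) * s
    return F.cross_entropy(logits, labels)                    # softmax + log + NLL

# Example margin choices (the CosFace value m=0.35 is an assumed illustration):
arcface = lambda theta, m=0.5: torch.cos(theta + m)           # f = cos(theta + m)
cosface = lambda theta, m=0.35: torch.cos(theta) - m          # f = cos(theta) - m
plain_softmax = lambda theta: torch.cos(theta)                # recovers Eq. (1)
```

With `plain_softmax` the loss reduces to Eq. (1); the ArcFace choice with m = 0.5 and s = 64 matches the setting used later in the experiments.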
Precisely, f(m, θwy,xi) = cos(m · θwy,xi) is A-Softmax loss proposed in (Liu et al., 2017), where m is an integer and greater than zero. f(m, θwy,xi) = cos(θwy,xi) −m is the AM-Softmax loss proposed in Wang et al. (2018a) and the hyper-parameterm is greater than zero. f(m, θwy,xi) = cos(θwy,xi+m) withm > 0 is Arc-Softmax introduced in Deng et al. (2019a), which achieves better performance than the former. Fortunately, the proposed ProxylessKD can be combined with the above loss function, conveniently. In this paper, we combine our proposed ProxylessKD method with above loss functions and investigate their performance." }, { "heading": "3.2 INHERITED CLASSIFIER KNOWLEDGE DISTILLATION", "text": "The models trained for different devices are expected to share the same embedding space for similarity comparison. However, most existing knowledge distillation models only optimize proxy tasks, encouraging the student to mimic the teacher’s behavior, instead of directly optimizing the target accuracy. In this paper, we propose to directly optimize the target task by inheriting the teacher’s classifier to encourage better embedding space consistency between the student and the teacher.\nKnowledge distillation with single teacher. In most of the existing distillation works (Chen et al., 2018; Liu et al., 2019a; Park & Heo, 2020), single large model is utilized as the teacher to guide the student. Hence, we firstly introduce our ProxylessKD method under the common single teacher model knowledge distillation, as shown in Figure 2 (a), where ET and ES represent the embedding extracted by the teacher model T and the student model S, respectively. Unlike previous works (Hinton et al., 2015; Zagoruyko & Komodakis, 2016) that optimize the proxy task, we emphasize on optimizing the target task. To this end, we try to directly align the embedding space between the teacher model and the student model. Specifically, we firstly train a teacher model with the Arcface loss (Deng et al., 2019a), and then initialize the weight of the student’s classifier with the weight of the teacher’s classifier and fix the weight in the distillation stage. Using Arcface loss enables our method to benefit from large margin constraints in the distillation procedure. The distillation form of Figure 2 (a) can be defined as follows:\nL3 = − 1\nN N∑ i=1 log e s·f(m,θWty,xi ) e s·f(m,θWty,xi ) + ∑K k 6=y e s·cos(θWt k ,xi ) (3)\nf(m, θwty,xi) is a margin function, xi is the the embedding of i-th sample in a batch, w t y and w t k are classifier’s weights from the teacher model, y is the class of the xi, k ∈ {1, 2, 3...,K} and K is the number of classes in the dataset. θwt,xi is the angle between w\nt and xi. m denotes the preset hyper-parameter. When we adjust the value of m, the interval among different intre-class samples will be changed.\nKnowledge distillation with multiple teachers: Using an ensemble of teacher models would further boost the performance of knowledge distillation, according to previous work (Asif et al., 2019). Therefore, we here introduce how to implement ProxylessKD with the ensemble of teacher models in Figure 2, which better aligns with a practical face recognition system. To do this, we firstly train n different teacher models to acquire n embeddings for each input and concatenate the n embeddings to produce a high-dimensional embedding. 
Secondly, we employee a dimensionality reduction layer or the PCA (Principal Component Analysis) method (Wold et al., 1987) to reduce the high-dimensional embedding to adapt to student’s embedding dimensional. Finally, we input the embedding after dimensionality reduction into a new classifier and retrain it. We will do the same operate as the knowledge distillation with single teacher, when we obtain the new classifier. The\nensemble of teachers’ classifier can be optimized as\nET = ϕ(concatenation(Et1 , Et2 , ..., Etn))\nL4 = − 1\nN N∑ i=1 log e s·f(m,θWty,ET ) e s·f(m,θWty,ET ) + ∑K k 6=y e s·cos(θWt k ,ET )\n(4)\nEti , (i = 1, 2, ..., n), n ≥ 2 and n ∈ N+, is the embedding of the i-th sample extracted by n teacher models, ET is the dimensionality reduction vector of the i-th sample, concatenation is the operation of concatenating embeddings, ϕ is dimensionality reduction function (i.e., PCA function or the dimensionality reduction layer function)." }, { "heading": "3.3 INCORPORATING WITH OTHER LOSS FUNCTIONS", "text": "In Sec. 3.1, we only introduce the classic loss functions (i.e., Equation (1) and (2) in face recognition. As long as the loss function uses the output of the classifier to calculate the loss (e.g., ArcNegFace (Liu et al., 2019b)) for optimizing the network, the proposed ProxylessKD method can combine with it. Therefore, more powerful loss functions can be incorporated into our ProxylessKD to further improve the performance. The unified form can be defined as follows:\nL = − 1 N N∑ i=1 C (wt, xi) (5)\nxi is the embedding of the i-th sample. wt is the weight of the classifier from the teacher model. N is the number of samples in a batch. We use C (·) to represent the loss calculated by various loss function types. Note, it is not restricted to the field of face recognition but also applicable to other general classification tasks where our ProxylessKD is used for model performance improvement. For example, the recent work (Sun et al., 2020) proposed circle loss with excellent results achieved for the fine-grained classification tasks. It can be integrated into the proposed ProxylessKD to further boost the performance in the general classification tasks.\n4 EXPERIMENTS\n4.1 IMPLEMENTATION DETAILS\nDatasets. We adopt the high-quality version namely MS1MV2 refined from MS-Celeb-1M dataset (Guo et al., 2016) by Deng et al. (2019a) for training. For testing, we utilize three face verification datasets, i.e. LFW (Huang et al., 2008), CPLFW (Zheng & Deng, 2018), CFPFP (Sengupta et al., 2016). Besides, we also test our proposed method on large-scale image datasets MegaFace (Kemelmacher-Shlizerman et al., 2016), IJB-B (Whitelam et al., 2017) and IJB-C (Maze et al., 2018)). Details about these datasets are shown in Table 1.\nData processing. We follow Wang et al. (2018b); Deng et al. (2019a) to generate the normalized face crops (112× 112) with five fa-\ncial points in the data processing. All training faces are horizontally flipped with probability 0.5 for data augmentation.\nNetwork architecture. In this parper, we set the n=4, i.e., the four models are the ResNet152, ResNet101, AttentionNet92 and DenseNet201 as the ensemble of teacher models, and choose ResNet18 as the student model. After the last convolutional layer, we leverage the FC-BN structure to get the final 512-D embedding. 
In the ensemble procedure of four teacher models, we train again a new dimensionality reduction layer and the classifier layer with the feature that is cascaded\nfrom four teacher models as input to acquire a new embedding. This new embedding is applied to knowledge distillation in L2KD, and the new classifier is inherited by student model to do knowledge distillation.\nTraining. All models are trained from scratch with NAG (Nesterov, 1983) and 512 batch size for each teacher training and 1024 for the remaining training procedure. The momentum is set to 0.9 and the weight decay is 4e-5. The dimension of all embedding is 512. The initial learning rate is set to 0.1, 0.1, 0.9, 0.35 in the training of the teacher, student, L2KD, and ProxylessKD, respectively. The training process for the teacher model is finished with 8 epochs, and 16 epochs is used for all remaining experiments. We use the cosine decay in all training and Arcface loss as the supervision, in which m = 0.5 and s=64 following Deng et al. (2019a). All experiments are done on 8x2080Ti GPUs in parallel and implemented by the Mxnet library based on Gluon-Face.\nTesting. In practical face recognition or image search, the embeddings of database usually are extracted by a larger model while the embeddings of query images are are extracted by a smaller model. Considering this, we should evaluate the consistency of embeddings extracted by the larger model and the smaller model, which represents the performance of different KD methods. Specifically, in the identification task, a large model is used to extract the embeddings of the database, and a small model is used to extract embeddings of the query images. In the verification task, we firstly calculate the verification accuracy using embeddings of image pair extracted by the large and the small model respectively, then calculate the verification accuracy using embeddings of image pair extracted by the small and the large model in turn, finally take the average of them. In particular, we use the same verification method on the small datasets (i.e., LFW, CPLFW, and CFP-FP) following Wang et al. (2018b); Deng et al. (2019a). We use two kinds of measure methods (i.e., verification and identification) to test IJB-B, IJB-C dataset and MegaFace dataset. Note the images in the 1M interference set are extracted by the larger model, and the query images are extracted by the small model in the MegaFace dataset. Meanwhile, we introduce performance when we only use a small model." }, { "heading": "4.2 ABLATION STUDY", "text": "Results on different loss functions. Our ProxylessKD can be easily combined with the existing loss functions (e.g., L2softmax (Ranjan et al., 2017), Cosface loss (Wang et al., 2018a), Arcface loss (Deng et al., 2019a)) to supervise the learning of embedding and achieve better embedding space alignment. In Table 2, 3, 4, we show the performance difference of our method under the supervision of different loss functions.\nAs shown in Table 2, 3, 4, compared with L2softmax and Cosface loss, our method achieves better results with the supervision of the Arcface loss function. This proves that our ProxylessKD is able to lift performance by incorporating a powerful loss function. Hence, we can foresee that our method will achieve better results with the development of more powerful loss functions in classification.\nResults on different margins. 
As shown in Figure 3, to further illustrate the impact of different margins on different scale datasets, we utilize Arcface loss with different margins as the supervised loss of the proposed ProxylessKD to test its sensitivity to margins. Red points mark the best results.\nSpecifically, from Figure 3 (a), (b), and (c), we observe that margin=0.2 achieves better results than others. Though on the LFW dataset the best result is gained at margin=0.4, the gap is tiny and merely 0.02%, which demonstrates the small margin is more appropriate at the small scale dataset. However, on the IJB-B/C and MegaFace that are large scale datasets, we find larger margins bring better results. In particular, when the margin is set to 0.5, the same as the setting in training ensemble of teacher’s classifier, the performance is the best. This indicates the performance of ProxylessKD will be better if using a larger margin at a large scale dataset, as shown in Figure 3 (d) ∼ (k)." }, { "heading": "4.3 COMPARING WITH OTHER METHODS", "text": "Single and multiple model mode. We show two evaluation modes in Table 5, 6, 7. L2KD-s and ProxylessKD-s mean the embeddings are only extracted by the student model (single model mode), and the suffix of “-m” represents the embeddings of database are extracted with a teacher model while the embeddings of query images are extracted by the student model (multiple model mode). The more detailed information can be found in Sec. 4.1.\nAs shown in Table 5, the accuracy on LFW is similar between L2KD and ProxylessKD, but our ProxylessKD achieves better performance on CPLFW and CFP-FP under two evaluation modes. In particular, we achieve 0.91% and 1.32% improvements on CPLFW and CFP-FP under single model mode, and 1.02% and 0.74% better than the L2KD under multiple model mode. Note that ProxylessKD is trained with margin=0.2 with Arcface loss and the margin is set to 0.5 in the next experiments.\nThe MegaFace dataset contains 100K photos of 530 unique individuals from FaceScrub (Ng & Winkler, 2014) as the probe set and 1M images of 690K different individuals as the gallery set. On MegaFace, we employ two testing tasks (verification and identification) under two mode (i.e., single model mode and multiply model mode). In the testing, except the features of gallery images extracted by the teacher model, the features of the images input the gallery each time is also extracted by the teacher model. In Table 6, we show the performance of L2KD and ProxylessKD. Though MegaFace is a larger scale dataset, and for a more complex recognition task, our proposed method still achieves better results in Rank-1 and boosts the performance by 1.03% and 0.12% under the single model mode and multiple model mode, respectively. And, in multiple model mode evaluation, it performs better than single model mode.\nThe IJB-B dataset (Whitelam et al. (2017)) has 1,845 subjects with 21.8K static images and 55 K frames from 7,011 videos. There are 12,115 templates with 10,270 authentic matches and 8 M impostor matches in all. The IJB-C dataset (Maze et al., 2018) is the extension of IJB-B, containing 3,531 subjects with 31.3K static images and 117.5K frames from 11.779 videos. There are 23,124 templates with 19,557 genuine matches and 15,639 impostor matches in all. As shown in Table 7, compared with the L2KD, ProxylessKD achieves the best results among all evaluation methods on IJB-B and IJB-C and\nmore consistent improvement than L2KD in embedding space alignment between teacher and student. 
We can see that the performance of the multiple model mode is better than that of the single model mode, which explains why the base database embeddings are extracted by a large model in practical face recognition. In particular, at FAR = 1e-6 on IJB-C, the accuracy of the multiple model mode is 10∼15% higher than that of the single model mode. This is also why we advocate directly aligning the embedding spaces: in practical face recognition, the base database features are likewise extracted by a large model. The experiments show that ProxylessKD is a more effective strategy than the existing practical face recognition method (i.e., L2KD)." }, { "heading": "5 CONCLUSIONS", "text": "In this work we propose a simple yet powerful knowledge distillation method named ProxylessKD, which inherits the teacher’s classifier as the student’s classifier and directly optimizes the remaining layers of the student while fixing the classifier’s weights during training. Compared with L2KD, which only considers the intra-class distance and ignores the inter-class distance, the proposed ProxylessKD pays attention to both. Meanwhile, it can benefit from the large margin constraints of existing loss functions, which is new to existing knowledge distillation research. Our method achieves better performance than other distillation methods in most evaluations, proving its effectiveness." } ]
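For concreteness, below is a minimal PyTorch-style sketch of the inherited-classifier mechanism of Sec. 3.2. It is a sketch under our own naming (`build_proxyless_student`, `distill_step`, `arcface_loss` are placeholders), not the paper's MXNet/Gluon implementation.

```python
import torch

def build_proxyless_student(student_backbone, teacher_classifier_weight):
    """Inherit the teacher's classifier and freeze it (ProxylessKD, Sec. 3.2)."""
    num_classes, emb_dim = teacher_classifier_weight.shape
    classifier = torch.nn.Linear(emb_dim, num_classes, bias=False)
    classifier.weight.data.copy_(teacher_classifier_weight)   # w^t copied from the teacher
    classifier.weight.requires_grad_(False)                   # fixed during distillation
    return student_backbone, classifier

def distill_step(student_backbone, classifier, images, labels, arcface_loss, optimizer):
    """One training step; only student backbone parameters are in the optimizer."""
    emb = student_backbone(images)                            # 512-D student embedding x_i
    loss = arcface_loss(emb, classifier.weight, labels)       # Eq. (3) with inherited w^t
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `arcface_loss` can be any margin-based loss of the form in Eq. (2)–(3); because the inherited classifier weights are frozen and excluded from the optimizer, the student is pushed to produce embeddings directly in the teacher's embedding space.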
2020
PROXYLESSKD: DIRECT KNOWLEDGE DISTILLATION
SP:a15d5230fecc1dad8998905f17c82cf8e05c98d3
[ "This paper proposes a contrastive learning approach where one of the views, x, is converted into two subviews, x' and x'', and then separate InfoNCE style bounds constructed for each of I(x'';y) and I(x';y|x'') before being combined to form an overall training objective. Critically, the second of these is based on the conditional MI, I(x';y|x''), distinguishing it from previous work using multiple views that just take the marginal I(x';y). Estimating this conditional MI transpires to be somewhat trickier due to the additional intractability from p(y|x''), with approximations suggested to get around this. Experiments are performed on both vision and NLP problems." ]
Many self-supervised representation learning methods maximize mutual information (MI) across views. In this paper, we transform each view into a set of subviews and then decompose the original MI bound into a sum of bounds involving conditional MI between the subviews. E.g., given two views x and y of the same input example, we can split x into two subviews, x′ and x′′, which depend only on x but are otherwise unconstrained. The following holds: I(x; y) ≥ I(x′′; y) + I(x′; y|x′′), due to the chain rule and information processing inequality. By maximizing both terms in the decomposition, our approach explicitly rewards the encoder for any information about y which it extracts from x′′, and for information about y extracted from x′ in excess of the information from x′′. We provide a novel contrastive lower bound on conditional MI, that relies on sampling contrast sets from p(y|x′′). By decomposing the original MI into a sum of increasingly challenging MI bounds between sets of increasingly informed views, our representations can capture more of the total information shared between the original views. We empirically test the method in a vision domain and for dialogue generation.
[]
[ { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Proc. Conf. on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "David Barber", "Felix Agakov" ], "title": "The im algorithm: A variational approach to information maximization", "venue": "In Proc. Conf. on Neural Information Processing Systems (NIPS),", "year": 2003 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Ciwan Ceylan", "Michael U Gutmann" ], "title": "Conditional noise-contrastive estimation of unnormalised models", "venue": "arXiv preprint arXiv:1806.03664,", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Chris Cremer", "Quaid Morris", "David Duvenaud" ], "title": "Reinterpreting importance-weighted autoencoders", "venue": "Proc. Int. Conf. on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "Proc. Conf. of the North American Chapter of the Assoc. 
for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2019 }, { "authors": [ "Emily Dinan", "Stephen Roller", "Kurt Shuster", "Angela Fan", "Michael Auli", "Jason Weston" ], "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "venue": "arXiv preprint arXiv:1811.01241,", "year": 2018 }, { "authors": [ "Peter Elias" ], "title": "Predictive coding–i", "venue": "IRE Transactions on Information Theory,", "year": 1955 }, { "authors": [ "Adam Foster", "Martin Jankowiak", "Matthew O’Meara", "Yee Whye Teh", "Tom Rainforth" ], "title": "A unified stochastic gradient approach to designing bayesian-optimal experiments", "venue": "Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Michael U Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Geoffrey E Hinton" ], "title": "A practical guide to training restricted boltzmann machines", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In Proc. Int. Conf. on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Jiwei Li", "Michel Galley", "Chris Brockett", "Jianfeng Gao", "Bill Dolan" ], "title": "A diversity-promoting objective function for neural conversation models", "venue": "In Proc. Conf. of the North American Chapter of the Assoc. for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2016 }, { "authors": [ "Margaret Li", "Stephen Roller", "Ilia Kulikov", "Sean Welleck", "Y-Lan Boureau", "Kyunghyun Cho", "Jason Weston" ], "title": "Don’t say that! 
making inconsistent dialogue unlikely with unlikelihood training", "venue": null, "year": 1911 }, { "authors": [ "Zhuang Ma", "Michael Collins" ], "title": "Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency", "venue": "arXiv preprint arXiv:1809.01812,", "year": 2018 }, { "authors": [ "David McAllester" ], "title": "Information theoretic co-training", "venue": "arXiv preprint arXiv:1802.07572,", "year": 2018 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "arXiv preprint arXiv:1811.04251,", "year": 2018 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alexander A Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": "In Proc. Int. Conf. on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Tom Rainforth", "Adam R Kosiorek", "Tuan Anh Le", "Chris J Maddison", "Maximilian Igl", "Frank Wood", "Yee Whye Teh" ], "title": "Tighter variational bounds are not necessarily better", "venue": "arXiv preprint arXiv:1802.04537,", "year": 2018 }, { "authors": [ "Donald B. 
Rubin" ], "title": "The calculation of posterior distributions by data augmentation: Comment: A noniterative sampling/importance resampling alternative to the data augmentation algorithm for creating a few imputations when fractions of missing information are modest: The SIR algorithm", "venue": "Journal of the American Statistical Association,", "year": 1987 }, { "authors": [ "Øivind Skare", "Erik Bølviken", "Lars Holden" ], "title": "Improved sampling-importance resampling and reduced bias importance sampling", "venue": "Scandinavian Journal of Statistics,", "year": 2003 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": "arXiv preprint arXiv:2005.10242,", "year": 2020 }, { "authors": [ "Sean Welleck", "Ilia Kulikov", "Stephen Roller", "Emily Dinan", "Kyunghyun Cho", "Jason Weston" ], "title": "Neural text generation with unlikelihood training", "venue": null, "year": 1908 }, { "authors": [ "Thomas Wolf", "Victor Sanh", "Julien Chaumond", "Clement Delangue" ], "title": "Transfertransfo: A transfer learning approach for neural network based conversational agents", "venue": "In Proc. Conf. on Neural Information Processing Systems (NeurIPS) CAI Workshop,", "year": 2019 }, { "authors": [ "Yizhe Zhang", "Siqi Sun", "Michel Galley", "Yen-Chun Chen", "Chris Brockett", "Xiang Gao", "Jianfeng Gao", "Jingjing Liu", "Bill Dolan" ], "title": "Dialogpt: Large-scale generative pre-training for conversational response generation", "venue": null, "year": 1911 } ]
[ { "heading": "1 INTRODUCTION", "text": "The ability to extract actionable information from data in the absence of explicit supervision seems to be a core prerequisite for building systems that can, for instance, learn from few data points or quickly make analogies and transfer to other tasks. Approaches to this problem include generative models (Hinton, 2012; Kingma & Welling, 2014) and self-supervised representation learning approaches, in which the objective is not to maximize likelihood, but to formulate a series of (label-agnostic) tasks that the model needs to solve through its representations (Noroozi & Favaro, 2016; Devlin et al., 2019; Gidaris et al., 2018; Hjelm et al., 2019). Self-supervised learning includes successful models leveraging contrastive learning, which have recently attained comparable performance to their fully-supervised counterparts (Bachman et al., 2019; Chen et al., 2020a).\nMany self-supervised learning methods train an encoder such that the representations of a pair of views x and y derived from the same input example are more similar to each other than to representations of views sampled from a contrastive negative sample distribution, which is usually the marginal distribution of the data. For images, different views can be built using random flipping, color jittering and cropping (Bachman et al., 2019; Chen et al., 2020a). For sequential data such as conversational text, the views can be past and future utterances in a given dialogue. It can be shown that these methods maximize a lower bound on mutual information (MI) between the views, I(x; y), w.r.t. the encoder, i.e. the InfoNCE bound (Oord et al., 2018). One significant shortcoming of this approach is the large number of contrastive samples required, which directly impacts the total amount of information which the bound can measure (McAllester & Stratos, 2018; Poole et al., 2019).\nIn this paper, we consider creating subviews of x by removing information from it in various ways, e.g. by masking some pixels. Then, we use representations from less informed subviews as a source of hard contrastive samples for representations from more informed subviews. For example, in Fig. 1, one can mask a pixel region in x′ to obtain x′′ and ask (the representation of) x′′ to be closer to y than to random images of the corpus, and for x′ to be closer to y than to samples from p(y|x′′). This corresponds to decomposing the MI between x and y into I(x; y) ≥ I(x′′; y) + I(x′; y|x′′). The conditional MI measures the information about y that the model has gained by looking at x′ beyond the information already contained in x′′. In Fig. 1 (left), standard contrastive approaches\ncould focus on the overall “shape” of the object and would need many negative samples to capture other discriminative features. In our approach, the model is more directly encouraged to capture these additional features, e.g. the embossed detailing. In the context of predictive coding on sequential data such as dialogue, by setting x′′ to be the most recent utterance (Fig. 1, right), the encoder is directly encouraged to capture long-term dependencies that cannot be explained by x′′. We formally show that, by such decomposition, our representations can potentially capture more of the total information shared between the original views x and y.\nMaximizing MI between multiple views can be related to recent efforts in representation learning, amongst them AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019) and SwAV (Caron et al., 2020). 
However, these models maximize the sum of MIs between views I({x′, x′′}; y) = I(x′′; y) + I(x′; y). E.g., in Bachman et al. (2019), x′ and x′′ could be global and local representations of an image, and in Caron et al. (2020), x′ and x′′ could be the views resulting from standard cropping and the aggressive multi-crop strategy. This equality is only valid when the views x′ and x′′ are statistically independent, which usually does not hold. Instead, we argue that a better decomposition is I({x′, x′′}; y) = I(x′′; y) + I(x′; y|x′′), which always holds. Most importantly, the conditional MI term encourages the encoder to capture more non-redundant information across views.\nTo maximize our proposed decomposition, we present a novel lower bound on conditional MI in Section 3. For the conditional MI maximization, we give a computationally tractable approximation that adds minimal overhead. In Section 4, we first show in a synthetic setting that decomposing MI and using the proposed conditional MI bound leads to capturing more of the ground-truth MI. Finally, we present evidence of the effectiveness of the method in vision and in dialogue generation." }, { "heading": "2 PROBLEM SETTING", "text": "The maximum MI predictive coding framework (McAllester, 2018; Oord et al., 2018; Hjelm et al., 2019) prescribes learning representations of input data such that they maximize MI. Estimating MI is generally a hard problem that has received a lot of attention in the community (Kraskov et al., 2004; Barber & Agakov, 2003). Let x and y be two random variables which can generally describe input data from various domains, e.g. text, images or sound. We can learn representations of x and y by maximizing the MI of the respective features produced by encoders f, g : X → Rd, which by the data processing inequality, is bounded by I(x; y):\narg max f,g\nI(f(x); g(y)) ≤ I(x; y). (1)\nWe assume that the encoders can be shared, i.e. f = g. The optimization in Eq. 1 is challenging but can be lower-bounded. Our starting point is the recently proposed InfoNCE lower bound on MI (Oord et al., 2018) and its application to self-supervised learning for visual representations (Bachman\net al., 2019; Chen et al., 2020a). In this setting, x and y are paired input images, or independentlyaugmented copies of the same image. These are encoded using a neural network encoder which is trained such that the representations of the two image copies are closer to each other in the embedding space than to other images drawn from the marginal distribution of the corpus. This can be viewed as a contrastive estimation of the MI (Oord et al., 2018). We present the InfoNCE bound next." }, { "heading": "2.1 INFONCE BOUND", "text": "InfoNCE (Oord et al., 2018) is a lower-bound on I(x; y) obtained by comparing pairs sampled from the joint distribution x, y1 ∼ p(x, y) to a set of negative samples, y2:K ∼ p(y2:K) = ∏K k=2 p(yk), also called contrastive, independently sampled from the marginal:\nINCE(x; y|E,K) = Ep(x,y1)p(y2:K)\n[ log\neE(x,y1)\n1 K ∑K k=1 e E(x,yk)\n] ≤ I(x, y), (2)\nwhere E is a critic assigning a real valued score to x, y pairs. We provide an exact derivation for this bound in the Appendix1. For this bound, the optimal critic is the log-odds between the conditional distribution p(y|x) and the marginal distribution of y, E∗(x, y) = log p(y|x)p(y) + c(x) (Oord et al., 2018; Poole et al., 2019). The InfoNCE bound is loose if the true mutual information I(x; y) is larger than logK. 
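As a minimal illustration of Eq. (2), the following PyTorch-style sketch computes the InfoNCE bound with a simple dot-product critic and in-batch negatives; both of these choices are assumptions made for the example rather than a prescription of the paper.

```python
import math
import torch

def info_nce(fx, gy):
    """InfoNCE bound (Eq. 2) with a separable critic E(x, y) = f(x)^T g(y).

    fx: (B, d) encoded views x; gy: (B, d) encoded views y.
    Row i of fx and gy forms the positive pair from p(x, y); the other
    B-1 rows of gy act as negatives from the marginal p(y).
    Returns the average bound in nats (at most log B).
    """
    scores = fx @ gy.t()                                      # (B, B) critic values
    pos = scores.diagonal()                                   # E(x_i, y_i)
    log_mean_exp = torch.logsumexp(scores, dim=1) - math.log(scores.size(1))
    return (pos - log_mean_exp).mean()
```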
In order to overcome this difficulty, recent methods either train with large batch sizes (Chen et al., 2020a) or exploit an external memory of negative samples in order to reduce memory requirements (Chen et al., 2020b; Tian et al., 2020). These methods rely on uniform sampling from the training set in order to form the contrastive sets. For further discussion of the limits of variational bounds of MI, see McAllester & Stratos (2018)." }, { "heading": "3 DECOMPOSING MUTUAL INFORMATION", "text": "By the data processing inequality: I(x; y) ≥ I({x1, . . . , xN}; y), where {x1, . . . , xN} are different subviews of x – i.e., views derived from x without adding any exogenous information. For example, {x1, . . . , xN} can represent exchanges in a longer dialog x, sentences in a document x, or different augmentations of the same image x. Equality is obtained when the set of subviews retains all information about x, e.g. if x is in the set.\nWithout loss of generality, we consider the case N = 2, I(x; y) ≥ I({x′, x′′}; y), where {x′, x′′} indicates two subviews derived from the original x. We can apply the chain rule for MI:\nI(x; y) ≥ I({x′, x′′}; y) = I(x′′; y) + I(x′; y|x′′), (3) where the equality is obtained if and only if I(x; y|{x′, x′′}) = 0, i.e. x doesn’t give any information about y in excess to {x′, x′′}2. This suggests that we can maximize I(x; y) by maximizing each of the MI terms in the sum. The conditional MI term can be written as:\nI(x′; y|x′′) = Ep(x′,x′′,y) [\nlog p(y|x′, x′′) p(y|x′′)\n] . (4)\nThis conditional MI is different from the unconditional MI, I(x′; y), insofar it measures the amount of information shared between x′ and y which cannot be explained by x′′. Note that the decomposition holds for arbitrary partitions of x′, x′′, e.g. I({x′, x′′}; y) = I(x′; y) + I(x′′; y|x′). When X is high-dimensional, the amount of mutual information between x and y will potentially be larger than the amount of MI that INCE can measure given computational constraints associated with large K and the poor log scaling properties of the bound. The idea that we put forward is to split the total MI into a sum of MI terms of smaller magnitude, thus for which INCE would have less bias for any given K, and estimate each of those terms in turn. The resulting decomposed bound can be written into a sum of unconditional and conditional MI terms:\nINCES(x; y) = INCE(x ′′; y) + ICNCE(x ′; y|x′′) ≤ I(x; y), (5) 1The derivation in Oord et al. (2018) presented an approximation and therefore was not properly a bound. An alternative, exact derivation of the bound can be found in Poole et al. (2019). 2For a proof of this fact, it suffices to consider I({x, x′, x′′}; y) = I(x; y|{x′, x′′}) + I({x′, x′′}; y), given that I({x, x′, x′′}; y) = I(x; y), equality is obtained iff I(x; y|{x′, x′′}) = 0.\nwhere ICNCE is a lower-bound on conditional MI and will be presented in the next section. Both conditional (Eq. 6) and unconditional bounds on the MI (Eq. 14) can capture at most logK nats of MI. Therefore, the bound that arises from the decomposition of the MI in Eq. 5 potentially allows to capture up to N log K nats of MI in total, where N is the number of subviews used to describe x. This shows that measuring mutual information by decomposing it in a sequence of estimation problems potentially allows to capture more nats of MI than with the standard INCE , which is bounded by log K." 
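A minimal sketch of the decomposed objective in Eq. (5) follows: two K-sample contrastive terms, one whose negatives come from the marginal p(y) and one whose negatives come from p(y|x''). How to actually obtain the conditional negatives is the subject of the next section; here the critic score tensors are assumed to be given, and all names are ours.

```python
import math
import torch

def contrastive_bound(pos, neg):
    """K-sample bound: mean_i [ pos_i - log( (1/K) * sum over {pos_i} U {neg_ij} of e^score ) ]."""
    scores = torch.cat([pos.unsqueeze(1), neg], dim=1)        # (B, K)
    return (pos - (torch.logsumexp(scores, dim=1) - math.log(scores.size(1)))).mean()

def decomposed_infonce(pos_x2_y, neg_x2_y, pos_cond, neg_cond):
    """I_NCEs(x; y) = I_NCE(x''; y) + I_CNCE(x'; y | x'')  (Eq. 5).

    pos_x2_y: (B,)   critic scores E(x'', y) on joint pairs
    neg_x2_y: (B, M) scores of the same x'' against negatives y ~ p(y)
    pos_cond: (B,)   critic scores E(x'', x', y) on joint pairs
    neg_cond: (B, M) scores against negatives y ~ p(y | x'')
    """
    return contrastive_bound(pos_x2_y, neg_x2_y) + contrastive_bound(pos_cond, neg_cond)
```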
}, { "heading": "4 CONTRASTIVE BOUNDS ON CONDITIONAL MUTUAL INFORMATION", "text": "One of the difficulties in computing the decomposed bound is measuring the conditional mutual information. In this section, we provide bounds and approximations of this quantity. First, we show that we can readily extend InfoNCE. Proposition 1 (Conditional InfoNCE). The following is a lower-bound on the conditional mutual information I(x′; y|x′′) and verifies the properties below:\nICNCE(x ′; y|x′′, E,K) = Ep(x′,x′′,y1)p(y2:K |x′′)\n[ log\neE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,yk)\n] (6)\n1. ICNCE ≤ I(x′; y|x′′).\n2. E∗ = arg supE ICNCE = log p(y|x′′,x′) p(y|x′′) + c(x ′, x′′).\n3. When K →∞ and E = E∗, we recover the true conditional MI: limK→∞ ICNCE(x ′; y|x′′, E∗,K) = I(x′; y|x′′).\nThe proof can be found in Sec. A.2 and follows closely the derivation of the InfoNCE bound by applying a result from Barber & Agakov (2003) and setting the proposal distribution of the variational approximation to p(y|x′′). An alternative derivation of this bound was also presented in parallel in Foster et al. (2020) for optimal experiment design. Eq. 6 shows that a lower bound on the conditional MI can be obtained by sampling contrastive sets from the proposal distribution p(y|x′′). Indeed, since we want to estimate the MI conditioned on x′′, we should allow our contrastive distribution to condition on x′′. Note that E is now a function of three variables.\nComputing Eq. 6 requires access to a large number of samples from p(y|x′′), which is unknown and usually challenging to obtain. In order to overcome this, we propose two solutions." }, { "heading": "4.1 VARIATIONAL APPROXIMATION", "text": "The next proposition shows that it is possible to obtain a bound on the conditional MI by approximating the unknown conditional distribution p(y|x′′) with a variational distribution τ(y|x′′). Proposition 2 (Variational ICNCE). For any variational approximation τ(y|x′′) in lieu of p(y|x′′),\nIV AR(x ′, y|x′′, E, τ,K) = Ep(x′,x′′,y1)τ(y2:K |x′′)\n[ log\neE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,yk)\n] (7)\n− Ep(x′′) [ KL (p(y|x′′) ‖ τ(y|x′′)) ] ,\nwith p(·|x′′) << τ(·|x′′) for any x′′, we have the following properties:\n1. IV AR ≤ I(x′; y|x′′).\n2. If τ(y|x′′) = p(y|x′′), IV AR = ICNCE .\n3. limK→∞ supE IV AR(x ′; y|x′′, E, τ,K) = I(x′; y|x′′).\nSee Sec. A.3 for the proof. This bound side-steps the problem of requiring access to an arbitrary number of contrastive samples from the unknown p(y|x′′) by i.i.d. sampling from the known and\ntractable τ(y|x′′). We prove that as the number of examples goes to∞, optimizing the bound w.r.t. E converges to the true conditional MI. Interestingly, this holds true for any value of τ , though the choice of τ will most likely impact the convergence rate of the estimator.\nEq. 3 is superficially similar to the ELBO (evidence lower bound) objective used to train VAEs (Kingma & Welling, 2014), where τ plays the role of the approximate posterior (although the KL direction in the ELBO is inverted). This parallel suggests that τ∗(y|x′′) = p(y|x′′) may not be the optimal solution for some values of K and E. However, we see trivially that if we ignore the dependency of the first expectation term on τ and only optimize τ to minimize the KL term, then it is guaranteed that τ∗(y|x) = p(y|x′′), for any K and E. 
Thus, by the second property in Proposition 2, optimizing IV AR(E, τ∗,K) w.r.t E will correspond to optimizing ICNCE .\nIn practice, the latter observation significantly simplifies the estimation problem as one can minimize a Monte-Carlo approximation of the KL divergence w.r.t τ by standard supervised learning: we can efficiently approximate the KL by taking samples from p(y|x′′). Those can be directly obtained by using the joint samples from p(x, y) included in the training set and computing x′′ from x.3" }, { "heading": "4.2 IMPORTANCE SAMPLING APPROXIMATION", "text": "Maximizing IV AR can still be challenging as it requires estimating a distribution over potentially high-dimensional inputs. In this section, we provide an importance sampling approximation of ICNCE that bypasses this issue.\nWe start by observing that the optimal critic for INCE(x′′; y|E,K) is Ē(x′′, y) = log p(y|x ′′) p(y) +c(x ′′), for any c. Assuming we have appropriately estimated Ē(x′′, y), it is possible to use importance sampling to produce approximate samples from p(y|x′′). This is achieved by first sampling y′1:M ∼ p(y) and resampling K ≤ M (K > 0) examples i.i.d. from the normalized importance distribution qSIR(yk) = wkδ(yk ∈ y′1:M ), where wk = exp Ē(x′′,yk)∑M m=1 exp Ē(x ′′,ym) . This process is also called “sampling importance resampling” (SIR). As M/K → ∞, it is guaranteed to produce samples from p(y|x′′) (Rubin, 1987). The SIR estimator is written as:\nISIR(x ′, y|x′′, E,K) = Ep(x′′,x′,y1)p(y′1:M )qSIR(y2:K)\n[ 1\nK log\neE(x ′′,x′,y1)∑K\nk=1 e E(x′′,x′,yk)\n] , (8)\nwhere we note the dependence of qSIR on wk and hence Ē. SIR is known to increase the variance of the estimator (Skare et al., 2003) and is wasteful given that only a smaller set of K examples are actually used for MI estimation. Hereafter, we provide a cheap approximation of the SIR estimator.\nThe key idea is to rewrite the contribution of the negative samples in the denominator of Eq. 8 as an average (K − 1) ∑K k=2 1 K−1e\nE(x′′,x′,yk) and use the normalized importance weights wk to estimate that term under the resampling distribution. We hypothesize that this variant has less variance as it does not require the additional resampling step. The following proposition shows that as the number of negative examples goes to infinity, the proposed approximation converges to the true value of the conditional MI.\nProposition 3 (Importance Sampling ICNCE). The following approximation of ISIR:\nIIS(x ′, y|x′′, E,K) = Ep(x′′,x′,y1)p(y2:K) log\neE(x ′′,x′,y1)\n1 K (e E(x′′,x′,y1) + (K − 1) ∑K k=2 wke E(x′′,x′,yk)) ,\n(9)\nwhere wk = exp Ē(x′′,yk)∑K\nk=2 exp Ē(x ′′,yk)\nand Ē = arg supE INCE(x ′′, y|E,K), verifies:\n1. limK→∞ supE IIS(x ′; y|x′′, E,K) = I(x′; y|x′′),\n2. limK→∞ arg supE IIS = log p(y|x′′,x′) p(y|x′′) + c(x ′, x′′).\n3The ability to perform that computation is usually a key assumption in self-supervised learning approaches.\nThe proof can be found in Sec. A.4. This objective up-weights the negative contribution to the normalization term of examples that have high probability under the resampling distribution. This approximation is cheap to compute given that the negative samples still initially come from the marginal distribution p(y) and avoids the need for resampling. The proposition shows that in the limit of K →∞, optimizing IIS w.r.t. E converges to the conditional MI and the optimal E converges to the optimal ICNCE solution. 
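A minimal sketch of the importance-sampling approximation in Eq. (9) is given below; it assumes the conditional critic scores and the separately trained unconditional critic Ē are supplied as tensors, and the small constant added inside the log is a numerical-stability detail of ours.

```python
import math
import torch

def i_is_bound(pos_cond, neg_cond, neg_uncond):
    """Importance-sampling approximation I_IS of the conditional MI (Eq. 9).

    pos_cond:   (B,)     E(x'', x', y_1) for the positive pair
    neg_cond:   (B, K-1) E(x'', x', y_k) for negatives y_k ~ p(y)
    neg_uncond: (B, K-1) E-bar(x'', y_k) for the same negatives, where E-bar is the
                critic trained for the unconditional bound I_NCE(x''; y)
    """
    K = neg_cond.size(1) + 1
    w = torch.softmax(neg_uncond, dim=1)                      # w_k, normalized over negatives
    # Reweighted negative mass (K-1) * sum_k w_k * exp(E_k), computed in log-space.
    log_neg = torch.logsumexp(neg_cond + torch.log(w + 1e-12), dim=1) + math.log(K - 1)
    log_denom = torch.logsumexp(torch.stack([pos_cond, log_neg], dim=1), dim=1) - math.log(K)
    return (pos_cond - log_denom).mean()
```

In effect, negatives that the unconditional critic already finds likely under p(y|x'') receive larger weight in the denominator, which is how the marginal samples are turned into "harder" conditional negatives without any resampling step.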
We also note that we suppose E The IIS approximation provides a general, grounded way of sampling “harder” negatives by filtering samples from the easily-sampled marginal p(y)." }, { "heading": "5 EXPERIMENTS", "text": "We start by investigating whether maximizing the decomposed MI using our conditional MI bound leads to a better estimate of the ground-truth MI in a synthetic experiment. Then, we experiment on a self-supervised image representation learning domain. Finally, we explore an application to natural language generation in a sequential setting, such as conversational dialogue." }, { "heading": "5.1 SYNTHETIC DATA", "text": "We extend Poole et al. (2019)’s two variable setup to three variables. We posit that {x′, x′′, y} are three Gaussian co-variates, x′, x′′, y ∼ N (0,Σ) and we choose Σ such that we can control the total mutual information I({x′, x′′}; y) such that I = {5, 10, 15, 20} (see Appendix for pseudo-code and details of the setup). We aim to estimate the total MI I({x′, x′′}; y) and compare the performance of our approximators in doing so. For more details of this particular experimental setting, see App. B.\nIn Figure 2, we compare the estimate of the MI obtained by:\n1. InfoNCE, which computes INCE({x′, x′′}, y|E,K) and will serve as our baseline; 2. InfoNCEs, which probes the effectiveness of decomposing the total MI into a sum of\nsmaller terms and computes INCE(x′′, y|E,K/2)+ICNCE(x′, y|x′′, E,K/2), whereK/2 samples are obtained from p(y) and K/2 are sampled from p(y|x′′);\n3. InfoNCEs IS, the decomposed bound using our importance sampling approximation to the conditional MI IIS , i.e. INCE(x′′, y|E,K) + IIS(x′, y|x′′, E,K). This does not require access to samples from p(y|x′′) and aims to test the validity of our approximation in an empirical setting. Both terms reuse the same number of samples K.\nFor 2., we use only half as many samples as InfoNCE to estimate each term in the MI decomposition (K/2), so that the total number of negative samples is comparable to InfoNCE. Note that we use K samples in “InfoNCE IS”, because those are reused for the conditional MI computation. All critics E are parametrized by MLPs as explained in Sec. B. Our results in Figure 2 show that, for larger amounts of true MI, decomposing MI as we proposed can capture more nats than InfoNCE with an order magnitude less examples. We also note that the importance sampling estimator seems to" }, { "heading": "Additional Losses / More Epochs", "text": "estimate MI reliably. Its empirical behavior for MI = {5, 10} could indicate that InfoNCEs IS is a valid lower bound on MI, although we couldn’t prove it formally." }, { "heading": "5.2 VISION", "text": "Imagenet We study self-supervised learning of image representations using 224x224 images from ImageNet. The evaluation is performed by fitting a linear classifier to the task labels using the pre-trained representations only, that is, we fix the weights of the pre-trained image encoder f . Each input image is independently augmented into two views x and y using a stochastically applied transformation. For the base model hyper-parameters and augmentations, we follow the “InfoMin Aug.” setup (Tian et al., 2020). This uses random resized crop, color jittering, gaussian blur, rand augment, color dropping, and jigsaw as augmentations and uses a momentum-contrastive memory buffer of K = 65536 examples (Chen et al., 2020b).\nWe fork x into two sub-views {x′, x′′}: we set x′ , x and x′′ to be an information-restricted view of x. 
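As a minimal illustration of how such an information-restricted view can be produced, the sketch below uses the aggressive-crop strategy described next; the 96x96 output size and the scale range (0.05, 0.14) are the values reported below, while the rest of the augmentation pipeline (color jitter, blur, etc.) is omitted for brevity.

import torchvision.transforms as T

# x' keeps the full augmented view; x'' is an aggressively cropped, information-restricted view.
restricted_crop = T.RandomResizedCrop(96, scale=(0.05, 0.14))

def make_views(x):
    # x is a PIL image (or tensor, in recent torchvision versions)
    x_prime = x                          # x' := x
    x_double_prime = restricted_crop(x)  # x'' := information-restricted view of x
    return x_prime, x_double_prime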
We found beneficial to maximize both decompositions of the MI: I(x′; y) + I(x′′; y|x′) = I(x′′; y) + I(x′; y|x′′). By noting that I(x′′; y|x′) is likely zero given that the information of x′′ is contained in x′, our encoder f is trained to maximize:\nL = λ INCE(x′; y|f,K) + (1− λ) ( INCE(x ′′; y|f,K) + IIS(x′; y|x′′, f,K) )\n(10)\nNote that if x′′ = x, then our decomposition boils down to maximizing the standard InfoNCE bound. Therefore, InfoMin Aug. is recovered by fixing λ = 1 or by setting x′′ = x. The computation of the conditional MI term does not add computational cost as it can be computed by caching the logits used in the two unconditional MI terms (see Sec. B).\nWe experiment with two ways of obtaining restricted information views x′′: cut, which applies cutout to x, and crop which is inspired by Caron et al. (2020) and consists in cropping the image aggressively and resizing the resulting crops to 96x96. To do so, we use the RandomResizedCrop from the torchvision.transforms module with parameters: s = (0.05, 0.14). Results are reported in Table 1. Augmenting the InfoMin Aug. base model with our conditional contrastive loss leads to 0.8% gains on top-1 accuracy and 0.6% on top-5 accuracy. We notice that the crop strategy seems to perform slightly better than the cut strategy. One reason could be that cutout introduces image patches that do not follow the pixel statistics in the corpus. More generally, we think there could be information restricted views that are better suited than others. In order to isolate the impact on performance due to integrating an additional view x′′, i.e. the INCE(x′′; y|f,K) term in the optimization, we set the conditional mutual information term to zero in the line “without cond. MI”. We see that this does not improve over the baseline InfoMin Aug., and its performance is 1% lower than our method, pointing to the fact that maximizing conditional MI across views provides the observed gains. We also include the very recent results of SwAV (Caron et al., 2020) and ByOL (Grill et al., 2020) which use a larger number of views (SwAV) and different loss functions (SwAV, ByOL)\nand thus we think are orthogonal to our approach. We think our approach is general and could be integrated in those solutions as well.\nCIFAR-10 We also experiment on CIFAR-10 building upon SimCLR (Chen et al., 2020b), which uses a standard ResNet-50 architecture by replacing the first 7x7 Conv of stride 2 with 3x3 Conv of stride 1 and also remove the max pooling operation. In order to generate the views, we use Inception crop (flip and resize to 32x32) and color distortion. We train with learning rate 0.5, batch-size 800, momentum coefficient of 0.9 and cosine annealing schedule. Our energy function is the cosine similarity between representations scaled by a temperature of 0.5 (Chen et al., 2020b). We obtain a top-1 accuracy of 94.7% using a linear classifier compared to 94.0% as reported in Chen et al. (2020b) and 95.1% for a supervised baseline with same architecture." }, { "heading": "5.3 DIALOGUE", "text": "For dialogue language modeling, we adopt the predictive coding framework (Elias, 1955; McAllester & Stratos, 2018) and consider past and future in a dialogue as views of the same conversation. Given L utterances x = (x1, . . . , xL), we maximize INCS(x≤k;x>k|f,K), where past x≤k = (x1, . . . , xk) and future x>k = (xk+1, . . . , xL) are obtained by choosing a split point 1 < k < L. 
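A minimal sketch of this past/future split is given below; the caps of five past and three future utterances follow the setup of Appendix C.2, while the uniform sampling of k is an assumption made here for illustration.

import random

def split_dialogue(utterances, max_past=5, max_future=3):
    # Split a dialogue x = (x_1, ..., x_L) into past x_{<=k} and future x_{>k}.
    L = len(utterances)
    k = random.randint(2, L - 1)                    # split point with 1 < k < L (assumes L >= 3)
    past = utterances[max(0, k - max_past):k]       # up to five past utterances
    future = utterances[k:k + max_future]           # up to three future utterances
    return past, future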
We obtain f(x≤k), f(x>k) by computing a forward pass of the fine-tuned “small” GPT2 model (Radford et al., 2019) on past and future tokens, respectively, and obtaining the state corresponding to the last token in the last layer.\nWe evaluate our introduced models against different baselines. GPT2 is a basic small pre-trained model fine-tuned on the dialogue corpus. TransferTransfo (Wolf et al., 2019) augments the standard next-word prediction loss in GPT2 with the next-sentence prediction loss similar to Devlin et al. (2019). Our baseline GPT2+InfoNCE maximizes INCE(x≤k;x>k|f,K) in addition to standard nextword prediction loss. In GPT2+InfoNCES , we further set x′ = x≤k and x′′ = xk, the recent past, and maximize INCES(x≤k, x>k). To maximize the conditional MI bound, we sample contrastive futures from p(x>k|xk; θGPT2), using GPT2 itself as the variational approximation4. We fine-tune all models on the Wizard of Wikipedia (WoW) dataset (Dinan et al., 2018) with early stopping on validation perplexity. We evaluate our models using automated metrics and human evaluation: we report perplexity (ppl), BLEU (Papineni et al., 2002), and word-repetition-based metrics from Welleck et al. (2019), specifically: seq-rep-n measures the portion of duplicate n-grams and seq-rep-avg averages over n ∈ {2, 3, 4, 5, 6}. We measure diversity via dist-n (Li et al., 2016), the number of unique n-grams, normalized by the total number of n-grams.\nTable 4 shows results on the validation set. For the test set results, please refer to the Appendix. Incorporating InfoNCE yields improvements in all metrics5. Please refer to the Appendix for sample dialogue exchanges. We also perform human evaluation on randomly sampled 1000 WoW dialogue contexts. We present the annotators with a pair of candidate responses consisting of GPT2+InfoNCES responses and baseline responses. They were asked to compare the pairs regarding interestingness, relevance and humanness, using a 3-point Likert scale (Zhang et al., 2019). Table 4 lists the difference between fraction of wins for GPT2+InfoNCES and other models as H-rel, H-hum, and H-int. Overall, GPT2+InfoNCES was strongly preferred over GPT2, TransferTransfo and GPT2+InfoNCE, but not the gold response. Bootstrap confidence intervals and p-values (t-test) indicate all improvements except for GPT2+InfoNCE on the relevance criterion are significant at α=0.05." }, { "heading": "6 DISCUSSION", "text": "The result in Eq. 5 is reminiscent of conditional noise-contrastive estimation (CNCE) (Ceylan & Gutmann, 2018) which proposes a framework for data-conditional noise distributions for noise contrastive estimation (Gutmann & Hyvärinen, 2012). Here, we provide an alternative interpretation in terms of a bound on conditional mutual information. In CNCE, the proposal distribution is obtained by noising the conditional proposal distribution. It would be interesting to investigate whether it\n4The negative sampling of future candidates is done offline. 5Note that our results are not directly comparable with Li et al. (2019) as their model is trained from scratch\non a not publicly available Reddit-based corpus.\nis possible to form information-restricted views by similar noise injection, and whether “optimal” info-restricted views exist.\nRecent work questioned whether MI maximization itself is at the core of the recent success in representation learning (Rainforth et al., 2018; Tschannen et al., 2019). 
These observed that models capturing a larger amount of mutual information between views do not always lead to better downstream performance and that other desirable properties of the representation space may be responsible for the improvements (Wang & Isola, 2020). Although we acknowledge that various factors can be at play for downstream performance, we posit that devising more effective ways to maximize MI will still prove useful in representation learning, especially if paired with architectural inductive biases or explicit regularization methods." }, { "heading": "A DERIVATIONS", "text": "A.1 DERIVATION OF INFONCE, INCE\nWe start from Barber and Agakov’s variational lower bound on MI (Barber & Agakov, 2003). I(x; y) can be bounded as follows:\nI(x; y) = Ep(x,y) log p(y|x) p(y) ≥ Ep(x,y) log q(y|x) p(y) , (11)\nwhere q is an arbitrary distribution. We show that the InfoNCE bound (Oord et al., 2018) corresponds to a particular choice for the variational distribution q followed by the application of the Jensen inequality. Specifically, q(y|x) is defined by independently sampling a set of examples {y1, . . . , yK} from a proposal distribution π(y) and then choosing y from {y1, . . . , yK} in proportion to the importance weights wy = eE(x,y)∑ k e E(x,yk) , where E is a function that takes x and y and outputs a scalar. In the context of representation learning, E is usually a dot product between some representations of x and y, e.g. f(x)T f(y) (Oord et al., 2018). The unnormalized density of y given a specific set of samples y2:K = {y2, . . . , yK} and x is:\nq(y|x, y2:K) = π(y) · K · eE(x,y) eE(x,y) + ∑K k=2 e E(x,yk) , (12)\nwhere we introduce a factor K which provides “normalization in expectation”. By normalization in expectation, we mean that taking the expectation of q(y|x, y2:K) with respect to resampling of the alternatives y2:K from π(y) produces a normalized density (see Sec. A.1.1 for a derivation):\nq̄(y|x) = Eπ(y2:K)[q(y|x, y2:K)], (13) where π(y2:K) = ∏K k=2 π(yk). The InfoNCE bound (Oord et al., 2018) is then obtained by setting the proposal distribution as the marginal distribution, π(y) ≡ p(y) and applying Jensen’s inequality, giving:\nI(x, y) ≥ Ep(x,y) log Ep(y2:K)q(y|x, y2:K)\np(y) ≥ Ep(x,y)\n[ Ep(y2:K) log p(y)K · wy p(y) ] = Ep(x,y) [ Ep(y2:K) log K · eE(x,y)\neE(x,y) + ∑K k=2 e E(x,yk)\n]\n= Ep(x,y1)p(y2:K) [ log eE(x,y)\n1 K ∑K k=1 e E(x,yk)\n] = INCE(x; y|E,K) ≤ logK, (14)\nwhere the second inequality has been obtained using Jensen’s inequality." }, { "heading": "A.1.1 DERIVATION OF NORMALIZED DISTRIBUTION", "text": "We follow Cremer et al. (2017) to show that q(y|x) = Ey2:K∼π(y)[q(y|x, y2:K)] is a normalized distribution:∫ x q(y|x) dy = ∫ y Ey2:K∼π(y) π(y) eE(x,y) 1 K (∑K k=2 e E(x,yk) + eE(x,y) ) dy\n= ∫ y π(y)Ey2:K∼π(y) eE(x,y) 1 K (∑K k=2 e E(x,yk) + eE(x,y) ) dy\n= Eπ(y)Eπ(y2:K) eE(x,y) 1 K (∑K k=2 e E(x,yk) + eE(x,y) ) \n= Eπ(y1:K)\n( eE(x,y)\n1 K ∑K k=1 e E(x,yk)\n)\n= K · Eπ(y1:K) ( eE(x,y1)∑K k=1 e E(x,yk) )\n= K∑ i=1 Eπ(y1:K) eE(x,yi)∑K k=1 e E(x,yk)\n= Eπ(y1:K) ∑K i=1 e\nE(x,yi)∑K k=1 e E(x,yk) = 1 (15)\nA.2 PROOFS FOR ICNCE\nProposition 1 (Conditional InfoNCE). The following is a lower-bound on the conditional mutual information I(x′; y|x′′) and verifies the properties below:\nICNCE(x ′; y|x′′, E,K) = Ep(x′,x′′,y1)p(y2:K |x′′)\n[ log\neE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,yk)\n] (6)\n1. ICNCE ≤ I(x′; y|x′′).\n2. E∗ = arg supE ICNCE = log p(y|x′′,x′) p(y|x′′) + c(x ′, x′′).\n3. 
When K →∞ and E = E∗, we recover the true conditional MI: limK→∞ ICNCE(x ′; y|x′′, E∗,K) = I(x′; y|x′′).\nProof. We begin with 1., the derivation is as follows:\nI(x′; y|x′′) = Ep(x′′,x′,y) log p(y|x′′, x′) p(y|x′′) ≥ Ep(x ′′,x′,y) log q̄(y|x′′, x′) p(y|x′′) (16)\n= Ep(x′′,x′,y) log Ep(y2:K |x′′)q(y|x ′′, x′, y2:K)\np(y|x′′) (17)\n≥ Ep(x′′,x′,y)Ep(y2:K |x′′) log p(y|x′′)K · wy\np(y|x′′) (18)\n= Ep(x′′,x′,y)Ep(y2:K |x′′) log K · eE(x ′′,x′,y)∑K k=1 e E(x′′,x′,yk) (19)\n= Ep(x′′,x′,y)Ep(y2:K |x′′) log eE(x\n′′,x′,y)\n1 K ∑K k=1 e E(x′′,x′,yk) (20)\n= ICNCE(x ′; y|x′′, E,K), (21)\nwhere we used in Eq. 16 the Jensen’s inequality following Barber and Agakov’s bound (Barber & Agakov, 2003) and used p(y|x′′) as our proposal distribution for the variational approximation q̄(y|x′′, x′).\nFor 2., we rewrite ICNCE by grouping the expectation w.r.t x′′:\nEp(x′′) [ Ep(x′,y1|x′′)p(y2:K |x′′) [ log\neE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,yk)\n] ] . (22)\nGiven that both distributions in the inner-most expectation condition on the same x′′, this term has the same form as INCE and therefore the optimal solution is E∗x′′ = log p(y|x′,x′′) p(y|x′′) + cx′′(x\n′) (Ma & Collins, 2018). The optimal E for ICNCE is thus obtained by choosing E(x′′, x′, y) = E∗x ′′ for each x′′, giving E∗ = log p(y|x\n′,x′′) p(y|x′′) + c(x ′, x′′).\nFor proving 3., we substitute the optimal critic and take the limit K →∞. We have:\nlim K→∞\nEp(x′′,x′,y1)p(y2:K |x′′) [ log\np(y|x′′,x′) p(y|x′′)\n1 K ( p(y1|x′′,x′) p(y1|x′′) + ∑K k=2 p(yk|x′′,x′) p(yk|x′′) ) ], (23) From the Strong Law of Large Numbers, we know that as 1 K−1 ∑K−1 k=1 p(yk|x′′,x′) p(yk|x′′) → Ep(y|x′′) p(y|x ′′,x′)\np(y|x′′) = 1, as K →∞ a.s., therefore (relabeling y = y1):\nICNCE ∼K→∞ Ep(x′′,x′,y) [ log\np(y|x′′,x′) p(y|x′′)\n1 K ( p(y|x′′,x′) p(y|x′′) +K − 1 ) ] (24) ∼K→∞ Ep(x′′,x′,y) [ log\np(y|x′′, x′) p(y|x′′) + log K( p(y|x′′,x′) p(y|x′′) +K − 1 ) ] (25) ∼K→∞ I(x′, y|x′′), (26)\nwhere the last equality is obtained by noting that the second term→ 0." }, { "heading": "A.3 PROOFS FOR IV AR", "text": "Proposition 2 (Variational ICNCE). For any variational approximation τ(y|x′′) in lieu of p(y|x′′),\nIV AR(x ′, y|x′′, E, τ,K) = Ep(x′,x′′,y1)τ(y2:K |x′′)\n[ log\neE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,yk)\n] (7)\n− Ep(x′′) [ KL ( p(y|x′′) ‖ τ(y|x′′) ) ] ,\nwith p(·|x′′) << τ(·|x′′) for any x′′, we have the following properties:\n1. IV AR ≤ I(x′; y|x′′).\n2. If τ(y|x′′) = p(y|x′′), IV AR = ICNCE .\n3. limK→∞ supE IV AR(x ′; y|x′′, E, τ,K) = I(x′; y|x′′).\nProof. For 1., we proceed as follows: I(x′; y|x′′) ≥ Ep(x,y) [ log\nq(y|x′′, x′)τ(y|x′′) p(y|x′′)τ(y|x′′) ] = Ep(x,y) [ log\nq(y|x′′, x′) τ(y|x′′)\n] − Ep(x) [ KL(p(y|x′′)‖τ(y|x′′)) ] ≥ Ep(x,y1)τ(y2:K |x′′) [ log eE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,y1)\n] − Ep(x) [ KL(p(y|x′′) ‖ τ(y|x′′)) ] ,\n= IV AR(x ′, y|x′′, E, τ,K) (27)\nwhere the last step has been obtained as in Eq. 18.\nProving 2. is straightforward by noting that if τ = p, KL(p(y|x′′)||τ(y|x′′)) = 0 and the first term corresponds to ICNCE .\nProving 3. goes as follows:\nsup E\nEp(x′,x′′,y1)τ(y2:K |x′′) [ log eE(x ′′,x′,y1)\n1 K ∑K k=1 e E(x′′,x′,yk)\n] − Ep(x′′) [ KL ( p(y|x′′) ‖ τ(y|x′′) ) ] (28)\n= Ep(x′′,x′,y1)τ(y2:K |x′′)\n[ log\np(y1|x′′, x′) τ(y1|x′′) − log p(y1|x ′′) τ(y1|x′′) − log 1 K K∑ k=1 p(yk|x′, x′′) τ(yk|x′′) ] (29)\n= I(x′, y|x′′)− Ep(x′′,x′,y1)τ(y2:K |x′′) [ log 1\nK K∑ k=1 p(yk|x′, x′′) τ(yk|x′′) ] (30)\n→K→∞ I(x′, y|x′′). 
(31)\nThis is obtained by noting that (1) for any K and τ , arg supE IV AR = p(y|x′′,x′) τ(y|x′) (because the KL doesn’t depend on E) and (2) the second term in the last line goes to 0 for K → ∞ (a straightforward application of the Strong Law of Large Numbers shows that for samples y2:K drawn from τ(y2:K |x′′), we have: 1 K ∑K k=2 p(yk|x′,x′′) τ(yk|x′′) →K→∞ 1).\nA.4 PROOFS FOR IIS\nWe will be using the following lemma. Lemma 1. For any x′′, x′ and y, and any sequence EK such that ||EK − E||∞ →K→∞ 0:\nlim K→∞\nEp(y2:K) log KeEK(x\n′′,x′,y) eEK(x′′,x′,y) + (K − 1) ∑K k=2 wke EK(x ′′,x′,yk)\n(32)\n= lim K→∞\nEp(y2:K |x′′) log KeE(x\n′′,x′,y) eE(x′′,x′,y) + ∑K k=2 e E(x′′,x′,yk) ,\n(33)\nwhere wk = exp Ē(x′′,yk)∑K\nk=2 exp Ē(x′′,yk)\nfor Ē(x′′, yk) = arg supE INCE(x ′′, y|E,K) = p(yk|x ′′) p(yk) .\nProof. We see that almost surely, for y2:K ∼ p(·): K∑ k=2 wke EK(x ′′,x′,yk) = 1 K−1 ∑K k=2 p(yk|x′′) p(yk) eEK(x ′′,x′,yk) 1 K−1 ∑K k=2 p(yk|x′′) p(yk) →K→∞ Ep(y|x′′)eE(x ′′,x′,y), (34) where we applied the Strong Law of Large Numbers to the denominator.\nFor the numerator, we write:\n1\nK − 1 K∑ k=2 p(yk|x′′) p(yk) eEK(x ′′,x′,yk) = 1 K − 1 K∑ k=2 p(yk|x′′) p(yk) eE(x ′′,x′,yk)\n+ 1\nK − 1 K∑ k=2 p(yk|x′′) p(yk) (eEK(x ′′,x′,yk) − eE(x ′′,x′,yk))\nand note that the first term is the standard IS estimator using p(yk) as proposal distribution and tends to Ep(y|x′′)eE(x\n′′,x′,y) from the Strong Law of Large Numbers, while the second term goes to 0 as EK tends to E uniformly.\nThis gives limK→∞ Ep(y2:K) log KeEK (x\n′′,x′,y)\neEK (x ′′,x′,y)+(K−1) ∑K k=2 wke EK (x ′′,x′,yk) = log e E(x′′,x′,y) Ep(y|x′′)eE(x ′′,x′,y) .\nFollowing the same logic, without the importance-sampling demonstrates that:\nlim K→∞\nEp(y2:K |x′′) log KeE(x\n′′,x′,y) eE(x′′,x′,y) + ∑K k=2 e E(x′′,x′,yk) = log\neE(x ′′,x′,y)\nEp(y|x′′)eE(x′′,x′,y) ,\nwhich concludes the proof.\nProposition 3 (Importance Sampling ICNCE). The following approximation of ISIR:\nIIS(x ′, y|x′′, E,K) = Ep(x′′,x′,y1)p(y2:K) log\neE(x ′′,x′,y1)\n1 K (eE(x′′,x′,y1) + (K − 1) ∑K k=2 wke E(x′′,x′,yk)) , (9)\nwhere wk = exp Ē(x′′,yk)∑K\nk=2 exp Ē(x′′,yk)\nand Ē = arg supE INCE(x ′′, y|E,K), verifies:\n1. limK→∞ supE IIS(x ′; y|x′′, E,K) = I(x′; y|x′′),\n2. limK→∞ arg supE IIS = log p(y|x′′,x′) p(y|x′′) + c(x ′, x′′).\nProof. By applying Lemma 1 with EK = E, we know that for any E:\nlim K→∞\nIIS(x ′; y|x′′, E, Ē,K) = lim\nK→∞ Ep(x′′,x′,y)p(y2:K |x′′) log\nKeE(x ′′,x′,y) eE(x′′,x′,y) + ∑K k=2 e E(x′′,x′,yk) .\nIn particular, the RHS of the equality corresponds to limK→∞ ICNCE(x′, y|x′′, E,K). That quantity is smaller than I(x′, y|x′′), with equality for E = E∗. This guarantees that:\nlim K→∞ sup E IIS(x ′; y|x′′, E, Ē,K) ≥ lim K→∞ IIS(x ′; y|x′′, E∗, Ē,K) = I(x′, y|x′′). (35)\nWe now prove the reverse inequality. We let 2 = limK→∞ supE IIS(x ′; y|x′′, E, Ē,K)− I(x′, y|x′′), and assume toward a contradiction that > 0. We know that: ∃K0, ∀K ≥ K0, sup\nE IIS(x\n′; y|x′′, E, Ē,K) ≥ I(x′, y|x′′) + .\nNow, ∀K ≥ K0, let EK be such that: IIS(x\n′; y|x′′, EK , Ē,K) ≥ sup E IIS(x ′; y|x′′, E, Ē,K)− 2 ,\nand thus: ∀K ≥ K0, IIS(x′; y|x′′, EK , Ē,K) ≥ I(x′, y|x′′) + 2 .\nSince EK ∈ R|X|×|X|×|Y|, {EK}K≥K0 contains a subsequence that converges to a certain E∞ ∈ R̄|X|×|X|×|Y|. Without loss of generality, we assume that ∀K, ∀x′′,∀x′,Ep(y)[EK(x′′, x′, y)] = 0 which implies that Ep(y)[E∞(x′′, x′, y)] = 0 (similarly to INCE , IIS is invariant to constants added to E).\nIn particular, this guarantees that ||E∞||∞ <∞. 
Otherwise, we would have E∞(x′′, x′, y) = −∞ for a given y, which would then imply IIS(x′; y|x′′, E∞, Ē,K) = −∞ and give a contradiction.\nWe can now apply Lemma 1 to {EK} and E∞ to show that limK→∞ IIS(x′; y|x′′, EK , Ē,K) = limK→∞ ICNCE(x\n′, y|x′′, E∞,K), and get a contradiction: the first term is larger than I(x′, y|x′′) + 2 while the second is smaller than I(x′, y|x′′)." }, { "heading": "B PSEUDOCODE", "text": "" }, { "heading": "B.1 LOSS COMPUTATION", "text": "We provide a pseudo-code for the loss computation which uses MocoV2 backbone comprising a memory of contrastive examples obtained using a momentum-averaged encoder (Chen et al., 2020b).\ndef compute_loss(xp, xpp, y, f, f_ema, memory, lam=0.5): \"\"\" Args:\nxpp: info-restricted view xp: a view y: a view f: standard encoder f_ema: momentum averaged encoder memory: memory bank of representations\nReturns: lam * mi(xp; y) + (1 - lam) * (mi(xpp; y) + mi(xp; y | xpp)) \"\"\" # encode xp and xpp with standard encoder, (1, dim) q_xp, q_xpp = f(x_p), f(x_pp) # encode y with momentum-averaged encoder, (1, dim) k_y = f_ema(y).detach() # (1 + n_mem,), first is xpp_y score logits_xpp_y = dot(q_xpp, cat(k_y, memory)) # (1 + n_mem,), first is xp_y score logits_xp_y = dot(q_xp, cat(k_y, memory)) # infonce bound between xp and y nce_xp_y = -log_softmax(logits_xp_y)[0] # infonce bound between xpp and y nce_xpp_y = -log_softmax(logits_xpp_y)[0] K = len(logits_xpp_y) # compute resampling importance weights w_pp_y = softmax(logits_xpp_y[1:]) # form approximation to the partition function (Eq. 12) Z_xp_y = (K - 1) * w_pp_y * exp(logits_xp_y[1:]) Z_xp_y = Z_xp_y.sum() + exp(logits_xp_y[0]) # infonce bound on the conditional mutual information nce_xp_y_I_xpp = -logits_xp_y[0] + log(Z_xp_y) # compose final loss loss = lam * nce_xp_y loss += (1-lam) * (nce_xpp_y + nce_xp_y_I_xpp) return loss" }, { "heading": "B.2 SYNTHETIC EXPERIMENTS", "text": "Here, we provide details for Sec. 5.1. In this experiment, each x′, x′′ and y are 20-dimensional. For each dimension, we sampled (x′i, x ′′ i , yi) from a correlated Gaussian with mean 0 and covariance matrix covi. For a\ngiven value of MI, mi = {5, 10, 15, 20}, we sample covariance matrices covi = sample_cov(mii), such that∑ i mii = mi, mii chosen at random. We optimize the bounds by stochastic gradient descent (Adam, learning rate 5 · 10−4). All encoders f are multi-layer perceptrons with a single hidden layer and ReLU activation. Both hidden and output layer have size 100.\nInfoNCE computes:\nEp [ log\nef([x ′,x′′])T f(y) ef([x′,x′′])T f(y) + ∑K k=2 e f([x′,x′′])T f(yk)\n] + logK, y2:K ∼ p(y),\nwhere the proposal is the marginal distribution p(y), E is chosen to be a dot product between representations, Ep denotes expectation w.r.t. the known joint distribution p(x′, x′′, y) and is approximated with Monte-Carlo, [x′, x′′] denotes concatenation and f is a 1-hidden layer MLP.\nInfoNCEs computes:\nEp(x′′,x′,y)p(y2:K)\n[ log\nef(x ′′)T f(y) ef(x′′)T f(y) + ∑K k=2 e f(x′′)T f(yk)\n] + (36)\nEp(x′′,x′,y)p(y2:K |x′′)\n[ log\nef([x ′′,x′])T f(y) ef([x′′,x′])T f(y) + ∑K k=2 e f([x′′,x′])T f(yk)\n] + 2 logK\nwhere f(x) is just f([x,0]) in order to re-use MLP parameters for the two terms. The negative samples of the conditional MI term come from the conditional distribution p(y|x′′), which is assumed to be known in this controlled setting. 
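A minimal sketch of this shared critic is given below (PyTorch is assumed; the class and function names are illustrative):

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # 1-hidden-layer MLP with hidden and output size 100, used for every critic term.
    # Single-variable inputs are zero-padded, i.e. f(x) := f([x, 0]), so one set of
    # parameters is reused for the unconditional and conditional terms.
    def __init__(self, dim=20, width=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, width), nn.ReLU(), nn.Linear(width, width))

    def forward(self, a, b=None):
        if b is None:
            b = torch.zeros_like(a)
        return self.net(torch.cat([a, b], dim=-1))

f = Encoder()

def energy(x_p, x_pp, y):
    # E([x', x''], y) = f([x', x''])^T f([y, 0]): a dot product between representations
    return (f(x_p, x_pp) * f(y)).sum(dim=-1)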
We maximize both lower bounds with respect to the encoder f .\nWe report pseudo-code for sample_cov, used to generate 3×3 covariance matrices for a fixed mi = I({x′, x′′}; y) and uniformly sampled α = I(x′′; y)/I({x′, x′′}; y): def sample_cov(mi): alpha = random.uniform(0.1, 0.9) params = random.normal(0, I6) # use black box optimizer (Nealder-Mead) to determine opt_params opt_param = arg minx residual(params, mi, α) return project_posdef(opt_params)\ndef project_posdef(x): # project x ∈ R6 to a positive definite 3x3 matrix cov = zeros(3, 3) cov[tril_indices(3)] = x cov /= column_norm(cov) return dot(cov, cov.T)\ndef analytical_mi(cov): # compute analytical MI of 3 covariate Gaussian variables cov_01 = cov[:2, :2] cov_2 = cov[2:3, 2:3] mi_xp_xpp_y = 0.5 * (log(det(cov_01)) + log(det(cov_2)) - log(det(cov))) cov_1 = cov[1:2, 1:2] cov_23 = cov[1:, 1:] mi_xp_y = 0.5 * (log(det(cov_1)) + log(det(cov_2)) - log(det(cov_23))) return mi_xp_xpp_y, mi_xp_y\ndef residual(x, mi, α): # penalize difference between analytical mi and target mi, α mi cov = project_posdef(x) mi_xp_y, mi_xp_y = analytical_mi(cov) return (mi_xp_xpp_y - mi) ** 2 + (mi_xp_y - α * mi) ** 2" }, { "heading": "C EXPERIMENTS ON DIALOGUE", "text": "C.1 INFONCE DETAILS\nFor all InfoNCE terms, given the past, the model is trained to pick the ground-truth future among a set of N future candidates. This candidate set includes the ground-truth future and N − 1 negative futures drawn from different proposal distributions. To compute InfoNCE(f(x≤k); f(x>k)), we consider the ground truth future of each sample as a negative candidate for the other samples in the batch. Using this approach, the number of candidates N is equated to the batch size. This ensures that negative samples are sampled from the marginal distribution p(x>k). To compute the conditional information bound InfoNCES , we sample negative futures p(y|xk) by leveraging the GPT2 model itself, by conditioning the model only on the most recent utterance xk in the past." }, { "heading": "C.2 EXPERIMENTAL SETUP", "text": "Given memory constraints, the proposed models are trained with a batch size of 5 per GPU over 10 epochs, considering up to three utterances for the future and five utterances in the past. All the models are trained on 2 NVIDIA V100s. The models early-stop in the 4th epoch. We use the Adam optimizer with a learning rate of 6.25× 10−5, which we linearly decay to zero during training. Dropout is set to 10% on all layers. InfoNCE/InfoNCES terms are weighted with a factor 0.1 in the loss function.\ny ∼ p(y|xk) Bgt Yeah nowadays they are a lot more stable and well made.\ny1:N ∼ p(y|xk)\nB1 :That is great. I’ve been skydiving for days now . How is it ?\nModel ppl seq-rep rep wrep uniq dist-1 dist-2 BLEU\nGPT2 19.24 0.064 0.130 0.132 7393 0.064 0.392 0.775 TransferTransfo 19.33 0.078 0.134 0.132 7735 0.058 0.386 0.752 GPT2+InfoNCE (ours) 18.88 0.065 0.126 0.131 8432 0.065 0.390 0.799 GPT2+InfoNCES (ours) 18.76 0.050 0.120 0.128 8666 0.070 0.405 0.810\nGround Truth – 0.052 0.095 – 9236 0.069 0.416 –" }, { "heading": "C.3 HUMAN EVALUATION", "text": "We closely follow the protocol used in Zhang et al. (2019). Systems were paired and each response pair was presented to 3 judges in random order on a 3 point Likert scale. We use a majority vote for each response pair to decide whether system1, system2, or neither, performed better. We then bootstrap the set of majority votes to obtain a 95% confidence interval on the expected difference between system1 and system2. 
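A minimal sketch of this bootstrap step is given below (NumPy is assumed; per-pair majority votes are encoded as +1, -1 or 0 for a system1 win, a system2 win, or a tie):

import numpy as np

def bootstrap_ci(votes, n_boot=10000, alpha=0.05, rng=np.random.default_rng(0)):
    # Returns a (1 - alpha) bootstrap confidence interval on the expected win-rate difference.
    votes = np.asarray(votes, dtype=float)
    idx = rng.integers(0, len(votes), size=(n_boot, len(votes)))   # resample pairs with replacement
    means = votes[idx].mean(axis=1)                                # bootstrap replicates of the mean
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])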
If this confidence interval contains 0, the difference is deemed insignificant. We also compute p-values from the confidence intervals6.\nIn the following tables, “pivot” is always the system given by our full InfoNCES model. Pairings where the pairwise confidence interval is marked with “*” have a significant difference between systems.\nHuman Evaluation: Which response is more relevant?\npivot_wins pivot_CI cmpsys_wins cmpsys_CI pairwise_CI p cmp_sys\nGPT2 0.48726 (0.46, 0.52] 0.28662 (0.26, 0.32] (0.15, 0.26]* 1.24835e-12 GPT2MMI 0.65833 (0.62, 0.7] 0.16250 (0.13, 0.2] (0.43, 0.56]* 6.11888e-42 GPT2_NSP 0.46888 (0.44, 0.5] 0.30043 (0.27, 0.33] (0.11, 0.22]* 6.67922e-09 InfoNCE 0.41711 (0.39, 0.45] 0.36748 (0.34, 0.4] (-0.01, 0.11] 8.09387e-02 gold_response 0.22679 (0.2, 0.25] 0.54325 (0.51, 0.58] (-0.37, -0.27]* 3.26963e-23\n6https://www.bmj.com/content/343/bmj.d2304\nHuman Evaluation: Which response is more humanlike?\npivot_wins pivot_CI cmpsys_wins cmpsys_CI pairwise_CI p cmp_sys\nGPT2 0.45084 (0.42, 0.48] 0.32636 (0.3, 0.36] (0.07, 0.18]* 1.17277e-05 GPT2MMI 0.61734 (0.57, 0.66] 0.18393 (0.15, 0.22] (0.36, 0.5]* 1.73160e-30 GPT2_NSP 0.43617 (0.41, 0.47] 0.35000 (0.32, 0.38] (0.03, 0.14]* 2.92302e-03 InfoNCE 0.44630 (0.42, 0.48] 0.34515 (0.32, 0.38] (0.04, 0.16]* 4.45383e-04 gold_response 0.22164 (0.2, 0.25] 0.56608 (0.53, 0.6] (-0.4, -0.29]* 9.29316e-28\nHuman Evaluation: Which response is more interesting?\npivot_wins pivot_CI cmpsys_wins cmpsys_CI pairwise_CI p cmp_sys\nGPT2 0.56157 (0.53, 0.59] 0.21444 (0.19, 0.24] (0.3, 0.4]* 2.13032e-36 GPT2MMI 0.68750 (0.65, 0.73] 0.12292 (0.09, 0.15] (0.5, 0.63]* 6.66687e-63 GPT2_NSP 0.51931 (0.49, 0.55] 0.24571 (0.22, 0.27] (0.22, 0.33]* 2.30585e-22 InfoNCE 0.41288 (0.38, 0.44] 0.33580 (0.31, 0.37] (0.02, 0.13]* 5.84741e-03 gold_response 0.32384 (0.29, 0.35] 0.46624 (0.44, 0.5] (-0.2, -0.09]* 1.08781e-03" } ]
2020
null
SP:70bed0f6f729c03edcb03678fca53e1d82fc06ab
[ "The paper proposes a continual learning framework based on Bayesian non-parametric approach. The hidden layer is modeled using Indian Buffet Process prior. The inference uses a structured mean-field approximation with a Gaussian family for the weights, and Beta-Bernoulli for the task-masks. The variational inference is done with Bayes-by-backprop on a common ELBO setup. The experiments show less diminishing accuracy on the increment of tasks on five datasets for the discriminative problem, and for generation the methods learn one digit or character at a time on MNIST and notMNIST datasets." ]
Continual Learning is a learning paradigm where learning systems are trained on a sequence of tasks. The goal here is to perform well on the current task without suffering from a performance drop on the previous tasks. Two notable directions among the recent advances in continual learning with neural networks are (1) variational Bayes based regularization by learning priors from previous tasks, and, (2) learning the structure of deep networks to adapt to new tasks. So far, these two approaches have been orthogonal. We present a novel Bayesian framework for continual learning based on learning the structure of deep neural networks, addressing the shortcomings of both these approaches. The proposed framework learns the deep structure for each task by learning which weights to be used, and supports inter-task transfer through the overlapping of different sparse subsets of weights learned by different tasks. An appealing aspect of our proposed continual learning framework is that it is applicable to both discriminative (supervised) and generative (unsupervised) settings. Experimental results on supervised and unsupervised benchmarks shows that our model performs comparably or better than recent advances in continual learning.
[]
[ { "authors": [ "Tameem Adel", "Han Zhao", "Richard E. Turner" ], "title": "Continual learning with adaptive weights (claw)", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hongjoon Ahn", "Sungmin Cha", "Donggyu Lee", "Taesup Moon" ], "title": "Uncertainty-based continual learning with adaptive regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D McAuliffe" ], "title": "Variational inference: A review for statisticians", "venue": "Journal of the American statistical Association,", "year": 2017 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": null, "year": 2015 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-GEM", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Natalia Díaz-Rodríguez", "Vincenzo Lomonaco", "David Filliat", "Davide Maltoni" ], "title": "Don’t forget, there is more than forgetting: new metrics for continual learning", "venue": "arXiv preprint arXiv:1810.13166,", "year": 2018 }, { "authors": [ "Finale Doshi", "Kurt Miller", "Jurgen Van Gael", "Yee Whye Teh" ], "title": "Variational inference for the indian buffet process", "venue": "In AISTATS, pp", "year": 2009 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A. Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": "CoRR, abs/1701.08734,", "year": 2017 }, { "authors": [ "Timo Flesch", "Jan Balaguer", "Ronald Dekker", "Hamed Nili", "Christopher Summerfield" ], "title": "Comparing continual task learning in minds and machines", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Soumya Ghosh", "Jiayu Yao", "Finale Doshi-Velez" ], "title": "Structured variational learning of bayesian neural networks with horseshoe priors", "venue": null, "year": 2018 }, { "authors": [ "Siavash Golkar", "Michael Kagan", "Kyunghyun Cho" ], "title": "Continual learning via neural pruning", "venue": "CoRR, abs/1903.04476,", "year": 2019 }, { "authors": [ "Teofilo F. Gonzalez" ], "title": "Clustering to minimize the maximum intercluster distance", "venue": "Theor. Comput. Sci.,", "year": 1985 }, { "authors": [ "Thomas L Griffiths", "Zoubin Ghahramani" ], "title": "The indian buffet process: An introduction and review", "venue": "JMLR, 12(Apr):1185–1224,", "year": 2011 }, { "authors": [ "Matthew Hoffman", "David Blei" ], "title": "Stochastic Structured Variational Inference", "venue": null, "year": 2015 }, { "authors": [ "Wenpeng Hu", "Zhou Lin", "Bing Liu", "Chongyang Tao", "Zhengwei Tao", "Jinwen Ma", "Dongyan Zhao", "Rui Yan" ], "title": "Overcoming catastrophic forgetting via model adaptation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax. 
2017", "venue": "URL https://arxiv.org/abs/1611.01144", "year": 2017 }, { "authors": [ "Samuel Kessler", "Vu Nguyen", "Stefan Zohren", "Stephen Roberts" ], "title": "Hierarchical indian buffet neural networks for bayesian continual learning, 2020", "venue": null, "year": 1912 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2013 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Richard Kurle", "Botond Cseke", "Alexej Klushyn", "Patrick van der Smagt", "Stephan Günnemann" ], "title": "Continual learning with bayesian neural networks for non-stationary data", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming Catastrophic Forgetting by Incremental Moment Matching", "venue": null, "year": 2017 }, { "authors": [ "Soochan Lee", "Junsoo Ha", "Dongsu Zhang", "Gunhee Kim" ], "title": "A neural dirichlet process mixture model for task-free continual learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xilai Li", "Yingbo Zhou", "Tianfu Wu", "Richard Socher", "Caiming Xiong" ], "title": "Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting", "venue": null, "year": 2019 }, { "authors": [ "David Lopez-Paz" ], "title": "Gradient episodic memory for continual learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": null, "year": 2017 }, { "authors": [ "T.K. 
Moon" ], "title": "The expectation-maximization algorithm", "venue": "IEEE Signal Processing Magazine,", "year": 1996 }, { "authors": [ "Eric Nalisnick", "Padhraic Smyth" ], "title": "Stick-Breaking Variational Autoencoders", "venue": "ICLR, art", "year": 2017 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks, volume 118", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Cuong V Nguyen", "Yingzhen Li", "Thang D Bui", "Richard E Turner" ], "title": "Variational continual learning", "venue": null, "year": 2018 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2009 }, { "authors": [ "Konstantinos Panousis", "Sotirios Chatzis", "Sergios Theodoridis" ], "title": "Nonparametric bayesian deep networks with local competition", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "German I Parisi", "Ronald Kemker", "Jose L Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Dushyant Rao", "Francesco Visin", "Andrei Rusu", "Razvan Pascanu", "Yee Whye Teh", "Raia Hadsell" ], "title": "Continual unsupervised representation learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Dushyant Rao", "Francesco Visin", "Andrei Rusu", "Razvan Pascanu", "Yee Whye Teh", "Raia Hadsell" ], "title": "Continual unsupervised representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mark B Ring" ], "title": "Child: A first step towards continual learning", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M. Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress &amp; Compress: A scalable framework for continual learning", "venue": null, "year": 2018 }, { "authors": [ "Joan Serrà", "Dídac Surís", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task", "venue": null, "year": 2018 }, { "authors": [ "James Smith", "Seth Baer", "Zsolt Kira", "Constantine Dovrolis" ], "title": "Unsupervised continual learning and self-taught associative memory hierarchies", "venue": null, "year": 2019 }, { "authors": [ "Michalis K Titsias", "Jonathan Schwarz", "Alexander G de G Matthews", "Razvan Pascanu", "Yee Whye Teh" ], "title": "Functional regularisation for continual learning using gaussian processes", "venue": null, "year": 2020 }, { "authors": [ "Johannes von Oswald", "Christian Henning", "João Sacramento", "Benjamin F. 
Grewe" ], "title": "Continual learning with hypernetworks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ju Xu", "Zhanxing Zhu" ], "title": "Reinforced Continual Learning", "venue": "NIPS, art", "year": 2018 }, { "authors": [ "Kai Xu", "Akash Srivastava", "Charles Sutton" ], "title": "Variational russian roulette for deep bayesian nonparametrics", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Chen Zeno", "Itay Golan", "Elad Hoffer", "Daniel Soudry" ], "title": "Task agnostic continual learning using online variational bayes", "venue": "arXiv preprint arXiv:1803.10123,", "year": 2018 }, { "authors": [ "Hao Zhang", "Bo Chen", "Dandan Guo", "Mingyuan Zhou" ], "title": "WHAI: Weibull hybrid autoencoding inference for deep topic modeling", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nguyen" ], "title": "2018), we use task-specific encoders with 3 hidden layers of 500, 500, 500 units respectively with latent size of 100 units, and a symmetrically reversed decoder with last two layers of decoder being shared among all the tasks and the first layer", "venue": "EwC and VCL (Kirkpatrick et al.,", "year": 2018 }, { "authors": [ "Nguyen" ], "title": "2018) as a method for cleverly sidestepping the issue of catastrophic forgetting, the coreset comprises representative training data samples from all tasks", "venue": null, "year": 2018 }, { "authors": [ "∑ kNk" ], "title": "DETECTING BOUNDARIES Inspired from Rao et al. (2019a), we rely on a threshold to determine if the data point is an instance from a new task", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Continual learning (CL) (Ring, 1997; Parisi et al., 2019) is the learning paradigm where a single model is subjected to a sequence of tasks. At any point of time, the model is expected to (i) make predictions for the tasks it has seen so far, (ii) if subjected to training data for a new task, adapt to the new task leveraging the past knowledge if possible (forward transfer) and benefit the previous tasks if possible (backward transfer). While the desirable aspects of more mainstream transfer learning (sharing of bias between related tasks (Pan & Yang, 2009)) might reasonably be expected here too, the principal challenge is to retain the predictive power for the older tasks even after learning new tasks, thus avoiding the so-called catastrophic forgetting.\nReal world applications in, for example, robotics or time-series forecasting, are rife with this challenging learning scenario, the ability to adapt to dynamically changing environments or evolving data distributions being essential in these domains. Continual learning is also desirable in unsupervised learning problems as well (Smith et al., 2019; Rao et al., 2019b) where the goal is to learn the underlying structure or latent representation of the data. Also, as a skill innate to humans (Flesch et al., 2018), it is naturally an interesting scientific problem to reproduce the same capability in artificial predictive modelling systems.\nExisting approaches to continual learning are mainly based on three foundational ideas. One of them is to constrain the parameter values to not deviate significantly from their previously learned value by using some form of regularization or trade-off between previous and new learned weights (Schwarz et al., 2018; Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). A natural way to accomplish this is to train a model using online Bayesian inference, whereby the posterior of the parameters learned from task t serve as the prior for task t + 1 as in Nguyen et al. (2018) and Zeno et al. (2018). This new informed prior helps in the forward transfer, and also prevents catastrophic forgetting by penalizing large deviations from itself. In particular, VCL (Nguyen et al., 2018) achieves the state of the art results by applying this simple idea to Bayesian neural networks. The second idea is to perform an incremental model selection for every new task. For neural networks, this is done by evolving the structure as newer tasks are encountered (Golkar et al., 2019; Li\net al., 2019). Structural learning is a very sensible direction in continual learning as a new task may require a different network structure than old unrelated tasks and even if the tasks are highly related their lower layer representations can be very different. Another advantage of structural learning is that while retaining a shared set of parameters (which can be used to model task relationships) it also allow task-specific parameters that can increase the performance of the new task while avoiding catastrophic forgetting caused due to forced sharing of parameters. The third idea is to invoke a form of ’replay’, whereby selected or generated samples representative of previous tasks, are used to retrain the model after new tasks are learned.\nIn this work, we introduce a novel Bayesian nonparametric approach to continual learning that seeks to incorporate the ability of structure learning into the simple yet effective framework of online Bayes. 
In particular, our approach models each hidden layer of the neural network using the Indian Buffet Process (Griffiths & Ghahramani, 2011) prior, which enables us to learn the network structure as new tasks arrive continually. We can leverage the fact that any particular task t uses a sparse subset of the connections of a neural network Nt, and different related tasks share different subsets (albeit possibly overlapping). Thus, in the setting of continual learning, it would be more effective if the network could accommodate changes in its connections dynamically to adapt to a newly arriving task. Moreover, in our model, we perform the automatic model selection where each task can select the number of nodes in each hidden layer. All this is done under the principled framework of variational Bayes and a nonparametric Bayesian modeling paradigm.\nAnother appealing aspect of our approach is that in contrast to some of the recent state-of-the-art continual learning models (Yoon et al., 2018; Li et al., 2019) that are specific to supervised learning problems, our approach applies to both deep discriminative networks (supervised learning) where each task can be modeled by a Bayesian neural network (Neal, 2012; Blundell et al., 2015), as well as deep generative networks (unsupervised learning) where each task can be modeled by a variational autoencoder (VAE) (Kingma & Welling, 2013)." }, { "heading": "2 PRELIMINARIES", "text": "Bayesian neural networks (Neal, 2012) are discriminative models where the goal is to model the relationship between inputs and outputs via a deep neural network with parametersw. The network parameters are assumed to have a prior p(w) and the goal is to infer the posterior given the observed dataD. The exact posterior inference is intractable in such models. One such approximate inference scheme is Bayes-by-Backprop (Blundell et al., 2015) that uses a mean-field variational posterior q(w) over the weights. Reparameterized samples from this posterior are then used to approximate the lower bound via Monte Carlo sampling. Our goal in the continual learning setting is to learn such Bayesian neural networks for a sequence of tasks by inferring the posterior qt(w) for each task t, without forgetting the information contained in the posteriors of previous tasks.\nVariational autoencoders (Kingma & Welling, 2013) are generative models where the goal is to model a set of inputs {x}Nn=1 in terms of a stochastic latent variables {z}Nn=1. The mapping from each zn to xn is defined by a generator/decoder model (modeled by a deep neural network with parameters θ) and the reverse mapping is defined by a recognition/encoder model (modeled by another deep neural network with parameters φ). Inference in VAEs is done by maximizing the variational lower bound on the marginal likelihood. It is customary to do point estimation for decoder parameters θ and posterior inference for encoder parameters φ. However, in the continual learning setting, it would be more desirable to infer the full posterior qt(w) for each task’s encoder and decoder parameters w = {θ, φ}, while not forgetting the information about the previous tasks as more and more tasks are observed. Our proposed continual learning framework address this aspect as well.\nVariational Continual Learning (VCL) Nguyen et al. 
(2018) is a recently proposed approach to continual learning that combats catastrophic forgetting in neural networks by modeling the network parameters w in a Bayesian fashion and by setting pt(w) = qt−1(w), that is, a task reuses the previous task’s posterior as its prior. VCL solves the follow KL divergence minimization problem\nqt(w) = arg min q∈Q\nKL ( q(w)|| 1\nZt qt−1(w)p(Dt|w)\n) (1)\nWhile offering a principled way that is applicable to both supervised (discriminative) and unsupervised (generative) learning settings, VCL assumes that the model structure is held fixed throughout,\nwhich can be limiting in continual learning where the number of tasks and their complexity is usually unknown beforehand. This necessitates adaptively inferring the model structure, that can potentially adapt with each incoming task. Another limitation of VCL is that the unsupervised version, based on performing CL on VAEs, only does so for the decoder model’s parameters (shared by all tasks). It uses completely task-specific encoders and, consequently, is unable to transfer information across tasks in the encoder model. Our approach addresses both these limitations in a principled manner." }, { "heading": "3 BAYESIAN STRUCTURE ADAPTATION FOR CONTINUAL LEARNING", "text": "In this section, we present a Bayesian model for continual learning that can potentially grow and adapt its structure as more and more tasks arrive. Our model extends seamlessly for unsupervised learning as well. For brevity of exposition, in this section, we mainly focus on the supervised setting where a task has labeled data with known task identities t (task-incremental). We then briefly discuss the unsupervised extension (based on VAEs) in Sec. 3.3 where task boundaries may or may not (taskagnostic) be available and provide further details in the appendix (Sec I).\nOur approach uses a basic primitive that models each hidden layer using a nonparametric Bayesian prior (Fig. 1a shows an illustration and Fig. 1b shows a schematic diagram). We can use these hidden layers to model feedforward connections in Bayesian neural networks or VAE models. For simplicity, we will assume a single hidden layer, the first task activates as many hidden nodes as required and learns the posterior over the subset of edge weights incident on each active node. Each subsequent task reuses some of the edges learned by the previous task and uses the posterior over the weights learned by the previous task as the prior. Additionally, it may activate some new nodes and learn the posterior over some of their incident edges. It thus learns the posterior over a subset of weights that may overlap with weights learned by previous tasks. While making predictions, a task uses only the connections it has learned. More slack for later tasks in terms of model size (allowing it to create new nodes) indirectly lets the task learn better without deviating too much from the prior (in this case, posterior of the previous tasks) and further reduces chances of catastrophic forgetting (Kirkpatrick et al., 2017)." }, { "heading": "3.1 GENERATIVE STORY.", "text": "Omitting the task id t for brevity, consider modeling tth task using a neural network having L hidden layers. We model the weights in layer l as W l = Bl V l, a point-wise multiplication of a realvalued matrix V l (with a Gaussian prior N (0, σ20) on each entry) and a task-specific binary matrix Bl. This ensures sparse connection weights between the layers. 
Moreover, we modelBl ∼ IBP(α) using the Indian Buffet Process (IBP) Griffiths & Ghahramani (2011) prior, where the hyperparameter α controls the number of nonzero columns in B and its sparsity. The IBP prior thus enables learning the size ofBl (and consequently of V l) from data. As a result, the number of nodes in the hidden layer is learned adaptively from data. The output layer weights are denoted as Wout with each weight having a Gaussian prior N (0, σ20). The outputs are yn ∼ Lik(WoutφNN (xn)), n = 1, . . . , N (2) Here φNN is the function computed (using parameter samples) up to the last hidden layer of the network thus formed, and Lik denotes the likelihood model for the outputs.\nSimilar priors on the network weights have been used in other recent works to learn sparse deep neural networks (Panousis et al., 2019; Xu et al., 2019). However, these works assume a single task to be learned. In contrast, our focus here is to leverage such priors in the continual learning setting where we need to learn a sequence of tasks while avoiding the problem of catastrophic forgetting. Henceforth, we further suppress the superscript denoting layer number from the notation for simplicity; the discussion will hold identically for all hidden layers. When adapting to a new task, the posterior of V learned from previous tasks is used as the prior. A newB is learned afresh, to ensure that a task only learns the subset of weights relevant to it.\nStick Breaking Construction. As described before, to adaptively infer the number of nodes in each hidden layer, we use the IBP prior (Griffiths & Ghahramani, 2011), whose truncated stick-breaking process (Doshi et al., 2009) construction for each entry of B is as follows\nνk ∼ Beta(α, 1), πk = k∏ i=1 νi, Bd,k ∼ Bernoulli(πk) (3)\nfor d ∈ 1, ..., D, where D denotes the number of input nodes for this hidden layer, and k ∈ 1, 2, ...,K, where K is the truncation level and α controls the effective value of K, i.e., the number of active hidden nodes. Note that the prior probability πk of weights incident on hidden node k being nonzero decreases monotonically with k, until, say, K nodes, after which no further nodes have any incoming edges with nonzero weights from the previous layer, which amounts to them being turned off from the structure. Moreover, due to the cumulative product based construction of the πk’s, an implicit ordering is imposed on the nodes being used. This ordering is preserved across tasks, and allocation of nodes to a task follows this, facilitating reuse of weights.\nThe truncated stick-breaking approximation is a practically plausible and intuitive solution for continual learning since a fundamental tenet of continual learning is that the model complexity should not increase in an unbounded manner as more tasks are encountered. Suppose we fix a budget on the maximum allowed size of the network (no. hidden nodes in a layer) after it has seen, say, T tasks. Which exactly corresponds to the truncation level for each layer. Then for each task, nodes are allocated conservatively from this total budget, in a fixed order, conveniently controlled by the α hyperparameter. In appendix (Sec. D), we also discuss a dynamic expansion scheme that avoids specifying a truncation level (and provide experimental results)." }, { "heading": "3.2 INFERENCE", "text": "Exact inference is intractable in this model due to non-conjugacy. Therefore, we resort to the variational inference (Blei et al., 2017). 
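For reference, the prior that this inference procedure approximates (Eq. 3 together with W = B ⊙ V) can be sampled in a few lines; the NumPy sketch below is illustrative, with the truncation level K and the hyperparameters α and σ0 as named in the text.

import numpy as np

def sample_layer_prior(D, K, alpha, sigma0=1.0, rng=np.random.default_rng(0)):
    # Draw one layer's weights from the prior: Eq. 3 for the mask B, a Gaussian for V, W = B * V.
    nu = rng.beta(alpha, 1.0, size=K)            # nu_k ~ Beta(alpha, 1)
    pi = np.cumprod(nu)                          # pi_k = prod_{i<=k} nu_i, monotonically decreasing in k
    B = rng.binomial(1, pi, size=(D, K))         # B_{d,k} ~ Bernoulli(pi_k): later columns are sparser
    V = rng.normal(0.0, sigma0, size=(D, K))     # V_{d,k} ~ N(0, sigma_0^2)
    return B * V                                 # sparse layer weights W = B ∘ V

Because pi_k decreases geometrically with k, columns beyond some effective width receive no incoming connections, which is what allows the number of active hidden nodes to be read off from the sampled mask.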
We employ structured mean-field approximation (Hoffman & Blei, 2015), which performs better than normally used mean-field approximation, as the former captures the dependencies in the approximate posterior distributions of B and ν. In particular, we use q(V ,B,v) = q(V )q(B|v)q(v), where, q(V ) = ∏D d=1 ∏K k=1N (Vd,k|µd,k, σ2d,k) is mean field Gaussian approximation for network weights. Corresponding to the BetaBernoulli hierarchy of (3), we use the conditionally factorized variational posterior family, that is, q(B|v) = ∏D d=1 ∏K k=1 Bern(Bd,k|θd,k), where θd,k = σ(ρd,k + logit(πk)) and q(v) =∏K\nk=1 Beta(vk|νk,1, νk,2). Thus we have Θ = {νk,1, νk,2, {µd,k, σd,k, ρd,k}Dd=1}Kk=1 as set of learnable variational parameters.\nEach column of B represents the binary mask for the weights incident to a particular node. Note that although these binary variables (in a single column of B) share a common prior, the posterior for each of these variables are different, thereby allowing a task to selectively choose a subset of the weights, with the common prior controlling the degree of sparsity.\nL = Eq(V ,B,v)[ln p(Y |V ,B,v)]−KL(q(V ,B,v)||p(V ,B,v)) (4)\nL = 1 S S∑ i=1 [f(V i,Bi,vi)−KL[q(B|vi)||p(B|vi)]]−KL[q(V )||p(V )]−KL[q(v)||p(v)] (5)\nBayes-by-backprop (Blundell et al., 2015) is a common choice for performing variational inference in this context. Eq. 4 defines the Evidence Lower Bound (ELBO) in terms of data-dependent likelihood and data-independent KL terms which further gets decomposed using mean-field factorization.\nThe expectation terms are optimized by unbiased gradients from the respective posteriors. All the KL divergence terms in (Eq. 5) have closed form expressions; hence using them directly rather than estimating them from Monte Carlo samples alleviates the approximation error as well as the computational overhead, to some extent. The log-likelihood term can be decomposed as\nf(V ,B,v) = log Lik(Y |V ,B,v) = log Lik(Y |WoutφNN (X;V,B, v)) (6)\nwhere (X,Y ) is the training data. For regression, Lik can be Gaussian with some noise variance, while for classification it can be Bernoulli with a probit or logistic link. Details of sampling gradient computation for terms involving beta and Bernoulli r.v.’s is provided in the appendix. (Sec. F)." }, { "heading": "3.3 UNSUPERVISED CONTINUAL LEARNING", "text": "Our discussion thus far has primarily focused on continual learning where each task is a supervised learning problem. Our framework however readily extends to unsupervised continual learning (Nguyen et al., 2018; Smith et al., 2019; Rao et al., 2019b) where we assume that each task involves learning a deep generative model, commonly a VAE. In this case, each input observation xn has an associated latent variable zn. Collectively denoting all inputs asX and all latent variables as Z, we can define ELBO similar to Eq. 4 as\nL = Eq(Z,V ,B,v)[ln p(X|Z,V ,B,v)]−KL(q(Z,V ,B,v)||p(Z,V ,B,v)) (7)\nNote that, unlike the supervised case, the above ELBO also involves an expectation over Z. Similar to Eq. 5 this can be approximated using Monte Carlo samples, where each zn is sampled from the amortized posterior q(zn|V ,B,v,xn). In addition to learning the model size adaptively, as shown in the schematic diagram (Fig. 1b (ii)), our model learns shared weights and task-specific masks for the encoder and decoder models. In contrast, VCL uses fixed-sized model with entirely task-specific encoders, which prevents knowledge transfer across the different encoders." 
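The sketch below illustrates one reparameterized forward pass through such a masked layer under the variational family of Sec. 3.2; the binary-concrete relaxation of the Bernoulli mask is an assumption made here for illustration only (the exact gradient estimators for the Beta and Bernoulli terms are deferred to the appendix).

import torch
import torch.nn.functional as F

def masked_layer_forward(x, mu, log_sigma, rho, logit_pi, temp=0.1):
    # One reparameterized sample of a hidden layer with weights W = B * V.
    # mu, log_sigma : (D, K) parameters of the Gaussian posterior q(V)
    # rho           : (D, K) task-specific logits, so that theta_{d,k} = sigmoid(rho_{d,k} + logit(pi_k))
    # logit_pi      : (K,)   logits of the stick-breaking probabilities pi_k
    V = mu + log_sigma.exp() * torch.randn_like(mu)            # V ~ q(V) via the reparameterization trick
    theta_logit = rho + logit_pi                               # broadcasts over rows: logit(theta_{d,k})
    u = torch.rand_like(theta_logit).clamp(1e-6, 1 - 1e-6)
    B = torch.sigmoid((theta_logit + u.log() - (1 - u).log()) / temp)  # relaxed B_{d,k} ~ Bern(theta_{d,k})
    return F.relu(x @ (B * V))                                 # sparse task-specific weights W = B * V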
}, { "heading": "3.4 OTHER KEY CONSIDERATIONS", "text": "Task Agnostic Setting Our framework extends to task-agnostic continual learning as well where the task boundaries are unknown. Based on Lee et al. (2020), we use a gating mechanism (Eq. 8 with tn represents the task identity of nth sample xn) and define marginal log likelihood as\np(tn = k|xn) = p(xn|tn = k)p(tn = k) / K∑ k=1 p(xn|tn = k)p(tn = k) (8)\nlog p(X) = Eq(t=k) [p(X, t = k|θ)] +KL (q(t = k)||p(t = k|X, θ)) (9)\nwhere, q(t = k) is the variational posterior over task identity. Similar to E-step in Expectation Maximization (Moon, 1996), we can reduce the KL-Divergence term to zero and get the M-step as\narg max θ log p(X) = arg max θ Ep(t=k|X,θold) log p(X|t = k) (10)\nHere, log p(X|t = k) is intractable but can be replaced with its variational lower bound (Eq. 7). We use Monte Carlo sampling for approximating p(xn|tn = k). Detecting samples from a new task is done using a threshold (Rao et al., 2019a) on the evidence lower bound (Appendix Sec. J)\nMasked Priors Using the previous task’s posterior as the prior for current task (Nguyen et al., 2018) may introduce undesired regularization in case of partially trained parameters that do not contribute to previous tasks and may promote catastrophic forgetting. Also, the choice of the initial prior as Gaussian leads to creation of more nodes than required due to regularization. To address this, we mask the new prior for the next task t with the initial prior pt defined as\npt(Vd,k) = B o d,kqt−1(Vd,k) + (1−Bod,k)p0(Vd,k) (11)\nwhereBo is the overall combined mask from all previously learned tasks i.e., (B1∪B2...∪Bt−1), qt−1, pt are the previous posterior and current prior, respectively, and p0 is the prior used for the first task. The standard choice of initial prior p0 can be a uniform distribution." }, { "heading": "4 RELATED WORK", "text": "One of the key challenges in continual learning is to prevent catastrophic forgetting, typically addressed through regularization of the parameter updates, preventing them from drastically changing from the value learnt from the previous task(s). Notable methods based on this strategy include EwC (Kirkpatrick et al., 2017), SI (Zenke et al., 2017), LP (Smola et al., 2003), etc. Superseding these methods is the Bayesian approach, a natural remedy of catastrophic forgetting in that, for any task, the posterior of the model learnt from the previous task serves as the prior for the current task, which is the canonical online Bayes. This approach is used in recent works like VCL (Nguyen et al., 2018) and task agnostic variational Bayes (Zeno et al., 2018) for learning Bayesian neural networks in the CL setting. Our work is most similar in spirit to and builds upon this body of work.\nAnother key aspect in CL methods is replay, where some samples from previous tasks are used to fine-tune the model after learning a new task (thus refreshing its memory in some sense and avoiding catastrophic forgetting). Some of the works using this idea include Lopez-Paz et al. (2017), which solves a constrained optimization problem at each task, the constraint being that the loss should decrease monotonically on a heuristically selected replay buffer; Hu et al. (2019), which uses a partially shared parameter space for inter-task transfer and generates the replay samples through a data-generative module; and Titsias et al. 
(2020), which learns a Gaussian process for each task, with a shared mean function in the form a feedforward neural network, the replay buffer being the set of inducing points typically used to speed up GP inference. For VCL and our work, the coreset serves as a replay buffer (Appx. C); but we emphasize that it is not the primary mechanism to overcome catastrophic forgetting in these cases, but rather an additional mechanism to preventing it.\nRecent work in CL has investigated allowing the structure of the model to dynamically change with newly arriving tasks. Among these, strong evidence in support of our assumptions can be found in Golkar et al. (2019), which also learns different sparse subsets of the weights of each layer of the network for different tasks. The sparsity is enforced by a combination of weighted L1 regularization and threshold-based pruning. There are also methods that do not learn subset of weights but rather learn the subset of hidden layer nodes to be used for each task; such a strategy is adopted by either using Evolutionary Algorithms to select the node subsets (Fernando et al., 2017) or by training the network with task embedding based attention masks (Serrà et al., 2018). One recent approach Adel et al. (2020), instead of using binary masks, tries to adapt network weights at different scales for different tasks; it is also designed only for discriminative tasks.\nAmong other related work, Li et al. (2019); Yoon et al. (2018); Xu & Zhu (2018) either reuse the parameters of a layer, dynamically grows the size of the hidden layer, or spawn a new set of parameters (the model complexity being bounded through regularization terms or reward based reinforcements). Most of these approaches however tend to be rather expensive and rely on techniques, such as neural architecture search. In another recent work (simultaneous development with our work), Kessler et al. (2020) did a preliminary investigation on using IBP for continual learning. They however use IBP on hidden layer activations instead of weights (which they mention is worth considering), do not consider issues such as the ones we discussed in Sec. 3.4, and only applies to supervised setting. Modelling number active nodes for a given task has also been explored by Serrà et al. (2018); Fernando et al. (2017); Ahn et al. (2019), but modelling posterior over connections weights between these nodes achieves more sparsity and flexibility in terms of structural learning at the cost of increased number of parameters, von Oswald et al. (2020) tries to amortize the network parameters directly from input samples which is a promising direction and can be adapted for future research.\nFor non-stationary data, online variational Bayes is not directly applicable as it assumes independently and identically distributed (i.i.d.) data. As a result of which the variance in Gaussian posterior approximation will shrink with an increase in the size of training data, Kurle et al. (2020) proposed use of Bayesian forgetting, which can be naturally applied to our approach enabling it to work with non-stationary data but it requires some modifications for task-agnostic setup. In this work, we have not explored this extension keeping it as future work." }, { "heading": "5 EXPERIMENTS", "text": "We perform experiments on both supervised and unsupervised CL and compare our method with relevant state-of-the-art methods. 
In addition to the quantitative (accuracy/log-likelihood comparisons) and qualitative (generation) results, we also examine the network structures learned by our model. Some of the details (e.g., experimental settings) have been moved to the appendix 1.

[Figure 2: Mean test accuracies of tasks seen so far as newer tasks are observed on multiple benchmarks. Panels: Perm. MNIST, Split MNIST, Not MNIST, Fashion MNIST, Split Cifar 100; methods compared: Naive, Rehearsal, EwC, VCL, VCL(coreset), IMM(mode), ours, ours(coreset), DEN, RCL.]" }, { "heading": "5.1 SUPERVISED CONTINUAL LEARNING", "text": "We first evaluate our model on standard supervised CL benchmarks. We compare with existing approaches such as Pure Rehearsal (Robins, 1995), EwC (Kirkpatrick et al., 2017), IMM (Lee et al., 2017), DEN (Yoon et al., 2018), RCL (Xu & Zhu, 2018), and a "Naïve" baseline that learns a single shared model for all the tasks. We perform our evaluations on five supervised CL benchmarks: Split MNIST, Split notMNIST (small), Permuted MNIST, Split fashionMNIST, and Split Cifar100. The last-layer heads (Appx. E.1) were kept separate for each task for a fair baseline comparison.

For Split MNIST, Split notMNIST, and Split fashionMNIST, each dataset is split into 5 binary classification tasks. For Split Cifar100, the dataset was split into 10 multiclass classification tasks. For Permuted MNIST, each task is a multiclass classification problem with a fixed random permutation applied to the pixels of every image. We generated 5 such tasks for our experiments.

Performance evaluation Suppose we have a sequence of T tasks. To gauge the effectiveness of our model in preventing catastrophic forgetting, we report (i) the test accuracy of the first task after learning each of the subsequent tasks; and (ii) the average test accuracy over all previous tasks 1, 2, . . . , t after learning each task t. For a fair comparison, we use the same architecture for each of the baselines (details in Appx.), except for DEN and RCL, which grow the structure size. We also report results on some additional CL metrics (Díaz-Rodríguez et al., 2018) in the Appx. (Sec. H.4).

Fig. 2 shows the mean test accuracies on all supervised benchmarks as new tasks are observed. As shown, the average test accuracy of our method (without as well as with coresets) is better than that of the compared baselines (here, we have used the random point selection method for coresets). Moreover, the accuracy drops much more slowly than for the other baselines, showing the efficacy of our model in preventing catastrophic forgetting due to the adaptively learned structure. In Fig. 3, we show the accuracy on the first task as new tasks arrive and compare specifically with VCL. In this case too, we observe that our method yields relatively stable first-task accuracies compared to VCL. We note that for Permuted MNIST the accuracy of the first task increases with the training of new tasks, which shows the presence of backward transfer, another desideratum of CL. We also report the performance of our dynamically growing network variant (for more details refer to Appx. Sec.
D).\n1The code for our model can be found at this link: https://github.com/npbcl/icml20\nStructural Observations An appealing aspect of our work is that, the results reported above, which are competitive with the state-of-the-art, are achieved with very sparse neural network structures learnt by the model, which we analyze qualitatively here (Appendix Sec. H.1 shows some examples of network structures learnt by our model).\nAs shown in Fig. 3 (Network Used) IBP prior concentrates weights on very few nodes, and learns sparse structures. Also most newer tasks tend to allocate fewer weights and yet perform well, implying effective forward transfer. Another important observation as shown in Fig. 3 is that the weight sharing between similar tasks like notMNIST is a higher than that of non-similar tasks like permuted MNIST. Note that new tasks show higher weight sharing irrespective of similarity, this is an artifact induced by IBP (Sec 3.1) which tends to allocate more active weights on upper side of matrix.\nWe therefore conclude that although a new task tend to share weights learnt by old tasks, the new connections that it creates are indispensable for its performance. Intuitively, the more unrelated a task is to previously seen ones, the more new connections it will make, thus reducing negative transfer (an unrelated task adversely affecting other tasks) between tasks." }, { "heading": "5.2 UNSUPERVISED CONTINUAL LEARNING", "text": "We next evaluate our model on generative tasks under CL setting. For that, we compare our model with existing approaches such as Naïve, EwC and VCL. We do not include other methods mentioned in supervised setup as their implementation does not incorporate generative modeling. We perform continual learning experiments for deep generative models using a VAE style network. We consider two datasets, MNIST and notMNIST. For MNIST, the tasks are sequence of single digit generation from 0 to 9. Similarily, for notMNIST each task is one character generation from A to J. Note that,\nunlike VCL and other baselines where all tasks have separate encoder and a shared decoder, as we discuss in Sec. 3.3, our model uses a shared encoder for all tasks, but with task-specific masks for each encoder (cf., Fig. 1b (ii)). This enables transfer of knowledge while the task-specific mask effectively prevent catastrophic forgetting.\nGeneration: As shown in Fig 5, the modeling innovation we introduce for the unsupervised setting, results in much improved log-likelihood on held-out sets. In each individual figure in Fig 4, each row represents generated samples from all previously seen tasks and the current task. We see that the quality of generated samples in does not deteriorate as compared to other baselines as more tasks are encountered. This shows that our model can efficiently perform generative modeling by reusing subset of networks and creating minimal number of nodes for each task.\nTask-Agnostic Learning: Fig 5 shows a particular case where nine tasks were inferred out of 10 class with high correlation among class 4 and 9 due to visual similarity between them. Since each task uses a set of network connection, this result enforces our models ability to model task relations based on network sharing. 
Further, the log-likelihood obtained in the task-agnostic setting is comparable to that of our model with known task boundaries, suggesting that our approach can be used effectively in task-agnostic settings as well.

Representation Learning: Table 1 reports the quality of the representations learned by our unsupervised continual learning approach. For this experiment, we use the learned representations to train a KNN classification model with different K values. We note that, despite having task-specific encoders, VCL and the other baselines fail to learn good latent representations, while the proposed model learns good representations when task boundaries are known and is comparable to the state-of-the-art baseline CURL (Rao et al., 2019a) in the task-agnostic setting." }, { "heading": "6 CONCLUSION", "text": "We have successfully unified structure learning in neural networks with their variational inference in the setting of continual learning, demonstrating competitive performance with state-of-the-art models on both discriminative (supervised) and generative (unsupervised) learning problems. In this work, we have experimented with task-incremental continual learning in the supervised setup and sequential generation tasks in the unsupervised setting. We believe that our task-agnostic setup can be extended to the class-incremental learning scenario, where sample points from a set of classes arrive sequentially and the model is expected to perform classification over all observed classes. It would also be interesting to generalize this idea to more sophisticated network architectures such as recurrent or residual neural networks, possibly by also exploring improved approximate inference methods. Further interesting extensions would be semi-supervised continual learning and continual learning with non-stationary data. Adapting other sparse Bayesian structure learning methods, e.g., Ghosh et al. (2018), to the continual learning setting is also a promising avenue. Adapting the depth of the network is a more challenging endeavour that might also be undertaken. We leave these extensions for future work." }, { "heading": "A DATA", "text": "The datasets used in our experiments, with train/test split information, are listed in the table below. The MNIST dataset comprises 28 × 28 monochromatic images of handwritten digits from 0 to 9. The notMNIST dataset comprises glyphs of the letters A to J in different font formats, with a configuration similar to MNIST. Fashion MNIST is also monochromatic, comprising 10 classes (T-shirt, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot) with a setup similar to MNIST. The Cifar100 dataset contains RGB images with 600 images per class.

Dataset Classes Training size Test size
MNIST 10 60000 10000
notMNIST 10 14974 3750
fashionMNIST 10 50000 20000
Cifar100 100 50000 10000" }, { "heading": "B MODEL CONFIGURATIONS", "text": "For the permuted MNIST, split MNIST, split notMNIST, and fashion MNIST experiments, we use a fixed network architecture for all the models, with a single hidden layer of 200 units, except for DEN (which grows its structure dynamically), which used two hidden layers initialized to 256 and 128 units.

The VCL implementation was taken from its official repository at https://github.com/nvcuong/variational-continual-learning. For DEN we used the official implementation https://github.com/jaehong-yoon93/DEN.
IMM implementation was taken from https://github.com/btjhjeon/IMM_tensorflow, RCL implementation was taken from https://https://github.com/xujinfan/ Reinforced-Continual-Learning, For EwC we used HAT’s official implementation at https://github.com/joansj/hat. For others, we used our own implementations." }, { "heading": "B.1 SUPERVISED CONTINUAL LEARNING: HYPERPARAMETER SETTINGS", "text": "For MNIST, notMNIST, fashionMNIST datasets, our model uses single hidden layer neural network with 200 hidden units. For RCL (Xu & Zhu, 2018) and DEN (Yoon et al., 2018), two hidden layers were used with initial network size of 256, 128 units, respectively. For the Cifar100 dataset we used an Alex-net like structure with three convolutional layers of 128, 256, 512 channels with 4× 4, 3× 3, 2 × 2 channels followed by two dense layers of 2048, 2048 units each. For the convolutional layer, batch-norm layers were separate for each task. We adopt Adam optimizer for our model keeping a learning rate of 0.01 for the IBP posterior parameters and 0.001 for others; this is to avoid vanishing gradient problem introduced by sigmoid function. For selective finetuning, we use a learning rate of 0.0001 for all the parameters. The temperature hyperparameter of the Gumbelsoftmax reparameterization for Bernoulli gets annealed from 10.0 to a minimum limit of 0.25. The value of α is initialized to 30.0 for the initial task and maximum of the obtained posterior shape parameters for each of subsequent tasks. Similar to VCL, we initialize our models with maximumlikelihood training for the first task. For all datasets, we train our model for 5 epochs. We selectively finetune our model after that for 5 epochs. For experiments including coresets, we use a coreset size of 50. Coreset selection is done using random and k-center methods Nguyen et al. (2018). For our model with dynamic expansion, we initialize our network with 50 hidden units." }, { "heading": "B.2 UNSUPERVISED CONTINUAL LEARNING: HYPERPARAMETER SETTINGS", "text": "For all datasets, our model uses 2 hidden layers with 500, 500 units for encoder and symmetrically opposite for the decoder with a latent dimension of size 100 units. For other approaches like Naive, EwC and VCL (Kirkpatrick et al., 2017; Nguyen et al., 2018), we use task-specific encoders with 3 hidden layers of 500, 500, 500 units respectively with latent size of 100 units, and a symmetrically reversed decoder with last two layers of decoder being shared among all the tasks and the first layer\nbeing specific to each task. we use Adam optimizer for our model keeping the learning rate configuration similar to that of supervised setting. Temperature for gumbel-softmax reparametrization gets annealed from 10 to 0.25. We initialize encoder hidden layers α values as 40, 40, respectively, and symmetrically opposite in decoder for the first task. We update α’s in similar fashion to supervised setting for subsequent tasks. For latent layers, we intialize α to 20. For the unsupervised learning experiments, we did not use coresets." }, { "heading": "C CORESET METHOD EXPLANATION", "text": "Proposed in Nguyen et al. (2018) as a method for cleverly sidestepping the issue of catastrophic forgetting, the coreset comprises representative training data samples from all tasks. Let M (t−1) denote the posterior state of the model before learning task t. With the t-th task’s arrival having data Dt, a coreset Ct is created comprising choicest examples from tasks 1 . . . t. 
Using data Dt \\ Ct and having prior M (t−1), new model posterior M t is learnt. For predictive purposes at this stage (the test data comes from tasks 1 . . . t), a new posterior M tpred is learnt with M\nt as prior and with data Ct. Note that M tpred is used only for predictions at this stage, and does not have any role in the subsequent learning of, say, M (t+1). Such a predictive model is learnt after every new task, and discarded thereafter. Intuitively it makes sense as some new learnt weights for future tasks can help the older task to perform better (backward transfer) at testing time.\nCoreset selection can be done either through random selection or K-center greedy algorithm Gonzalez (1985). Next, the posterior is decomposed as follows:\np(θ|D1:t) ∝ p(θ|D1:t\\Ct)p(Ct|θ) ≈ q̃t(θ)p(Ct|θ)\nwhere, q(θ) is the variational posterior obtained using the current task training data, excluding the current coreset data. Applying this trick in a recursive fashion, we can write:\np(θ|D1:t\\Ct) = p(θ|D1:t−1\\Ct−1)p(Dt ∪ Ct−1\\Ct|θ) ≈ q̃t−1(θ)p(Dt ∪ Ct−1\\Ct|θ)\nWe then approximate this posterior using variational approximation as q̃t(θ) = proj(q̃t−1(θ)p(Dt∪ Ct−1\\Ct|θ)) Finally a projection step is performed using coreset data before prediction as follows: qt(θ) = proj(q̃t(θ)p(Ct|θ)). This way of incorporating coresets into coreset data before prediction tries to mitigate any residual forgetting. Algorithm 1 summarizes the training procedure for our model for setting with known task boundaries." }, { "heading": "D DYNAMIC EXPANSION METHOD", "text": "Although our inference scheme uses a truncation-based approach for the IBP posterior, it is possible to do inference in a truncation-free manner. One possibility is to greedily grow the layer width until performance saturates. However we found that this leads to a bad optima (low peaks of likelihood). We can leverage the fact that, given a sufficiently large number of columns, the last columns of the IBP matrix tends to be all zeros. So we increase the number of hidden nodes after every iteration to keep the number of such empty columns equal to a constant value T l in following manner.\nClj = C l j+1 Dl∏ i I(Blij = 0), Gl = T l − Kl∑ j=1 Clj (12)\nwhere l represents current layer index, Bl is the sampled IBP mask for current task, Clj indicates if all columns from jth column onward are empty. Gl is the number of hidden units to expand in the current network layer." }, { "heading": "E OTHER PRACTICAL DETAILS", "text": "" }, { "heading": "E.1 SEGREGATING THE HEAD", "text": "It has been shown in prior work on supervised continual learning Zeno et al. (2018) that using separate last layers (commonly referred to as “heads”) for different tasks dramatically improves\nAlgorithm 1 Nonparametric Bayesian CL Input:Initial Prior p0(Θ) Initialize the network parameters and coresets Initialize : pnew ← p0(Θ) for i = 1 to T do\nObserve current task data Dt; Update coresets (Sec. C); Masked Training; Lt ← ELBO with prior pnew; Θt ← arg minLt; Selective Finetuning; Fix the IBP parameters and learned mask; Θt ← arg minLt; pnew ← qt(Θ); pnew ←Mask(pnew) using Eq 11; Perform prediction for given test set..\nend for\nperformance in continual learning. Therefore, in the supervised setting, we use a generalized linear model that uses the embeddings from the last hidden layer, with the parameters up to the last layer involved in transfer and adaptation. Although we do report comparision of single head models available in Sec H.2." 
}, { "heading": "E.2 SPACE COMPLEXITY", "text": "The proposed scheme entails storing a binary matrix for each layer of each task which results into 1 bit per weight parameter, which is not very prohibitive and can be efficiently stored as sparse matrices. Moreover, the tasks make use of very limited number of columns of the IBP matrix, and hence does not pose any significant overhead. Space complexity grows logarithmically with number of tasks T as O(M + T log2(M)) where M number of parameters." }, { "heading": "E.3 ADJUSTING BIAS TERMS", "text": "The IBP selection acts on the weight matrix only. For the hidden nodes not selected in a task, their corresponding biases need to be removed as well. In principle, the bias vector for a hidden layer should be multiplied by a binary vector u, with ui = I[∃d : Bd,i = 1]. In practice, we simply scale each bias component by the maximum reparameterized Bernoulli value in that column." }, { "heading": "E.4 SELECTIVE FINETUNING", "text": "While training with reparameterization (Gumbel-softmax), the sampled masks are close to binary but not completely binary which reduces performance a bit with complete binary mask. So we finetune the network with fixed masks to restore performance. A summarized version of Algorithm 1 summarizes our models training procedure. The method for update of coresets that we used are similar to as it was proposed in Nguyen et al. (2018)." }, { "heading": "F ADDITIONAL INFERENCE DETAILS", "text": "Sampling Methods We obtain unbiased reparameterized gradients for all the parameters of the variational posterior distributions. For the Bernoulli distributed variables, we employ the Gumbelsoftmax trick Jang et al. (2017), also known as CONCRETE Maddison et al. (2017). For Beta distributed v’s, the Kumaraswamy Reparameterization Gradient technique Nalisnick & Smyth (2017) is used. For the real-valued weights, the standard location-scale trick of Gaussians is used.\nInference over parameters φ that involves a random or stochastic node Z (i.e Z ∼ qφ(Z)) cannot be done in a straightforward way, if the objective involves Monte Carlo expectation with respect that random variable (L = Eqφz(L(z)))). This is due to the inability to back-propagate through a\nrandom node. To overcome this issue, Kingma & Welling (2013) introduced the reparametrization trick. This involves deterministically mapping the random variable Z = f(φ, ) to rewrite the expectation in terms of new random variable , where is now randomly sampled instead of Z (i.e L = Eq [L( , φ)]). In this section, we discuss some of the reparameterization tricks we used." }, { "heading": "F.1 GAUSSIAN DISTRIBUTION REPARAMETERIZATION", "text": "The weights of our Bayesian nueral network are assumed to be distributed according to a Gaussian with diagonal variances (i.e Vk ∼ N (Vk|µVk , σ2Vk)). We reparameterize our parameters using location-scale trick as: Vk = µVk + σVk × , ∼ N (0, I) where k is the index of parameter that we are sampling. Now, with this reparameterization, the gradients over µVk , σVk can be calculated using back-propagation." }, { "heading": "F.2 BETA DISTRIBUTION REPARAMETERIZATION", "text": "The beta distribution for parameters ν in the IBP posterior can be reparameterized using Kumaraswamy distribution Nalisnick & Smyth (2017), since Kumaraswamy distribution and beta distribution are identical if any one of rate or shape parameters are set to 1. 
The Kumaraswamy distribution is defined as p(ν;α, β) = αβνα−1(1− να)β−1 which can be reparameterized as:\nν = (1− u1/β)1/α, u ∼ U(0, 1) where U represents a uniform distribution. The KL-Divergence between Kumaraswamy and beta distributions can be written as:\nKL(q(ν; a, b)||p(ν;α, β)) = a− α a\n( −γ −Ψ(b)− 1\nb\n) + log ab+ log(B(α, β))− b\n1− b\n+ (β − 1)b ∞∑ m=1\n1\nm+ ab B(\nm a , b) (13)\nwhere γ is the Euler constant, Ψ is the digamma function and B is the beta function. As described in Nalisnick & Smyth (2017), we can approximate the infinite sum in Eq.13 with a finite sum using first 11 terms." }, { "heading": "F.3 BERNOULLI DISTRIBUTION REPARAMETERIZATION", "text": "For Bernoulli distribution over mask in the IBP posterior, we employ the continuous relaxation of discrete distribution as proposed in Categorical reparameterization with Gumbel-softmax Jang et al. (2017), also known as the CONCRETE Maddison et al. (2017) distribution. We sample a concrete random variable from the probability simplex as follows:\nBk = exp((log(αk) + gk)/λ)∑K i=1 exp((log(αi) + gi)/λ) , gk ∼ G(0, 1)\nwhere, λ ∈ (0,∞) is a temperature hyper-parameter, αk is posterior parameter representing the discrete class probability for kth class and gk is a random sample from Gumbel distribution G. For binary concrete variables, the sampling reduces to the following form:\nYk = log (αk) + log (uk/(1− uk))\nλ , u ∼ U(0, 1)\nthen, Bk = σ(Yk) where σ is sigmoid function and uk is sample from uniform distribution U. To guarantee a lower bound on the ELBO, both prior and posterior Bernoulli distribution needs to be replaced by concrete distributions. Then the KL-Divergence can be calculated as difference of log density of both distributions. The log density of concrete distribution is given by:\nlog q(Bk;α, λ) = log (λ)− λYk + logαk − 2 log (1 + exp (−λYk + logαk)) With all reparameterization techniques discussed above, we use Monte Carlo sampling for approximating the ELBO with sample size of 10 while training and a sample size of 100 while at test time.\nG IBP HYPERPARAMETER α\nIn this section, we discuss the approach to tune the IBP prior hyperparameter α. We found that using a sufficiently large value of α without tuning performs reasonably well in practice. However, we experimented with other alternatives as well. For example, we tried adapting α with respect to previous posterior as α = max(α,max(aν)) for each layer, where aν is Beta posterior shape parameter. Several other considerations can also be made regarding its choice." }, { "heading": "G.1 SCHEDULING ACROSS TASKS", "text": "Intuitively, α should be incremented for every new task according to some schedule. Information about task relatedness can be helpful in formulating the schedule. Smaller increments of α discourages creation of new nodes and encourages more sharing of already existing connections across tasks.\nG.2 LEARNING α\nAlthough not investigated in this work, one viable alternative to choosing α by cross-validation could be to learn it. This can be accommodated into our variational framework by imposing a gamma prior on α and using a suitably parameterized gamma variational posterior. The only difference in the objective would be in the KL terms: the KL divergence of v will then also have to estimated by Monte Carlo approximation (because of dependency on α in the prior). Also, since gamma distribution does not have an analytic closed form KL divergence, the Weibull distribution can be a suitable alternative Zhang et al. (2018)." 
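To make the reparameterizations of Secs. F.2 and F.3 concrete before moving to the additional results, the toy sketch below draws relaxed Bernoulli (binary Concrete) and Kumaraswamy samples; the parameter values are made up for illustration and do not correspond to trained posteriors.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_concrete_sample(log_alpha, temperature, rng):
    """Relaxed Bernoulli draw of Sec. F.3: Y = (log alpha + logit(u)) / lambda, B = sigmoid(Y)."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
    y = (log_alpha + np.log(u) - np.log1p(-u)) / temperature
    return sigmoid(y)

def kumaraswamy_sample(a, b, rng):
    """Reparameterized draw nu = (1 - u^(1/b))^(1/a), standing in for the Beta posterior (Sec. F.2)."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(a))
    return (1.0 - u ** (1.0 / b)) ** (1.0 / a)

rng = np.random.default_rng(0)
# As the temperature anneals from 10.0 toward 0.25 (the schedule quoted earlier),
# the relaxed mask samples move from soft values toward nearly binary ones.
for lam in (10.0, 1.0, 0.25):
    print(lam, np.round(binary_concrete_sample(np.zeros(5), lam, rng), 2))
print("stick samples:", np.round(kumaraswamy_sample(np.full(4, 5.0), np.ones(4), rng), 2))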
}, { "heading": "H ADDITIONAL RESULTS: SUPERVISED CONTINUAL LEARNING", "text": "In this section, we provide some additional experimental results for supervised continual learning setup. Table 2 shows final mean accuracies over 5 tasks with deviations, obtained by all the approaches on various datasets. It also shows that our model performs comparably or better than the baselines. We have included some more models in this comparison namely, HIBNN (Kessler et al., 2020), UCL (Ahn et al., 2019), HAT (Serrà et al., 2018) and A-GEM (Chaudhry et al., 2019). Note that coreset based replay is not helping much in our case, In of VCL use of coresets performs better since it forces all parameters to be shared leading to catastrophic forgetting. Our method has very less catastrophic forgetting hence the use of coresets does not improve performance significantly. Although in cases where we do not grow the model size dynamically and keep feeding tasks to it even after the model has reached its capacity (model will be forced to share more parameters), it will lead to forgetting and their use of coresets might help as it did for VCL." }, { "heading": "H.1 LEARNED NETWORK STRUCTURES", "text": "In this section, we analyse the network structures that were learned after training our model. As\nwe can see in Fig. 6(a), the masks are captured on the pixel values where the digits in MNIST\ndatasets have high value and zeros elsewhere which represents that our models adapts with respect to data complexity and only uses those weights that are required for the task. Due to the use of the IBP prior, the number of active weights tends to shrink towards the first few nodes of the first hidden layer. This observation enforces that our idea of using IBP prior to learn the model structure based on data complexity is indeed working. Similar behaviour can be seen in notMNIST and fashionMNIST in Fig. 6(b and c).\nOn the other hand Fig 7 (left) shows the sharing of weights between subsequent tasks of different datasets. It can be observed that the tasks that are similar at input level of representation have more overlapping/sharing of parameters (e.g split MNIST) in comparison to those that are not very similar (e.g permuted MNIST). It also shows Fig 7 (right) that the amount of total network capacity used by our model differs for each task, which shows that complex tasks require more parameters as compared to easy tasks. Since the network size is fixed, the amount of network usage for all previous tasks tends to converge towards 100 percent. This promotes parameter sharing but also introduces forgetting, since the network is forced to share parameters and is not able to learn new nodes." }, { "heading": "H.2 ADDITIONAL PERMUTED MNIST RESULT", "text": "We have done our experiments with separate heads for each task of permuted MNIST. Some approaches use a single head for permuted MNIST task and don’t task labels at test-time. Here we compare some of the baselines (that supports single head) with our model (single head) on Permuted MNIST for 10 tasks. We also report number of epochs and average time to run for a rough comparision of time complexity taken by each model. 
Method Epochs/Task Time/Task (sec) Avg acc (10 tasks)
Ours 10 142 0.9794
VCL 100 380 0.9487
EwC 10 51 0.9173

To justify the choice of a single hidden layer with 200 units in the MNIST-like experiments, we compare our model on the Permuted MNIST experiment with multiple network depths and with separate heads. From Table 3 we can conclude that a single hidden layer is sufficient for obtaining good enough results.

Network hidden layer sizes Avg accuracy (5 tasks)
[200] 98.180 ± 0.187
[100, 50] 98.188 ± 0.163
[250, 100, 50] 98.096 ± 0.152

Table 3: Comparing performance on Permuted MNIST under different network configurations

Further, to analyse the performance decrease and the generality of the approach with the number of tasks, we perform the Permuted MNIST experiment with separate heads and a single hidden layer of 200 units for different numbers of tasks. Table 4 shows that the model is quite stable and that performance does not drop much even with a large number of tasks for a fixed model size.

[Figure 7: Percentage weight sharing between tasks (left), percentage of network capacity already used by previous tasks (right); shown for split MNIST, notMNIST, fashion MNIST, and permuted MNIST.]" }, { "heading": "H.3 ADDITIONAL CIFAR RESULT", "text": "MNIST data experiments are relatively easy to model, and an approach might not generalize to more complex datasets such as image or textual data. This section includes extra results on the cifar-10 and cifar-100 datasets, with comparisons to some very strong baselines, to observe performance under more complex settings.

Table 5 shows that our approach is comparable to some strong baselines like HAT and VCL on complex tasks like cifar-10 and cifar-100 classification, suggesting that it can be generalized to more complex task settings. For split cifar-100 (20 tasks) each task is a 5-class classification task, and split cifar-10 has 2-class classification tasks." }, { "heading": "H.4 OTHER METRICS", "text": "We quantified and observed the forward and backward transfer of our model and VCL, using the three metrics given in Díaz-Rodríguez et al. (2018), on the Permuted MNIST dataset, as follows:

ACCURACY is defined as the overall model performance averaged over all the task pairs as follows:

Acc = ( ∑_{i≥j} R_{i,j} ) / ( N(N−1)/2 )

where R_{i,j} is the test classification accuracy of the model on task t_j after observing the last sample from task t_i.

FORWARD TRANSFER is the ability of previously learnt tasks to improve performance on a new task and is given by:

FWT = ( ∑_{i<j} R_{i,j} ) / ( N(N−1)/2 )

BACKWARD TRANSFER is the ability of a newly learned task to affect the performance of previous tasks.
It can be defined as:

BWT = ( ∑_{i=2}^{N} ∑_{j=1}^{i−1} (R_{i,j} − R_{j,j}) ) / ( N(N−1)/2 )

We compare our model with VCL and other baselines over these three metrics in Table 6.

We can observe that the backward transfer of our model is higher than that of most baselines, which shows that our approach also suffers from less forgetting. On the other hand, forward transfer is close to random accuracy (0.1), which is due to the fact that the model has not yet been trained on the corresponding class labels but is asked to predict them. So this metric is not very useful here; an alternative would be to train a linear classifier on the representations learned after each task and evaluate it on future tasks." }, { "heading": "I UNSUPERVISED CONTINUAL LEARNING", "text": "Here we describe the complete generative model for our unsupervised continual learning approach. The generative story for the unsupervised setting can be written as follows (for brevity we have omitted the task id t):

B^l ∼ IBP(α), V^l_{d,k} ∼ N(0, σ_0^2), W^l = B^l ⊙ V^l

W^out_{d,k} ∼ N(0, σ_0^2), Z_n ∼ N(µ_z, σ_z^2)

X_n ∼ Bernoulli(σ(W^out φ_NN(W, Z_n)))

where µ_z, σ_z^2 are the prior parameters of the latent representation; they can either be fixed or learned, and σ is the sigmoid function. The stick-breaking process for the IBP prior remains the same here as well. For inference, we once again resort to a structured mean-field assumption:

q(Z, V, B, v) = q(Z | B, V, v, X) q(V) q(B | v) q(v)

where q(Z | B, V, v, X) = ∏_{n=1}^{N} N(µ_φNN, σ^2_φNN), and φ_NN is the IBP-masked neural network used for amortization of the Gaussian posterior parameters. The rest of the variational posteriors are factorized in a similar way as in the supervised approach. The evidence lower bound is calculated as explained in Section 3.3." }, { "heading": "I.1 ADDITIONAL EXPERIMENTAL RESULTS FOR UNSUPERVISED CONTINUAL LEARNING", "text": "In this section, we show further results for unsupervised continual learning. Fig 10 shows, for the MNIST and notMNIST datasets, how the likelihoods vary for individual tasks as subsequent tasks arrive. It can be observed that the individual task likelihoods obtained by our model are better than
As shown in the table, the representations learned by other baselines are not very useful (as evidenced by the large test errors), since the latent space are not shared among the tasks, whereas our model uses a shared latent space (yet modulated for each task based on the learned task-specific mask) which results in effective latent representation learning." }, { "heading": "J TASK AGNOSTIC SETTING", "text": "We extended our unsupervised continual learning model to a generative mixture model, where each mixture component is considered as a task distribution (i.e p(X) = ∑K k=1 p(X|t = k)p(t = k) with t representing the task identity). Here, p(t = k) can be assumed to be a uniform distribution but it fails to consider the degree upto which each mixture is being used. Therefore, we keep a count over the number of instances belonging to each task and use that as prior (i.e p(t = k) = NkN , with Nk being effective number of instances belonging to task k and N = ∑ kNk).\nDETECTING BOUNDARIES Inspired from Rao et al. (2019a), we rely on a threshold to determine if the data point is an instance from a new task or not. During training, any instance with Ep(tn|xn)(ELBOtn) less than threshold Tnew is added to a bufferDnew. Once the bufferDnew reaches\na fixed size limitM , we extend our network with new task parameters and train our network onDnew, with known task labels (i.e p(y = T + 1) = 1 where T is total number of tasks learned)\nSELECTIVE TRAINING Note that training this mixture model will require us to have all task specific variational parameters to be present at every time step unlike the case in earlier settings where we only need to store the masks and can discard the variational parameters of previously seen tasks. This will result in storage problems since the number of parameters will grow linearly with the number of tasks. To overcome this issue we fix the task specific mask parameters and prior parameters before the network is trained on new task instances. After the task specific parameters have been fixed, the arrival of data belonging to a previously seen task tprev is handled by training the network parameters with task specific masks Bprev .\nREPRESENTATION LEARNING It makes more sense do learn representations when we don’t have target class labels or task labels. As discussed, we trained our model using a gating mechanism with a threshold value of −130. Fig 13 qualitatively shows the t-SNE plots and reconstruction for each class data points. Based on these results, we can conclude that the task boundaries are well understood and separated by our model." } ]
2020
null
SP:d27e98774183ece8d82b87f1e7067bf2a28a4fca
[ "This paper describes a system for separating \"on-screen\" sounds from \"off-screen\" sounds in an audio-visual task, meaning sounds that are associated with objects that are visible in a video versus not. It is trained to do this using mixture invariant training to separate synthetic mixtures of mixtures. It is evaluated on a subset of the YFCC100m that is annotated by human raters as to whether the clips have on-screen, off-screen, or both types of sounds, with the predictions of a previously described model (Jansen et al, 2020) helping to reduce the number with only off-screen sounds. The predictions are evaluated in terms of how well they can estimate the true on-screen sound (in terms of SI-SNR) and how well they can reject off-screen sound (in terms of a metric called off-screen suppression ratio, OSR). The results show that the system can successfully distinguish between on- and off-screen sound, but that different training regimens lead to different tradeoffs in these two metrics. The system with the best SI-SNR (8.0 dB) is trained using just data from the previous model along with the mixture invariant training criterion." ]
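Since the summary above scores on-screen reconstruction with SI-SNR, a hedged sketch of that metric as commonly defined (Le Roux et al., 2019) is given below; the off-screen suppression ratio (OSR) is not sketched because its exact definition is specific to the paper. The signals and noise level are synthetic placeholders.

import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB (Le Roux et al., 2019), the on-screen reconstruction metric mentioned above."""
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    scale = np.dot(estimate, target) / (np.dot(target, target) + eps)   # project the estimate onto the target
    projection = scale * target
    noise = estimate - projection
    return 10.0 * np.log10((projection @ projection + eps) / (noise @ noise + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                       # one second of synthetic audio at 16 kHz
estimate = clean + 0.3 * rng.standard_normal(16000)      # a mildly corrupted estimate
print(f"SI-SNR: {si_snr(estimate, clean):.1f} dB")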
Recent progress in deep learning has enabled many advances in sound separation and visual scene understanding. However, extracting sound sources which are apparent in natural videos remains an open problem. In this work, we present AudioScope, a novel audio-visual sound separation framework that can be trained without supervision to isolate on-screen sound sources from real in-the-wild videos. Prior audio-visual separation work assumed artificial limitations on the domain of sound classes (e.g., to speech or music), constrained the number of sources, and required strong sound separation or visual segmentation labels. AudioScope overcomes these limitations, operating on an open domain of sounds, with variable numbers of sources, and without labels or prior visual segmentation. The training procedure for AudioScope uses mixture invariant training (MixIT) to separate synthetic mixtures of mixtures (MoMs) into individual sources, where noisy labels for mixtures are provided by an unsupervised audio-visual coincidence model. Using the noisy labels, along with attention between video and audio features, AudioScope learns to identify audio-visual similarity and to suppress off-screen sounds. We demonstrate the effectiveness of our approach using a dataset of video clips extracted from open-domain YFCC100m video data. This dataset contains a wide diversity of sound classes recorded in unconstrained conditions, making the application of previous methods unsuitable. For evaluation and semi-supervised experiments, we collected human labels for presence of on-screen and off-screen sounds on a small subset of clips.
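The training recipe described in the abstract, separating a synthetic mixture of mixtures (MoM) with mixture invariant training, can be illustrated with a brute-force sketch of the MixIT assignment search. Squared error stands in for the negative-SNR objective used in practice (Wisdom et al., 2020), and all array names, shapes, and values are assumptions made for the example.

import itertools
import numpy as np

def mixit_loss(est_sources, mix1, mix2):
    """Brute-force mixture invariant training (MixIT) loss for one mixture of mixtures.

    est_sources : (M, T) separated sources produced from the input MoM mix1 + mix2.
    mix1, mix2  : (T,) the two reference mixtures that were summed to form the training input.
    Each estimated source is assigned to exactly one reference mixture; the best assignment
    (scored here with squared error as a stand-in for the negative-SNR loss) defines the loss.
    """
    best = np.inf
    for assign in itertools.product([0, 1], repeat=est_sources.shape[0]):  # columns of the 2 x M mixing matrix
        a = np.asarray(assign)
        err = np.sum((mix1 - est_sources[a == 0].sum(axis=0)) ** 2) + \
              np.sum((mix2 - est_sources[a == 1].sum(axis=0)) ** 2)
        best = min(best, err)
    return best

rng = np.random.default_rng(0)
sources = rng.standard_normal((4, 8000))                      # four hypothetical ground-truth sources
mix1, mix2 = sources[:2].sum(axis=0), sources[2:].sum(axis=0)  # two reference mixtures forming the MoM
print(mixit_loss(est_sources=sources, mix1=mix1, mix2=mix2))   # a perfect separation yields (near) zero loss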
[ { "affiliations": [], "name": "ON-SCREEN SOUNDS" }, { "affiliations": [], "name": "Efthymios Tzinis" }, { "affiliations": [], "name": "Scott Wisdom" }, { "affiliations": [], "name": "Aren Jansen" }, { "affiliations": [], "name": "Shawn Hershey" }, { "affiliations": [], "name": "Tal Remez" }, { "affiliations": [], "name": "Daniel P.W. Ellis" }, { "affiliations": [], "name": "John R. Hershey" } ]
[ { "authors": [ "Triantafyllos Afouras", "Andrew Owens", "Joon Son Chung", "Andrew Zisserman" ], "title": "Self-supervised learning of audio-visual objects from video", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Look, listen and learn", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Proc. International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Lukas Drude", "Daniel Hasenklever", "Reinhold Haeb-Umbach" ], "title": "Unsupervised training of a deep clustering model for multichannel blind source separation", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Ariel Ephrat", "Inbar Mosseri", "Oran Lang", "Tali Dekel", "Kevin Wilson", "Avinatan Hassidim", "William T Freeman", "Michael Rubinstein" ], "title": "Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Chuang Gan", "Deng Huang", "Hang Zhao", "Joshua B Tenenbaum", "Antonio Torralba" ], "title": "Music gesture for visual sound separation", "venue": "In Proc. IEEE International Conference on Computer Vision (CVPR),", "year": 2020 }, { "authors": [ "Ruohan Gao", "Kristen Grauman" ], "title": "Co-separating sounds of visual objects", "venue": "In Proc. IEEE International Conference on Computer Vision (CVPR),", "year": 2019 }, { "authors": [ "Ruohan Gao", "Rogerio Feris", "Kristen Grauman" ], "title": "Learning to separate object sounds by watching unlabeled video", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jort F Gemmeke", "Daniel P.W. Ellis", "Dylan Freedman", "Aren Jansen", "Wade Lawrence", "R Channing Moore", "Manoj Plakal", "Marvin Ritter" ], "title": "Audio set: An ontology and human-labeled dataset for audio events", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Rohit Girdhar", "Deva Ramanan" ], "title": "Attentional pooling for action recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David Harwath", "Adria Recasens", "Dídac Surís", "Galen Chuang", "Antonio Torralba", "James Glass" ], "title": "Jointly discovering visual objects and spoken words from raw sensory input", "venue": "In Proc. 
European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "John R Hershey", "Michael Casey" ], "title": "Audio-visual sound separation via hidden Markov models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2002 }, { "authors": [ "John R Hershey", "Javier R Movellan" ], "title": "Audio vision: Using audio-visual synchrony to locate sounds", "venue": "In Advances in Neural Information Processing Systems,", "year": 2000 }, { "authors": [ "Jen-Cheng Hou", "Syu-Siang Wang", "Ying-Hui Lai", "Yu Tsao", "Hsiu-Wen Chang", "Hsin-Min Wang" ], "title": "Audio-visual speech enhancement using multimodal deep convolutional neural networks", "venue": "IEEE Transactions on Emerging Topics in Computational Intelligence,", "year": 2018 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Di Hu", "Zheng Wang", "Haoyi Xiong", "Dong Wang", "Feiping Nie", "Dejing Dou" ], "title": "Curriculum audiovisual learning", "venue": "arXiv preprint arXiv:2001.09414,", "year": 2020 }, { "authors": [ "Aren Jansen", "Daniel PW Ellis", "Shawn Hershey", "R Channing Moore", "Manoj Plakal", "Ashok C Popat", "Rif A Saurous" ], "title": "Coincidence, categorization, and consolidation: Learning to recognize sounds with minimal supervision", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Ilya Kavalerov", "Scott Wisdom", "Hakan Erdogan", "Brian Patton", "Kevin Wilson", "Jonathan Le Roux", "John R. Hershey" ], "title": "Universal sound separation", "venue": "In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA),", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proc. International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Qiuqiang Kong", "Yuxuan Wang", "Xuchen Song", "Yin Cao", "Wenwu Wang", "Mark D Plumbley" ], "title": "Source separation with weakly labelled data: An approach to computational auditory scene analysis", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jonathan Le Roux", "Scott Wisdom", "Hakan Erdogan", "John R. Hershey" ], "title": "SDR–half-baked or well done", "venue": "In Proc. 
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Oded Maron", "Tomás Lozano-Pérez" ], "title": "A framework for multiple-instance learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 1998 }, { "authors": [ "Daniel Michelsanti", "Zheng-Hua Tan", "Shi-Xiong Zhang", "Yong Xu", "Meng Yu", "Dong Yu", "Jesper Jensen" ], "title": "An overview of deep-learning-based audio-visual speech enhancement and separation", "venue": null, "year": 2008 }, { "authors": [ "Tsubasa Ochiai", "Marc Delcroix", "Yuma Koizumi", "Hiroaki Ito", "Keisuke Kinoshita", "Shoko Araki" ], "title": "Listen to what you want: Neural network-based universal sound selector", "venue": "In Proc. Interspeech,", "year": 2020 }, { "authors": [ "Andrew Owens", "Alexei A Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "F. Pishdadian", "G. Wichern", "J. Le Roux" ], "title": "Finding strength in weakness: Learning to separate sounds with weak supervision", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2020 }, { "authors": [ "Andrew Rouditchenko", "Hang Zhao", "Chuang Gan", "Josh McDermott", "Antonio Torralba" ], "title": "Selfsupervised audio-visual co-segmentation", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Prem Seetharaman", "Gordon Wichern", "Jonathan Le Roux", "Bryan Pardo" ], "title": "Bootstrapping singlechannel source separation via unsupervised spatial clustering on stereo mixtures", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Arda Senocak", "Tae-Hyun Oh", "Junsik Kim", "Ming-Hsuan Yang", "In So Kweon" ], "title": "Learning to localize sound source in visual scenes", "venue": "In Proc. IEEE International Conference on Computer Vision (CVPR),", "year": 2018 }, { "authors": [ "Bart Thomee", "David A Shamma", "Gerald Friedland", "Benjamin Elizalde", "Karl Ni", "Douglas Poland", "Damian Borth", "Li-Jia Li" ], "title": "Yfcc100m: The new data in multimedia research", "venue": "Communications of the ACM,", "year": 2016 }, { "authors": [ "Yapeng Tian", "Jing Shi", "Bochen Li", "Zhiyao Duan", "Chenliang Xu" ], "title": "Audio-visual event localization in unconstrained videos", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Efthymios Tzinis", "Shrikant Venkataramani", "Paris Smaragdis" ], "title": "Unsupervised deep clustering for source separation: Direct learning from mixtures using spatial information", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Efthymios Tzinis", "Scott Wisdom", "John R. Hershey", "Aren Jansen", "Daniel P.W. Ellis" ], "title": "Improving universal sound separation using sound classification", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Emmanuel Vincent", "Rémi Gribonval", "Cédric Févotte" ], "title": "Performance measurement in blind audio source separation", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2006 }, { "authors": [ "Scott Wisdom", "John R. 
Hershey", "Kevin Wilson", "Jeremy Thorpe", "Michael Chinen", "Brian Patton", "Rif A. Saurous" ], "title": "Differentiable consistency constraints for improved deep speech enhancement", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Scott Wisdom", "Efthymios Tzinis", "Hakan Erdogan", "Ron J. Weiss", "Kevin Wilson", "John R. Hershey" ], "title": "Unsupervised sound separation using mixture invariant training", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Scott Wisdom", "Hakan Erdogan", "Daniel Ellis", "Romain Serizel", "Nicolas Turpault", "Eduardo Fonseca", "Justin Salamon", "Prem Seetharaman", "John Hershey" ], "title": "What’s all the fuss about free universal sound separation data", "venue": "In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2021 }, { "authors": [ "Yu Wu", "Linchao Zhu", "Yan Yan", "Yi Yang" ], "title": "Dual attention matching for audio-visual event localization", "venue": "In Proc. IEEE International Conference on Computer Vision (CVPR),", "year": 2019 }, { "authors": [ "Xudong Xu", "Bo Dai", "Dahua Lin" ], "title": "Recursive visual sound separation using minus-plus net", "venue": "In Proc. IEEE International Conference on Computer Vision (CVPR),", "year": 2019 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Andrew Rouditchenko", "Carl Vondrick", "Josh McDermott", "Antonio Torralba" ], "title": "The sound of pixels", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Wei-Chiu Ma", "Antonio Torralba" ], "title": "The sound of motions", "venue": "In Proc. IEEE International Conference on Computer Vision (CVPR),", "year": 2019 }, { "authors": [ "Lingyu Zhu", "Esa Rahtu" ], "title": "Separating sounds from a single image", "venue": "arXiv preprint arXiv:2007.07984,", "year": 2020 }, { "authors": [ "Lingyu Zhu", "Esa Rahtu" ], "title": "Visually guided sound source separation using cascaded opponent filter network", "venue": "In Proc. Asian Conference on Computer Vision,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Audio-visual machine perception has been undergoing a renaissance in recent years driven by advances in large-scale deep learning. A motivating observation is the interplay in human perception between auditory and visual perception. We understand the world by parsing it into the objects that are the sources of the audio and visual signals we can perceive. However, the sounds and sights produced by these sources have rather different and complementary properties. Objects may make sounds intermittently, whereas their visual appearance is typically persistent. The visual percepts of different objects tend to be spatially distinct, whereas sounds from different sources can blend together and overlap in a single signal, making it difficult to separately perceive the individual sources.\n∗Work done during an internship at Google.\nThis suggests that there is something to be gained by aligning our audio and visual percepts: if we can identify which audio signals correspond to which visual objects, we can selectively attend to an object’s audio signal by visually selecting the object.\nThis intuition motivates using vision as an interface for audio processing, where a primary problem is to selectively preserve desired sounds, while removing unwanted sounds. In some tasks, such as speech enhancement, the desired sounds can be selected by their class: speech versus non-speech in this case. In an open-domain setting, the selection of desired sounds is at the user’s discretion. This presents a user-interface problem: it is challenging to select sources in an efficient way using audio. This problem can be greatly simplified in the audio-visual case if we use video selection as a proxy for audio selection, for example, by selecting sounds from on-screen objects, and removing off-screen sounds. Recent work has used video for selection and separation of speech (Ephrat et al., 2018; Afouras et al., 2020) or music (Zhao et al., 2018; Gao & Grauman, 2019; Gan et al., 2020). However, systems that address this for arbitrary sounds (Gao et al., 2018; Rouditchenko et al., 2019; Owens & Efros, 2018) may be useful in more general cases, such as video recording, where the sounds of interest cannot be defined in advance.\nThe problem of associating arbitrary sounds with their visual objects is challenging in an open domain. Several complications arise that have not been fully addressed by previous work. First, a large amount of training data is needed in order to cover the space of possible sound. Supervised methods require labeled examples where isolated on-screen sounds are known. The resulting data collection and labeling burden limits the amount and quality of available data. To overcome this, we propose an unsupervised approach using mixture invariant training (MixIT) (Wisdom et al., 2020), that can learn to separate individual sources from in-the-wild videos, where the on-screen and off-screen sounds are unknown. Another problem is that different audio sources may correspond to a dynamic set of on-screen objects in arbitrary spatial locations. We accommodate this by using attention mechanisms that align each hypothesized audio source with the different spatial and temporal positions of the corresponding objects in the video. Finally we need to determine which audio sources appear on screen, in the absence of strong labels. This is handled using a weakly trained classifier for sources based on audio and video embeddings produced by the attention mechanism." 
}, { "heading": "2 RELATION TO PREVIOUS WORK", "text": "Separation of arbitrary sounds from a mixture, known as “universal sound separation,” was recently shown to be possible with a fixed number of sounds (Kavalerov et al., 2019). Conditional information about which sound classes are present can improve separation performance (Tzinis et al., 2020). The FUSS dataset (Wisdom et al., 2021) expanded the scope to separate a variable number of sounds, in order to handle more realistic data. A framework has also been proposed where specific sound classes can be extracted from input sound mixtures (Ochiai et al., 2020). These approaches require curated data containing isolated sounds for training, which prevents their application to truly open-domain data and introduces difficulties such as annotation cost, accurate simulation of realistic acoustic mixtures, and biased datasets.\nTo avoid these issues, a number of recent works have proposed replacing the strong supervision of reference source signals with weak supervision labels from related modalities such as sound class (Pishdadian et al., 2020; Kong et al., 2020), visual input (Gao & Grauman, 2019), or spatial location from multi-microphone recordings (Tzinis et al., 2019; Seetharaman et al., 2019; Drude et al., 2019). Most recently, Wisdom et al. (2020) proposed mixture invariant training (MixIT), which provides a purely unsupervised source separation framework for a variable number of latent sources.\nA variety of research has laid the groundwork towards solving audio-visual on-screen source separation (Michelsanti et al., 2020). Generally, the two main approaches are to use audio-visual localization (Hershey & Movellan, 2000; Senocak et al., 2018; Wu et al., 2019; Afouras et al., 2020), or object detection networks, either supervised (Ephrat et al., 2018; Gao & Grauman, 2019; Gan et al., 2020) or unsupervised (Zhao et al., 2018), to predict visual conditioning information. However, these works only consider restricted domains such as speech (Hershey & Casey, 2002; Ephrat et al., 2018; Afouras et al., 2020) or music (Zhao et al., 2018; Gao & Grauman, 2019; Gan et al., 2020). Gao et al. (2018) reported results with videos from a wide domain, but relied on supervised visual object detectors, which precludes learning about the appearance of sound sources outside of a closed set of classes defined by the detectors. Rouditchenko et al. (2019) proposed a system for a wide domain of sounds,\nbut required sound class labels as well as isolated sounds from these classes. Our approach avoids the supervision of class labels and isolated sources in order to handle unknown visual and sound classes occurring in multi-source data.\nTowards learning directly from a less restrictive open domain of in-the-wild video data, Tian et al. (2018) learned to localize audio-visual events in unconstrained videos and presented an ad hoc dataset. Korbar et al. (2018) pretrained models to discern temporal synchronization of audio-video pairs, and demonstrated promising results on action recognition and audio classification. Arandjelovic & Zisserman (2017) took a similar approach by classifying audio-visual correspondences of pairs of one video frame and one second of audio. Hu et al. (2020) proposed a curriculum learning approach where the model gradually learns harder examples to separate.\nClosest to our work is the approach of Owens & Efros (2018), a self-supervised audio-visual onscreen speech separation system based on temporal audio-visual alignment. 
However, Owens & Efros (2018) assumes training videos containing only on-screen sources, and it is unclear how to adapt it to the case where training videos include off-screen sources.\nOur approach significantly differs from these prior works in that we do not restrict our domain to musical instruments or human speakers, and we train and test with real in-the-wild videos containing an arbitrary number of objects with no object class restrictions. Our proposed framework can deal with noisy labels (e.g. videos with no on-screen sounds), operate on a completely open-domain of in-the-wild videos, and effectively isolate sounds coming from on-screen objects.\nWe address the following task, which extends the formulation of the on-screen speech separation problem (Owens & Efros, 2018). Given an input video, the goal is to separate all sources that constitute the input mixture, and then estimate an audio-visual correspondence score for each separated source. These probability scores should be high for separated sources which are apparent on-screen, and low otherwise. The separated audio sources, weighted by their estimated on-screen probabilities, can be summed together to reconstruct the on-screen mixture. We emphasize that our approach is more generally applicable than previous proposals, because real-world videos may contain an unknown number of both on-screen and off-screen sources belonging to an undefined ontology of classes.\nWe make the following contributions in this paper:\n1. We provide the first solution for training an unsupervised, open-domain, audio-visual onscreen separation system from scratch on real in-the-wild video data, with no requirement on modules such as object detectors that require supervised data.\n2. We develop a new dataset for the on-screen audio-visual separation task, drawn from 2,500 hours of unlabeled videos from YFCC100m, and 55 hours of videos that are human-labeled for presence of on-screen and off-screen sounds." }, { "heading": "3 MODEL ARCHITECTURE", "text": "The overall architecture of AudioScope is built from the following blocks: an image embedding network, an audio separation network, an audio embedding network, an audio-visual attention mechanism, and an on-screen classifier (see Figure 2). The separation and embedding networks are based on prior work and are described in the following subsections. However, the main focus of this work is the overall architecture, as well as the training framework and loss functions.\nThe video is analyzed with the image embedding network, which generates local embeddings for each of 64 locations within each frame, as well as an embedding of the whole frame. These embeddings are used both as a conditioning input to an audio separation network, as well as an input for classification of the on-screen sounds. The audio separation network takes the mixed input waveform as input, and generates a fixed number of output waveforms, a variable number of which are non-zero depending on the estimated number of sources in the mixture. Conditioning on the video enables the separation to take advantage of cues about the sources present when performing separation. The audio embedding network is applied to each estimated source to obtain one embedding per frame for each source. 
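For orientation, the way these blocks compose can be summarized in a short forward-pass sketch; all module names, argument names, and shapes below are illustrative assumptions rather than the exact implementation.

```python
import torch

def audioscope_forward(mixture, frames, image_net, sep_net, audio_net,
                       attend_pool, st_attention, classifier):
    """Illustrative AudioScope-style forward pass; every module is a hypothetical callable."""
    # Visual analysis: local embeddings per frame location plus a global video embedding.
    local_vis, global_vis = image_net(frames)                 # e.g. (Fv, 64, N), (N,)
    # Separation conditioned on the video embeddings -> M estimated source waveforms.
    sources = sep_net(mixture, cond=local_vis)                # (M, T)
    probs = []
    for m in range(sources.shape[0]):
        z_a = attend_pool(audio_net(sources[m]))              # pooled per-source audio embedding
        z_av = st_attention(query=z_a, keys=local_vis)        # visual activity matching this source
        probs.append(classifier(torch.cat([global_vis, z_a, z_av])))  # on-screen probability
    probs = torch.stack(probs)                                # (M,)
    on_screen = (probs[:, None] * sources).sum(dim=0)         # probability-weighted remix
    return sources, probs, on_screen
```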
These audio embeddings are then pooled over time and used in the audio-visual spatio-temporal attention network to retrieve, for each source, a representation of the visual activity that best matches the\naudio, similar to the associative maps extracted from the internal network representations proposed by Harwath et al. (2018).\nThe architecture is designed to address the problem of unsupervised learning on in-the-wild opendomain data. First, because the target training videos can contain both on-screen and off-screen sounds, training a system to directly produce the audio of the target video would encourage inclusion of off-screen sounds as well as on-screen ones1. Our proposed multi-source separation network instead produces latent source estimates using an unsupervised MixIT objective, which has been shown to perform well at general sound separation (Wisdom et al., 2020). By decoupling separation from on-screen classification, our architecture facilitates the use of robust objectives that allow some of the sources to be considered off-screen, even if they appear in the soundtrack of the target videos.\nThe audio-visual attention architecture is motivated by the alignment problem between audio and video: sound source objects in video may be localized, may move over time, and may be present before and after the corresponding audio activity. Because of the open domain we cannot rely on a pre-defined set of object detectors to anchor the video representations of on-screen sources, as is done in some prior works (Ephrat et al., 2018; Gao & Grauman, 2019; Gan et al., 2020). Instead we propose attention to find the video representations that correspond to a source in a more flexible way.\nThe proposed strategy of temporal pooling of the audio embeddings, before using them in the spatiotemporal attention, allows the network to derive embeddings that represent the active segments of the source audio, and ignore the ambiguous silent regions. In the present model, video is analyzed at a low frame rate, and so the audio-visual correspondence is likely based on relatively static properties of the objects, rather than the synchrony of their motion with the audio. In this case, a single time-invariant representation of the audio may be sufficient as a proof of concept. However, in future work, with higher video frame rates, it may be worthwhile to consider using attention to align sequences of audio and video embeddings in order to detect synchrony in their activity patterns.\nThe on-screen classifier operates on an audio embedding for one estimated source, as well as the video embedding retrieved by the spatio-temporal attention mechanism, using a dense network. This presumably allows detection of the congruence between the embeddings. To provide additional context for this decision, a global video embedding, produced by temporal pooling, is provided as an additional input. Many alternative choices are possible for this classifier design, which we leave for future work, such as using a more complex classification architecture, or providing additional audio embeddings as input.\n1We train such a system in Appendix A.3.5, and find that it is not an effective approach." }, { "heading": "3.1 AUDIO SEPARATION NETWORK", "text": "The separation networkMs architecture consists of learnable convolutional encoder and decoder layers with an improved time-domain convolutional network (TDCN++) masking network (Wisdom et al., 2020). 
A mixture consistency projection (Wisdom et al., 2019) is applied to constrain separated sources to add up to the input mixture. The separation network processes a T -sample input mixture waveform and outputs M estimated sources ŝ ∈ RM×T . Internally, the network estimates M masks which are multiplied with the activations of the encoded input mixture. The time-domain signals ŝ are computed by applying the decoder, a transposed convolutional layer, to the masked coefficients." }, { "heading": "3.2 AUDIO EMBEDDING NETWORK", "text": "For each separated source ŝm, we extract a corresponding global audio embedding using the MobileNet v1 architecture (Howard et al., 2017) which consists of stacked 2D separable dilated convolutional blocks with a dense layer at the end. This network Ma first computes log Mel-scale spectrograms with Fa audio frames from the time-domain separated sources, and then applies stacks of depthwise separable convolutions to produce the Fa ×N embedding matrix Zam, which contains an N -dimensional row embedding for each frame. An attentional pooling operation (Girdhar & Ramanan, 2017) is used, for each source, m, to form a static audio embedding vector zam = attend(Z̄ a m, Z a m, Z a m), where the average embedding Z̄ a m = 1 Fa ∑ i Z a m,i is the query vector for source m. The attention mechanism (Bahdanau et al., 2015) is defined as follows:\nattend(q,K, V ) = αT fV(V ), α = softmax(tanh (fK(K)) tanh (fq(q)) T ), (1)\nwith query row vector q, the attention weight distribution column vector α, key matrix K, value matrix V , and trainable row-wise dense layers fq, fV, fK, all having conforming dimensions." }, { "heading": "3.3 IMAGE EMBEDDING NETWORK", "text": "To extract visual features from video frames, we again use a MobileNet v1 architecture. This visual embedding model Mv is applied independently to each one of the Fv input video frames and a static-length embedding is extracted for each image Zvj , j ∈ {1, . . . , Fv}. Conditioning separation network with the temporal video embedding: The embeddings of the video input Zvj can be used to condition the separation network (Tzinis et al., 2020). Specifically, the image embeddings are fed through a dense layer, and a simple nearest neighbor upsampling matches the time dimension to the time dimension of the intermediate separation network activations. These upsampled and transformed image embeddings are concatenated with the intermediate TDCN++ activations and fed as input to the separation network layers.\nGlobal video embedding: A global embedding of the video input is extracted using attentional pooling over all video frames, given by zvg = attend(Z̄v, Zv, Zv), where the average embedding Z̄v = 1Fv ∑ j Z v j is the query vector.\nLocal spatio-temporal video embedding: We also use local features extracted from an intermediate level in the visual convolutional network, that has 8 × 8 spatial locations. These are denoted Zvlk , where k = (j, n) indexes video frame j and spatial location index n. These provide spatial features for identification of sources with visual objects to be used with audio-visual spatio-temporal attention." }, { "heading": "3.4 AUDIO-VISUAL SPATIO-TEMPORAL ATTENTION", "text": "An important aspect of this work is to combine audio and visual information in order to infer correspondence between each separated source and the relevant objects in video. This in turn will be used to identify which sources are visible on-screen. 
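Both the attentional pooling above and the spatio-temporal attention introduced next reuse the attend operator of Eq. (1). A minimal PyTorch sketch is given below; the shared output dimension and the unbatched vector shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Attend(nn.Module):
    """attend(q, K, V) = alpha^T f_V(V), alpha = softmax(tanh(f_K(K)) tanh(f_q(q))^T), cf. Eq. (1)."""
    def __init__(self, dim_q, dim_kv, dim_out):
        super().__init__()
        self.f_q = nn.Linear(dim_q, dim_out)
        self.f_K = nn.Linear(dim_kv, dim_out)
        self.f_V = nn.Linear(dim_kv, dim_out)

    def forward(self, q, K, V):
        # q: (dim_q,) query vector; K, V: (F, dim_kv) keys/values over frames or locations.
        scores = torch.tanh(self.f_K(K)) @ torch.tanh(self.f_q(q))   # (F,)
        alpha = torch.softmax(scores, dim=0)                         # attention weights over F
        return alpha @ self.f_V(V)                                   # (dim_out,) attended embedding
```

Querying with the time-averaged embedding and attending over its own frames gives the attentional pooling of Sections 3.2 and 3.3; querying with a pooled audio embedding and attending over the local video features gives the audio-visual retrieval described next.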
To this end, we employ an audio-visual spatiotemporal attention scheme by letting the network attend to the local features of the visual embeddings for each separated source. In this mechanism, we use the audio embedding zam as the query input for source m, and the key and value inputs are given by the spatio-temporal video embeddings, Zvl. As a result, the flattened version of the output spatio-temporal embedding, corresponding to the m-th source, is zavm = attend(z a m, Z vl, Zvl)." }, { "heading": "3.5 ON-SCREEN CLASSIFIER", "text": "To infer the visual presence each separated source, we concatenate the global video embedding zvg, the global audio embedding for each source zam, and the corresponding local spatio-temporal audio-visual embedding zavm . The concatenated vector is fed through a dense layer fC with a logistic activation: ŷm = logistic (fC ([zvg, zam, z av m ]))." }, { "heading": "3.6 SEPARATION LOSS", "text": "We use a MixIT separation loss (Wisdom et al., 2020), which optimizes the assignment of M estimated sources ŝ =Ms (x1 + x2) to two reference mixtures x1, x2 as follows:\nLsep (x1, x2, ŝ) = min A\n( LSNR (x1, [Aŝ]1) + LSNR (x2, [Aŝ]2) ) , (2)\nwhere the mixing matrix A ∈ B2×M is constrained to the set of 2×M binary matrices where each column sums to 1. Due to the constraints on A, each source ŝm can only be assigned to one reference mixture. The SNR loss for an estimated signal t̂ ∈ RT and a target signal t ∈ RT is defined as:\nLSNR(t, t̂) = 10 log10 ( ‖t− t̂‖2 + 10−3‖t‖2 ) . (3)" }, { "heading": "3.7 CLASSIFICATION LOSS", "text": "To train the on-screen classifier, we consider the following classification losses. These losses use the binary labels ym, where are given for supervised examples, and in the unsupervised case ym = A∗1,m for each source m, where A∗ is the optimial mixing matrix found by the minimization in (2). We also use the notationR = {m|ym = 1, m ∈ {1, . . . ,M}} to denote the set of positive labels. Exact binary cross entropy:\nLexact (y, ŷ) = M∑\nm=1\n( − ym log (ŷm) + (ym − 1) log (1− ŷm) ) . (4)\nMultiple-instance cross entropy: Since some separated sources assigned to the on-screen mixture are not on-screen, a multiple-instance (MI) (Maron & Lozano-Pérez, 1998) loss, which minimizes over the set of positive labelsR may be more robust:\nLMI (y, ŷ) = min m∈R\n( − log (ŷm)− ∑ m′ /∈R log (1− ŷm′) ) . (5)\nActive combinations cross entropy: An alternative to the MI loss, active combinations (AC), corresponds to the minimum loss over all settings ℘≥1 (R) of the labels s.t. at least one label is 1:\nLAC (y, ŷ) = min S∈℘≥1(R) ( − ∑ m∈S log (ŷm)− ∑ m′ /∈S log (1− ŷm′) ) . (6)\nwhere ℘≥1 (R) denotes the power set of indices with label of 1." }, { "heading": "4 EXPERIMENTAL FRAMEWORK", "text": "" }, { "heading": "4.1 DATA PREPARATION", "text": "In order to train on real-world audio-visual recording environments for our open-domain system, we use the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100m) (Thomee et al., 2016). The dataset is drawn from about 200,000 videos (2,500 total hours) of various lengths and covering a diverse range of semantic sound categories. By splitting on video uploader, we select 1,600 videos for training, and use the remaining videos for validation and test. We extract 5-second clips with a hop size of 1 second, resulting in around 7.2 million clips. 
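This clip-extraction step amounts to sliding a 5-second window with a 1-second hop over each video's audio and frames; a minimal sketch (the sample-rate and frame-rate defaults are stated only for illustration) is:

```python
CLIP_SEC, HOP_SEC = 5, 1  # 5-second clips extracted with a 1-second hop

def extract_clips(audio, frames, sr=16000, fps=1):
    """Slide a CLIP_SEC window over one video's waveform and its frame sequence.

    audio: 1-D sample sequence; frames: time-ordered frame sequence.
    sr (samples/s) and fps (frames/s) are illustrative defaults.
    """
    clips = []
    n_sec = len(audio) // sr
    for start in range(0, n_sec - CLIP_SEC + 1, HOP_SEC):
        a = audio[start * sr:(start + CLIP_SEC) * sr]
        f = frames[start * fps:(start + CLIP_SEC) * fps]
        clips.append((a, f))
    return clips
```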
Clips consist of a 5-second audio waveform sampled at 16 kHz and 5 video frames x(f), where each frame is a 128× 128× 3 RGB image.\nOur goal is to train our system completely unsupervised, but we sought to reduce the proportion of videos with no on-screen sounds. We thus created a filtered subset Df of YFCC100m of clips with a high audio-visual coincidence probability predicted by an unsupervised audio-visual coincidence prediction model (Jansen et al., 2020) trained on sounds from AudioSet (Gemmeke et al., 2017). The resulting selection is noisy, because the coincidence model is not perfect, and clips that have high audio-visual coincidence may contain both on-screen and off-screen sounds, or even no on-screen sounds. However, this selection does increase the occurrence of on-screen sounds, as shown below. The final filtered dataset consists of all clips (about 336,000) extracted from the 36,000 highest audio-visual coincidence scoring videos. The threshold for filtering was empirically set to keep a fair amount of diverse videos while ensuring that not too many off-screen-only clips were accepted.\nTo evaluate the performance of the unsupervised filtering and our proposed models, and to experiment with a small amount of supervised training data, we obtained human annotations for 10,000 unfiltered training clips, 10,000 filtered training clips, and 10,000 filtered validation/test clips. In the annotation process, the raters indicated “present” or “not present” for on-screen and off-screen sounds. Each clip is labeled by 3 individual raters, and is only considered on-screen-only or off-screen-only if raters are unanimous. We constructed an on-screen-only subset with 836 training, 735 validation, and 295 test clips, and an off-screen-only subset with 3,681 training, 836 validation, and 370 test clips.\nBased on human annotations, we estimate that for unfiltered data 71.3% of clips contain both on-andoff-screen sounds, 2.8% contain on-screen-only sounds, and 25.9% only off-screen sounds. For the filtered data, 83.5% of clips contain on-screen and off-screen sounds, 5.6% of clips are on-screenonly, and 10.9% are off-screen-only. Thus, the unsupervised filtering reduced the proportion of off-screen-only clips and increased the proportion of clips with on-screen sounds." }, { "heading": "4.2 TRAINING", "text": "Both audio and visual embedding networks were pre-trained on AudioSet (Gemmeke et al., 2017) for unsupervised coincidence prediction (Jansen et al., 2020) and fine-tuned on our data (see Appendix A.3.1 for ablation), whereas the separation network is trained from scratch using MixIT (2) on mixtures of mixtures (MoMs) from the audio of our data. All models are trained on 4 Google Cloud TPUs (16 chips) with Adam (Kingma & Ba, 2015), batch size 256, and learning rate 10−4.\nTo train the overall network, we construct minibatches of video clips, where the clip’s audio is either a single video’s soundtrack (“single mixture” example), or a mixture of two videos’ soundtracks (“MoM” example): NOn (noisy-labeled on-screen), SOff (synthetic off-screen-only), LOn (humanlabeled on-screen-only), and LOff (human-labeled off-screen-only). For all MoM examples, the second audio mixture is drawn from a different random video in the filtered data. Unsupervised minibatches consist of either 0% or 25% SOff examples, with the remainder as NOn. NOn examples are always MoMs, and SOff examples are evenly split between single mixtures and MoMs. 
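Because the per-source labels derived from NOn examples are noisy (a source assigned to the on-screen mixture may in fact be off-screen), the choice among the classification losses of Section 3.7 matters here. A minimal NumPy sketch of the exact (4), multiple-instance (5), and active-combinations (6) cross-entropies is given below; it assumes at least one positive label and is illustrative rather than the training implementation.

```python
import numpy as np
from itertools import chain, combinations

def _terms(y_hat):
    """Per-source -log(p) and -log(1 - p) terms."""
    return -np.log(y_hat), -np.log(1.0 - y_hat)

def exact_ce(y, y_hat):                                 # Eq. (4)
    pos, neg = _terms(y_hat)
    return np.sum(y * pos + (1 - y) * neg)

def multiple_instance_ce(y, y_hat):                     # Eq. (5)
    pos, neg = _terms(y_hat)
    R = np.flatnonzero(y == 1)                          # assumes R is non-empty
    return np.min(pos[R]) + np.sum(neg[y == 0])         # best single positive source

def active_combinations_ce(y, y_hat):                   # Eq. (6)
    pos, neg = _terms(y_hat)
    R = list(np.flatnonzero(y == 1))
    subsets = chain.from_iterable(combinations(R, k) for k in range(1, len(R) + 1))
    best = np.inf
    for S in map(set, subsets):                         # every non-empty subset of the positives
        best = min(best, sum(pos[m] for m in S)
                   + sum(neg[m] for m in range(len(y)) if m not in S))
    return best
```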
A NOn MoM uses video clip frames and audio from the filtered high-coincidence subset of our data, Df , and SOff MoMs combine video frames of a filtered clip with random audio drawn from the dataset Df . Semi-supervised minibatches additionally include LOn and LOff examples. Half of these examples in the minibatch are single-mixture examples, and the other half are MoM examples. LOn and LOff examples are constructed in the manner as NOn, except that the corresponding video clip is drawn from unanimously human-labeled on-screen-only videos and unanimously human-labeled offscreen-only videos, respectively. We experiment with using 0% or 25% SOff examples: (NOn, SOff) proportions of (50%, 0%) or (25%, 25%), respectively, with the remainder of the minibatch evenly split between LOn single-mixture, LOn MoM, LOff single-mixture, and LOff MoM.\nClassification labels ym for all separated sources ŝm in SOff and LOff examples are set to 0. For NOn and LOn examples, we set the label for each separated source as the first row of the MixIT mixing matrix (2): ym = A1,m. The MixIT separation loss (2) is used for all MoM example types." }, { "heading": "4.3 EVALUATION", "text": "All evaluations use human-labeled test videos, which have been unanimously labeled as containing either only on-screen or only off-screen sounds. Using this data, we construct four evaluation sets: on-screen single mixtures, off-screen single mixtures, on-screen MoMs, and off-screen MoMs. The single-mixture evaluations consist of only data drawn from the particular label, either on-screen or\noff-screen. Each on-screen (off-screen) MoM consists of an on-screen-only (off-screen-only) video clip, mixed with the audio from another random clip, drawn from the off-screen-only examples." }, { "heading": "4.3.1 ON-SCREEN DETECTION", "text": "Detection performance for the on-screen classifier is measured using the area under the curve of the weighted receiver operator characteristic (AUC-ROC). Specifically, we set the weight for each source’s prediction equal to the linear ratio of source power to input power, which helps avoid ambiguous classification decisions for inactive or very quiet sources. For single-mixture evaluations, positive labels are assigned for all separated sources from on-screen-only mixtures, and negative labels for all separated sources from off-screen-only mixtures. For on-screen MoM evaluations, labels for separated sources from on-screen MoMs are assigned using the first row of the oracle MixIT mixing matrix, and negative labels are assigned to sources separated from off-screen MoMs." }, { "heading": "4.3.2 SEPARATION", "text": "Since we do not have access to individual ground-truth reference sources for our in-the-wild data, we cannot evaluate the per-source separation performance. The only references we have are mixtures. Thus, we compute an estimate of the on-screen audio by combining the separated sources using classifier predictions: x̂on = ∑M m=1 pmŝm. For on-screen single mixture and MoM evaluations, we measure scale-invariant SNR (SI-SNR) (Le Roux et al., 2019), between x̂on and the reference on-screen-only mixture x(on). SI-SNR measures the fidelity between a target t ∈ RT and an estimate t̂ ∈ RT within an arbitrary scale factor in units of decibels:\nSI-SNR(t, t̂) = 10 log10 ‖αt‖2\n‖αt− t̂‖2 , α = argmina‖at− t̂‖2 =\ntT t̂\n‖t‖2 . 
(7)\nTo measure the degree to which AudioScope rejects off-screen audio, we define the off-screen suppression ratio (OSR), which is the ratio in decibels of the power of the input mixture to the power of the on-screen estimate x̂on. We only compute OSR for off-screen evaluation examples where the input mixture only contains off-screen audio. Thus, higher OSR implies greater suppression of off-screen sounds. The minimum value of OSR is 0 dB, which means that x̂on is equal to the input mixture, which corresponds to all on-screen classifier probabilities being equal to 1.\nIn some cases, SI-SNR and OSR might yield infinite values. For example, the estimate ŷ may be zero, in which case SI-SNR (7) is −∞ dB. This can occur when the input SNR of an on-screen mixture in a MoM is very low and none of the separated sources are assigned to it by MixIT. Conversely, if the estimate perfectly matches the target, SI-SNR can yield a value of∞ dB, which occurs for on-screen single mixture evaluation cases when the separated sources trivially add up to the on-screen input due to mixture consistency of the separation model. For off-screen examples, OSR can also be infinite if the separation model achieves perfect off-screen suppression by predicting zero for x̂on. To avoid including these infinite values, we elect to measure median SI-SNR and OSR." }, { "heading": "5 RESULTS", "text": "Results are shown in Table 1. Note that there is a trade-off between preservation of on-screen sounds, as measured by SI-SNR, and suppression of off-screen sounds, as measured by OSR: higher on-screen SI-SNR on on-screen examples generally means lower OSR on off-screen examples. Different classification losses have different operating points: for MoMs, compared to using the exact cross-entropy loss, models trained with active combinations or multiple instance loss achieve lower on-screen SI-SNR, while achieving more suppression (higher OSR) of off-screen sounds. Exact cross-entropy models achieve higher AUC-ROC for single mixtures and MoMs, and achieve better reconstruction of on-screen single mixtures at the expense of less rejection of off-screen mixtures.\nTraining only with the noisy labels provided by the unsupervised coincidence model (Jansen et al., 2020) achieves lower AUC-ROC compared to the semi-supervised condition that adds a small amount of human-labeled examples. Semi-supervised and unsupervised models achieve comparable onscreen SI-SNR, but semi-supervised models achieve better off-screen suppression. For example, the best on-screen SI-SNR for unsupervised and semi-supervised is 8.0 dB and 7.3 dB, respectively,\nwhile OSR is 5.3 dB and 10.7 dB. Using 25% synthetic off-screen particularly shifts the behavior of semi-supervised models by biasing them towards predicting lower probabilities of on-screen. This bias results in lower on-screen SI-SNR, yet very strong off-screen rejection (i.e. very large OSRs).\nFigure 3 shows scatter plots of input SI-SNR versus SI-SNR of MixIT or x̂on on-screen estimates. From these plots, it is clear that the models tend to improve on-screen SI-SNR more often than not, and that these improvements are most significant around ±10 dB input SI-SNR. Note that for MixIT, a number of points have a SI-SNR of −∞, which happens when MixIT assigns all separated sources to the off-screen mixture. OSR is sometimes∞ when AudioScope achieves excellent off-screen suppression by predicting nearly 0 for the on-screen audio from off-screen-only input. 
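For reference, both metrics can be written in a few lines of NumPy; the eps term is an illustrative numerical-stability assumption, and with eps = 0 the infinite values discussed above can indeed occur.

```python
import numpy as np

def si_snr(target, estimate, eps=1e-8):
    """Scale-invariant SNR in dB, Eq. (7)."""
    alpha = np.dot(target, estimate) / (np.dot(target, target) + eps)
    scaled = alpha * target
    return 10 * np.log10((np.sum(scaled ** 2) + eps) / (np.sum((scaled - estimate) ** 2) + eps))

def osr(mixture, on_screen_estimate, eps=1e-8):
    """Off-screen suppression ratio in dB: input power over on-screen-estimate power."""
    return 10 * np.log10((np.sum(mixture ** 2) + eps) / (np.sum(on_screen_estimate ** 2) + eps))
```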
To provide a sense of the qualitative performance of AudioScope, we include visualizations of best, worst, and typical predictions in the appendix, and the supplementary material contains audio-visual demos.\nTo benchmark AudioScope against other audio-visual separation approaches and measure performance on mismatched data, we evaluate on existing audio-visual separation test sets in Appendix A.2. We also performed a number of ablations for AudioScope, described in Appendix A.3." }, { "heading": "6 CONCLUSION", "text": "In this paper we have proposed the first solution for training an unsupervised, open-domain, audiovisual on-screen separation system, without reliance on prior class labels or classifiers. We demonstrated the effectiveness of our system using a small amount of human-labeled, in-the-wild videos. A recipe for these will be available on the project webpage: https://audioscope.github.io. In future work, we will explore more fine-grained visual features, especially synchrony, which we expect will be especially helpful when multiple instances of the same object are present in the video. We also plan to use our trained classifier to refilter YFCC100m to get better noisy labels for the presence of on-screen sounds, which should further improve the performance of the system." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 QUALITATIVE EXAMPLES", "text": "For a range of input SNRs, Figure 4 shows best-case examples of separating on-screen sounds with AudioScope, while Figure 5 shows failure cases. Figures 6 and 7 show random examples at various SNRs, comparing the outputs of semi-supervised SOff 0% models trained with either exact cross entropy (4) or active combinations cross entropy (6). Figure 6 shows the outputs of the two models on 7 random examples, and Figure 7 shows the outputs of the two models on 5 examples that have maximum absolute difference in terms of SI-SNR of the on-screen estimate.\nThe supplementary material includes audio-visual demos of AudioScope on single mixtures and MoMs with visualizations of MixIT assignments and predicted on-screen probabilities. For more examples please see: https://audioscope.github.io/." }, { "heading": "A.2 EVALUATION ON MISMATCHED DATA", "text": "To evaluate the generalization capability of AudioScope and facilitate a comparison to prior works, we evaluated our model using test data from an audio-visual speech enhancement task (Hou et al., 2018) as well as an audio-visual task derived from a single-source subset of AudioSet (Gao et al., 2018). In both cases, the evaluation is on a restricted domain, and the prior methods used both matched and supervised training on that domain. In contrast, the AudioScope model is trained on open-domain YFCC100m videos using unsupervised training. For all evaluations we use the unsupervised AudioScope model using 0% SOff and active combinations loss." }, { "heading": "A.2.1 EVALUATION ON AUDIO-VISUAL SPEECH ENHANCEMENT", "text": "Since our method can be used to separate on-screen sounds for arbitrary classes of sound, to compare to existing approaches we evaluate the trained AudioScope model on the more restricted domain of audio-visual speech enhancement. To that end, we used the Mandarin sentences dataset, introduced by Hou et al. (2018). The dataset contains video utterances of Mandarin sentences spoken by a native speaker. Each sentence is unique and contains 10 Chinese characters. The length of each utterance is approximately 3 to 4 seconds. 
Synthetic noise is added to each ground truth audio. Forty such videos are used as the official testing set. For our evaluation we regard the speech of the filmed speaker as the on-screen sounds and the interference as off-screen sounds. Thus, we can compute quality metrics for the on-screen estimate while comparing to speech enhancement methods. To compare to previously-published numbers, we use signal-to-distortion ratio (SDR) (Vincent et al., 2006), which measures signal-to-noise ratio within a linear filtering of the reference signal.\nTable 2 shows the comparison between Hou et al. (2018), Ephrat et al. (2018), AudioScope x̂on (on-screen estimate using predicted on-screen probabilities), AudioScope source with max ŷm (use separated source with highest predicted on-screen probability), AudioScope best source (oracle selection of the separated source with the highest SDR with on-screen reference), and AudioScope MixIT* (on-screen estimate using oracle binary on-screen weights using references). Note that the AudioScope models are trained with mismatched open-domain training data, whereas the others were trained on matched speech enhancement data. It can be seen that although non-oracle AudioScope estimates do not advance on state-of-the-art performance of speech enhancement-specific methods, the oracle AudioScope estimates improve over (Hou et al., 2018). Thus AudioScope show promising results on this challenging data which is not explicitly represented in its open-domain training set. We believe that by adding such data to our training set, perhaps by fine-tuning, AudioScope could improve its performance significantly on this more specific task, which we leave for future work." }, { "heading": "A.2.2 EVALUATION ON AUDIOSET-SINGLESOURCE", "text": "We evaluated AudioScope on the musical instrument portion of the AudioSet-SingleSource dataset (Gao et al., 2018), which is a small number of clips from AudioSet (Gemmeke et al., 2017) that have been verified by humans to contain single sources. We use the same procedure as Gao & Grauman (2019) to construct a MoM test set, which creates 105 synthetic mixtures from all pairs of 15 musical instrument classes. For each pair, audio tracks are mixed together, and we perform separation twice for each pair, conditioning on the video for each source. The results are shown in table 3.\nThe non-oracle AudioScope methods perform rather poorly, but the oracle methods, especially MixIT* (which matches the MixIT training loss), achieve state-of-the-art performance compared to\nmethods form the literature. This suggests that the on-screen classifier is less accurate on this data. Also, mixing the predicted AudioScope sources using the probabilities of the on-screen classifier may be suboptimal, and exploring alternative mixing methods to estimate on-screen audio is an avenue for future work. Fine-tuning on data for this specific task could also improve performance, which we also leave for future work." }, { "heading": "A.2.3 EVALUATION ON MUSIC", "text": "We also evaluated AudioScope on MUSIC (Zhao et al., 2018), which includes video clips of solo musical performances that have been verified by humans to contain single sources. We use the same procedure as Gao & Grauman (2019) to construct a MoM test set, which creates 550 synthetic mixtures from all 55 pairs of 11 musical instrument classes, with 10 random 10 second clips per pair2. 
For each pair, the two audio clips are mixed together, and we perform separation twice for each pair, conditioning on the video for each source. The results are shown in table 4.\nWe see a similar pattern compared to the results for AudioSet-SingleSource in Table 3: non-oracle methods that use the predicted on-screen probability ŷm do not perform very well. However, oracle selection of the best source, or oracle remixing of the sources, both achieve better performance than a number of recent specialized supervised in-domain systems from the literature, though they do not achieve state-of-the-art performance. These results seem to suggest that the predictions ŷm are less accurate for this restricted-domain task, but the excellent oracle results suggest potential. In particular, non-oracle performance could improve if the classifier were more accurate, perhaps by fine-tuning. Also, there may be better ways of combining separated sources together to reconstruct on-screen sounds.\n2Gao & Grauman (2019) did not provide the exact clip timestamps they used. We used a sliding window of 10 seconds with a hop of 5 seconds, and randomly selected 10 of these." }, { "heading": "A.3 ABLATIONS", "text": "We performed a number of ablations on AudioScope. The following subsections show the results of a number of ablations using either unsupervised or semi-supervised training. All models for these ablation use 0% SOff examples and the active combinations loss (6)." }, { "heading": "A.3.1 AUDIO AND VIDEO EMBEDDINGS", "text": "Table 5 shows the results of various ablations involving audio and video embeddings in the model.\nFirst, notice that removing video conditioning for the separation model reduces on-screen SI-SNR by 2 dB on single mixtures and 0.9 dB on MoMs, with negligible or slight improvement in OSR. Thus, we can conclude that visual conditioning does have some benefit for the model.\nNext, we consider training the audio and video embedding networks from scratch, instead of using the coincidence model weights pretrained using AudioSet (Jansen et al., 2020). Training from scratch is quite detrimental, as AUC-ROC decreases by a minimum of 0.13 and maximum of 0.23 across single-mixtures/MoMs and unsupervised/semi-supervised conditions. Furthermore, separation performance suffers, with on-screen SI-SNR dropping by multiple for all conditions.\nFinally, we consider removing the global video embedding, or both the global video embedding and audio embeddings, from the input of the on-screen classifier. This results in equivalent or slightly worse AUC-ROC, with equivalent or worse on-screen SI-SNR. For unsupervised training, removing both embeddings at the classifier input improves on-screen SI-SNR a bit (0.5 dB for single mixtures, 0.6 dB for MoMs) with a slight drop in OSR, though for semi-supervised on-screen SI-SNR drops by 3.7 dB for single mixtures and 0.5 dB for MoMs. Overall, the best result is achieved by including these embeddings at the classifier input." }, { "heading": "A.3.2 ATTENTIONAL POOLING", "text": "We tried decreasing the embedding dimension from 256 to 128, as well as replacing the attentional pooling with mean pooling for audio sources, video frames, or both. The results are shown in Table 6.\nDecreasing the embedding dimension reduces performance, dropping on-screen SI-SNR by 1.4 dB on single mixtures and 0.6 dB on MoMs, also with reduction in OSR. 
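Concretely, the mean-pooling variant replaces the attend-based pooling of Sections 3.2 and 3.3 with a plain average over frames, e.g. (an illustrative sketch, with attend standing for the trained attention module):

```python
import torch

def attentional_pool(Z, attend):
    """Default: query with the mean embedding and attend over frames; Z: (F, N)."""
    return attend(Z.mean(dim=0), Z, Z)

def mean_pool(Z):
    """Ablation: uniform average over frames, no learned attention."""
    return Z.mean(dim=0)
```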
Replacing attentional pooling with mean pooling generally does not change AUC-ROC or on-screen SI-SNR that much, but does result\nin a OSR reduction of at least 0.6 dB for single mixtures and 1.7 dB for MoMs. Thus, attentional pooling seems to have a beneficial effect in that it improves off-screen suppression, with equivalent classification and on-screen separation performance." }, { "heading": "A.3.3 DATA FILTERING", "text": "As described in section 4.1, we use an unsupervised audio-visual coincidence model (Jansen et al., 2020) to filter training videos for on-screen sounds. To ablate the benefit of this filtering, we tried using different combinations of filtered and unfiltered data for NOn examples, as described in section 4.2, which uses filterd data for both on-screen and off-screen mixtures. Filtered data has the advantage of less noisy on-screen labels, but the disadvantage that it lacks the variety of unfiltered data, being only 4.7% of the unfiltered data.\nThe results are shown in Table 7. For unsupervised training, unfiltered on-screen with filtered off-screen achieves improved performance in terms of AUC-ROC and on-screen SI-SNR, yet OSR decreases for MoMs. This suggests that in the absence of cleanly-labeled on-screen videos, a larger amount of data with noisier labels is better compares to a smaller amount of data with less noisy labels. However, for semi-supervised training that includes a small amount of cleanly-labeled on-screen examples, AUC-ROC is consistently worse for all ablations, and on-screen SI-SNR and OSR are generally equivalent or worse for all ablations. Thus, these ablations validate that using filtered data for both on-screen and off-screen components of NOn examples with semi-supervised training achieves the best results overall." }, { "heading": "A.3.4 NUMBER OF OUTPUT SOURCES", "text": "For all experiments in this paper, we generally used M = 4 output sources for the separation model, which is the maximum number of sources that it can predict. Here we see if increasing the number of output sources can improve performance. More output source slots provides a separation model with more flexibility in decomposing the input waveform, yet the drawback is that the model may over-separate (i.e. split sources into multiple components), and there is more pressure on the classifier to correctly group components of the on-screen sound together. The results are shown in Table 8.\nFor unsupervised training, increasing the number of output sources generally degrades AUC-ROC and on-screen SI-SNR, while boosting OSR a bit. Note that the MixIT* improves for MoMs with 8 output sources (10.5 dB→ 11.1 dB), which suggests the greater flexibility of the model, yet the\non-screen estimate x̂on is quite a bit worse (3.6 dB), also compared to on-screen SI-SNR for 4 output sources (6.3 dB).\nFor semi-supervised training, MixIT* performance also improves with more output sources, but AUC-ROC and on-screen SI-SNR decrease, suggesting the increased pressure on the classifier to make correct predictions for more, and potentially partial, sources. OSR increases with more output sources, which suggests the classifier biases towards predicting 0s more often. Thus, increasing the number of sources shifts the operating point of the model away from separating on-screen sounds and towards suppressing off-screen sounds." 
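This flexibility/pressure trade-off is visible directly in the MixIT objective of Eq. (2): with M output sources there are 2^M possible assignments of sources to the two reference mixtures, so larger M gives the separator more ways to fit the references while leaving more (and possibly partial) sources for the classifier to judge. A brute-force NumPy sketch of Eqs. (2)–(3), for illustration only (a practical implementation would batch and vectorize this), is:

```python
import numpy as np
from itertools import product

def snr_loss(t, t_hat):
    """Eq. (3): log of the error power, soft-thresholded by 1e-3 of the target power."""
    return 10 * np.log10(np.sum((t - t_hat) ** 2) + 1e-3 * np.sum(t ** 2))

def mixit_loss(x1, x2, s_hat):
    """Eq. (2): best assignment of the M rows of s_hat (M, T) to the two reference mixtures."""
    M = s_hat.shape[0]
    best = np.inf
    for assign in product([0, 1], repeat=M):       # each source goes to exactly one mixture
        a = np.array(assign)
        est1 = s_hat[a == 0].sum(axis=0)           # empty selections sum to zero
        est2 = s_hat[a == 1].sum(axis=0)
        best = min(best, snr_loss(x1, est1) + snr_loss(x2, est2))
    return best
```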
}, { "heading": "A.3.5 BASELINE SEPARATION MODEL", "text": "We also trained two-output baseline separation models without the on-screen classifier, where the first estimated source is the on-screen estimate x̂on with training target of on-screen audio, and the second estimated source is the off-screen estimate x̂off with training target of off-screen audio. These models were trained with or without video conditioning, using the negative SNR loss (3). The training data is exactly the same as in Table 1, with 0% SOff.\nFirst, notice that none of these models approach the performance of separation models that include the on-screen classifier, as shown in Table 1. Second, the unsupervised and semi-supervised models here achieve distinctly different operating points. Without video conditioning, the unsupervised model achieves a trivial solution, nearly equivalent to just outputting 1/2 the input mixture for each estimated source. Adding video conditioning for the unsupervised model actually reduces single-mixture performance a bit (66.2 dB to 29.7 dB).\nThe semi-supervised model without video conditioning is very poor at single-mixture on-screen SI-SNR (-18.0 dB), yet achieves quite high single-mixture OSR (51.1 dB). As indicated by the ISRs,\nthe model tends to prefer nearly-zero on-screen estimates, which may be due to the additional cleanlylabeled off-screen examples provided during training. For the video-conditioned semi-supervised model, single-mixture on-screen SI-SNR improves by quite a lot (-18.0 dB to 18.8 dB), but on-screen SI-SNR performance for on-screen MoMs is abysmal (-19.7 dB without visual conditioning, -5.3 dB with visual conditioning).\nOverall, we can conclude from these baselines that simply training a two-output separation model with on-screen and off-screen targets, even with visual conditioning, is not a feasible approach for our open-domain and noisily-labeled data." }, { "heading": "A.4 NEURAL NETWORK ARCHITECTURES", "text": "We briefly present the architectures used in this work for the separation network Ms, the audio embedding networkMa, and the image embedding networkMv, and referred in Sections 3.1, 3.2 and 3.3, respectively.\nWe present the architecture of the TDCN++ separation network in Table 10. The input to the separation network is a mixture waveform with T time samples and the output is a tensor containing the M estimated source waveforms ŝ ∈ RM×T . The input for the ith depth-wise (DW) separable convolutional block is the summation of all skip-residual connections and the output of the previous block. Specifically, there are the following skip connections defined w.r.t. their index i = 0, . . . , 31: 0 8, 0 16, 0 24, 8 16, 8 24 and 16 24.\nIn a similar way, in Table 11 we define the image and audio embedding networks, which use the same MobileNet v1 architecture (Howard et al., 2017) with different input tensors.\nThe extraction of each image embedding Zvj , j = 1, . . . , 5 relies on the application of the image embedding networkMv on top of each input video frame individually. Moreover, in order to extract the local video spatio-temporal embedding, we extract the output of the 8 × 8 convolutional map (denoted with a * in Table 11) for each input video frame and feed it through a dense layer in order to reduce its channel dimensions to 1. 
By concatenating all these intermediate convolutional maps we form the local spatio-temporal video embedding Zvl as specificed in Section 3.3.\nOn the other hand, we extract a time-varying embedding Zam for the mth separated source waveform by applying the audio embedding networkMa on overlapping audio segments and concatenating those outputs. The audio segments are extracted with an overlap of 86 windows or equivalently 0.86 seconds. Specifically, for each segment, we extract the mel-spectrogram representation from 96 windows with a length of 25ms and a hop size of 10ms forming the input for the audio embedding network as a matrix with size 96× 64, where 64 is the number of mel-features. After feeding this mel-spectrogram as an input to our audio embedding networkMa, we extract the corresponding static length representation for this segment Zaj , where j denotes the segment index." }, { "heading": "A.5 HUMAN EVALUATION", "text": "To determine the subjective quality of AudioScope predictions, we performed another round of human annotation on on-screen test MoM videos. The rating task is the same as the one used to annotate data, as described in Section 4.1, where raters were asked to mark the presence of on-screen sounds and off-screen sounds. All models for these evaluations are the same as the base model used in Appendix A.3: 0% SOff examples with active combinations loss (6). Each example was annotated by 3 raters, and the ultimate binary rating for each example is determined by majority.\nThe results for the on-screen MoM test set are shown in Table 12. We evaluated both the estimate x̂on computed by a weighted sum of the separated sources ŝm with the predicted probabilities ŷm, as well as the oracle remixture of separated sources to match the on-screen and off-screen reference audios (denoted by MixIT*). In this case, notice that all methods improve the percentage of videos rated as on-screen-only from 25.7% to about 37% or 38% for all methods.\nOverall, these human evaluation results suggest lower performance than the objective metrics in Table 1. One reason for this is that the binary rating task is ill-suited towards measuring variable levels of off-screen sounds. That is, a video will be rated as on-screen only if there is absolutely no off-screen sound. However, even if there is quiet off-screen sound present, or artifacts from the separation, a video will be rated as having off-screen sound. Thus, the proportion of human-rated on-screen-only videos can be interpreted as the number of cases where the model did a perfect job at removing off-screen sounds.\nWe plan to run new human evaluation tasks with better-matched questions. For example, we could ask raters to use a categorical scale, e.g. mean opinion score from 1 to 5. Another idea is to ask raters to score the loudness of on-screen sounds with respect to off-screen sounds on a sliding scale, where the bottom of the scale means on-screen sound is much quieter than off-screen sound, middle of the scale means on-screen sound is equal in loudness to off-screen sound, and top of the scale means on-screen sound is much louder than off-screen sound." }, { "heading": "A.6 PERFORMANCE ANALYSIS OF BEST MODELS", "text": "In Figure 8, we show the distributions of overall SI-SNR and SI-SNR improvement, as well as OSR for the best unsupervised and semi-supervised models. 
We have neglected outliers (including infinite values) in both axes in order to focus on the most common samples.\nIn Figure 9, for on-screen MoMs we show the distribution of each performance metric for these models versus different ranges of input SI-SNRs lying between [−30, 30]dB, both for absolute onscreen SI-SNR (Figure 9a) and on-screen SI-SNR improvement (Figure 9b). For off-screen test MoM videos, we plot the distribution of OSR for different ranges of input mixture power lying between [−40, 0]dB (Figure 9c). For on-screen SI-SNR and SI-SNRi, notice that the performance of the unsupervised and semisupervised models is similar except for the [−30,−20] dB range of input SI-SNR. In Figure 9c, note that both models achieve OSR of at least 0 dB for 75% of examples, and thus suppress off-screen sounds for at least 75% of the test data.\n[-30, -20] [-20, -10] [-10, 0] [0, 10] [10, 20] [20, 30] Input SI-SNR (dB)\n40\n20\n0\n20\nOn -s\ncr ee\nn SI\n-S NR\n(d B)\nUnsupervised Semi-supervised\n(a) On-screen reconstruction performance in terms of SI-SNR for on-screen MoMs, for each input SI-SNR bucket." }, { "heading": "A.7 ATTRIBUTIONS", "text": "Images in figures are resized stills with or without overlaid attention maps from the following videos.\nFigure 1 “Whitethroat” by S. Rae CC-BY 2.0 Figure 2 “Luchador and Yellow Jumpsuit” by tenaciousme CC-BY 2.0 Figure 4 “Video of six hands on the piano” by superhua CC-BY 2.0 “Small waterfall” by Jay Tamboli CC-BY 2.0 “Roval lap” by mcipseric CC-BY 2.0 “photo” by Lentini CC-BY 2.0 “IMG_0202 (2 of 14)” by Kerry Goodwin Photography CC-BY 2.0 “Archive Video 7: Party in my tummy.” by danoxster CC-BY-SA 2.0 “Untitled” by Jacob Davies CC-BY-SA 2.0 Figure 5 “Steve’s Bobber finished , test ride Video !” by REDMAXSPEEDSHOP.COM CC-BY-SA 2.0 “Natural Resources Program” by Kentuckyguard CC-BY 2.0 “Ray and Nana” by spilltojill CC-BY-SA 2.0 “HDV_0083” by winsors CC-BY 2.0 “Somewhere Over the Rainbow” by Mikol CC-BY-SA 2.0 “IMG_2797” by Lentini CC-BY 2.0 Figure 6 “MOV04180” by mike_troiano CC-BY 2.0 “Video of him playing drums” superhua CC-BY 2.0 “Day 08 - Killarney” by brandonzeman CC-BY-SA 2.0 “Jenner” by Mr. Gunn CC-BY-SA 2.0 “Ray - 6 months” by spilltojill CC-BY-SA 2.0 “Hedwig’s Theme” by Mikol CC-BY-SA 2.0 “5-20 (6)” by nmuster1 CC-BY 2.0 Figure 7 “Bretagne” by MadMonday CC-BY 2.0 “Untitled” by Sack-Sama CC-BY-SA 2.0 “Natural Resources Program” by Kentuckyguard CC-BY 2.0 “Archive Video 7: Party in my tummy.” by danoxster CC-BY-SA 2.0 “Video of Micah singing \"Old MacDonald\"” by superhua CC-BY 2.0" } ]
2021
null
SP:958f2aacb0790ffe7399fd918c023c7e4e4c314c
[ "The paper is generally well presented. However, a main issue is that the optimization algorithms for the l0-norm regularized problems (Section 3.1.2 and Section 3.2) are not correctly presented. Specifically, in the algorithm development to solve the \"Fix $\\boldsymbol{R}$, optimize $\\boldsymbol{Y}$\" subproblem, it overlooks the coupling/interaction between the variables $y_1, y_2, \\dots,y_M$ and mistakenly obtains a closed-form solution. See Comment 1 for details." ]
Deep models have achieved great success in many applications. However, vanilla deep models are not well-designed against the input perturbation. In this work, we take an initial step to design a simple robust layer as a lightweight plug-in for vanilla deep models. To achieve this goal, we first propose a fast sparse coding and dictionary learning algorithm for sparse coding problem with an exact k-sparse constraint or l0 norm regularization. Our method comes with a closedform approximation for the sparse coding phase by taking advantage of a novel structured dictionary. With this handy approximation, we propose a simple sparse denoising layer (SDL) as a lightweight robust plug-in. Extensive experiments on both classification and reinforcement learning tasks manifest the effectiveness of our methods.
[]
[ { "authors": [ "Chenglong Bao", "Jian-Feng Cai", "Hui Ji" ], "title": "Fast sparsity-based orthogonal dictionary learning for image restoration", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2013 }, { "authors": [ "Jonathan T Barron" ], "title": "A general and adaptive robust loss function", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jean Bourgain", "Alexey" ], "title": "A Glibichuk, and SERGEI VLADIMIROVICH KONYAGIN. Estimates for the number of sums and products and for exponential sums in fields of prime order", "venue": "Journal of the London Mathematical Society,", "year": 2006 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Michael Elad", "Michal Aharon" ], "title": "Image denoising via sparse and redundant representations over learned dictionaries", "venue": "IEEE Transactions on Image processing,", "year": 2006 }, { "authors": [ "Gamaleldin Elsayed", "Dilip Krishnan", "Hossein Mobahi", "Kevin Regan", "Samy Bengio" ], "title": "Large margin deep networks for classification", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yunchao Gong", "Svetlana Lazebnik", "Albert Gordo", "Florent Perronnin" ], "title": "Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2012 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Minghao Guo", "Yuzhe Yang", "Rui Xu", "Ziwei Liu", "Dahua Lin" ], "title": "When nas meets robustness: In search of robust architectures against adversarial attacks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kyle Helfrich", "Devin Willmott", "Qiang Ye" ], "title": "Orthogonal recurrent neural networks with scaled cayley transform", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yueming Lyu" ], "title": "Spherical structured feature maps for kernel approximation", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": null, "year": 2018 }, { "authors": [ "Alexander J Ratner", "Henry Ehrenberg", "Zeshan Hussain", "Jared Dunnmon", "Christopher Ré" ], "title": "Learning to compose domain-specific transformations for data 
augmentation", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ron Rubinstein", "Michael Zibulevsky", "Michael Elad" ], "title": "Efficient implementation of the k-svd algorithm using batch orthogonal matching pursuit", "venue": "Technical report,", "year": 2008 }, { "authors": [ "Peter H Schönemann" ], "title": "A generalized solution of the orthogonal procrustes problem", "venue": null, "year": 1966 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Dong Su", "Huan Zhang", "Hongge Chen", "Jinfeng Yi", "Pin-Yu Chen", "Yupeng Gao" ], "title": "Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Singanallur V Venkatakrishnan", "Charles A Bouman", "Brendt Wohlberg" ], "title": "Plug-and-play priors for model based reconstruction", "venue": "IEEE Global Conference on Signal and Information Processing,", "year": 2013 }, { "authors": [ "Kaixuan Wei", "Angelica Aviles-Rivero", "Jingwei Liang", "Ying Fu", "Carola-Bibiane Schnlieb", "Hua Huang" ], "title": "Tuning-free plug-and-play proximal algorithm for inverse imaging problems", "venue": null, "year": 2020 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 2019 }, { "authors": [ "Kai Zhang", "Wangmeng Zuo", "Yunjin Chen", "Deyu Meng", "Lei Zhang" ], "title": "Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising", "venue": "IEEE Transactions on Image Processing,", "year": 2017 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have obtained a great success in many applications, including computer vision, reinforcement learning (RL) and natural language processing, etc. However, vanilla deep models are not robust to noise perturbations of the input. Even a small perturbation of input data would dramatically harm the prediction performance (Goodfellow et al., 2015).\nTo address this issue, there are three mainstreams of strategies: data argumentation based learning methods (Zheng et al., 2016; Ratner et al., 2017; Madry et al., 2018; Cubuk et al., 2020), loss functions/regularization techniques (Elsayed et al., 2018; Zhang et al., 2019), and importance weighting of network architecture against noisy input perturbation. Su et al. (2018) empirically investigated 18 deep classification models. Their studies found that model architecture is a more critical factor to robustness than the model size. Most recently, Guo et al. (2020) employed a neural architecture search (NAS) method to investigate the robust architectures. However, the NAS-based methods are still very computationally expensive. Furthermore, their resultant model cannot be easily adopted as a plug-in for other vanilla deep models. A handy robust plug-in for backbone models remains highly demanding.\nIn this work, we take an initial step to design a simple robust layer as a lightweight plug-in for the vanilla deep models. To achieve this goal, we first propose a novel fast sparse coding and dictionary learning algorithm. Our algorithm has a closed-form approximation for the sparse coding phase, which is cheap to compute compared with iterative methods in the literature. The closedform update is handy for the situation that needs fast computation, especially in the deep learning. Based on this, we design a very simple sparse denoising layer for deep models. Our SDL is very flexible, and it enables an end-to-end training. Our SDL can be used as a lightweight plug-in for many modern architecture of deep models (e.g., ResNet and DenseDet for classification and deep PPO models for RL). Our contributions are summarized as follows:\n• We propose simple sparse coding and dictionary learning algorithms for both k-sparse constrained sparse coding problem and l0-norm regularized problem. Our algorithms have simple approximation form for the sparse coding phase.\n• We introduce a simple sparse denoising layer (SDL) based on our handy update. Our SDL involves simple operations only, which is a fast plug-in layer for end-to-end training.\n• Extensive experiments on both classification tasks and reinforcement learning tasks show the effectiveness of our SDL." }, { "heading": "2 RELATED WORKS", "text": "Sparse Coding and Dictionary Learning: Sparse coding and dictionary learning are widely studied in computer vision and image processing. One related popular method is K-SVD (Elad & Aharon, 2006; Rubinstein et al., 2008), it jointly learns an over-complete dictionary and the sparse representations by minimizing a l0-norm regularized reconstruction problem. Specifically, K-SVD alternatively iterates between the sparse coding phase and dictionary updating phase. The both steps are based on heuristic greedy methods. Despite its good performance, K-SVD is very computationally demanding. Moreover, as pointed out by Bao et al. (2013), both the sparse coding phase and dictonary updating of K-SVD use some greedy approaches that lack rigorous theoretical guarantee on its optimality and convergence. Bao et al. 
(2013) proposed to learn an orthogonal dictionary instead of the over-complete one. The idea is to concatenate the free parameters with predefined filters to form an orthogonal dictionary. This trick reduces the time complexity compared with KSVD. However, their algorithm relies on the predefined filters. Furthermore, the alternative descent method heavily relies on SVD, which is not easy to extend to deep models.\nIn contrast, our method learns a structured over-complete dictionary, which has a simple form as a layer for deep learning. Recently, some works (Venkatakrishnan et al., 2013) employed deep neural networks to approximate alternating direction method of multipliers (ADMM) or other proximal algorithms for image denoising tasks. In (Wei et al., 2020), reinforcement learning is used to learn the hyperparameters of these deep iterative models. However, this kind of method itself needs to train a complex deep model. Thus, they are computationally expensive, which is too heavy or inflexible as a plug-in layer for backbone models in other tasks instead of image denoising tasks, e.g., reinforcement learning and multi-class classification, etc. An illustration of number of parameters of SDL, DnCNN (Zhang et al., 2017) and PnP (Wei et al., 2020) are shown in Table 1. SDL has much less parameters and simpler structure compared with DnCNN and PnP, and it can serve as a lightweight plug-in for other tasks, e.g., RL.\nRobust Deep Learning: In the literature of robust deep learning, several robust losses have been studied. To achieve better generalization ability, Elsayed et al. (2018) proposed a loss function to impose a large margin of any chosen layers of a deep network. Barron (2019) proposed a general loss with a shape parameter to cover several robust losses as special cases. For the problems with noisy input perturbation, several data argumentation-based algorithms and regularization techniques are proposed (Zheng et al., 2016; Ratner et al., 2017; Cubuk et al., 2020; Elsayed et al., 2018; Zhang et al., 2019). However, the network architecture remains less explored to address the robustness of the input perturbation. Guo et al. (2020) employed NAS methods to search the robust architectures. However, the searching-based method is very computationally expensive. The resultant architectures cannot be easily used as a plug-in for other popular networks. In contrast, our SDL is based on a closed-form of sparse coding, which can be used as a handy plug-in for many backbone models." }, { "heading": "3 FAST SPARSE CODING AND DICTIONARY LEARNING", "text": "In this section, we present our fast sparse coding and dictionary learning algorithm for the k-sparse problem and the l0-norm regularized problem in Section 3.1 and Section 3.2, respectively. Both algorithms belong to the alternative descent optimization framework." }, { "heading": "3.1 K-SPARSE CODING", "text": "We first introduce the optimization problem for sparse coding with a k-sparse constraint. Mathematically, we aim at optimizing the following objective\nmin Y ,D ‖X −DY ‖2F\nsubject to ‖yi‖0 ≤ k, ∀i ∈ {1, · · · , N} (1) µ(D) ≤ λ ‖dj‖2 = 1,∀j ∈ {1, · · · ,M},\nwhere D ∈ Rd×M is the dictionary, and di denotes the ith column of matrix D. yi denotes the ith column of the matrix Y ∈ RM×N , and µ(·) denotes the mutual coherence that is defined as\nµ(D) = max i 6=j |d>i dj | ‖di‖2‖dj‖2 . (2)\nThe optimization problem (1) is discrete and non-convex, which is very difficult to optimize. 
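As a quick aside before introducing our structured construction, the mutual coherence in Eq.(2) is straightforward to evaluate numerically. The following minimal NumPy sketch is ours and is for illustration only; the function name and the random dictionary are not part of the method.

import numpy as np

def mutual_coherence(D):
    # mu(D) = max_{i != j} |d_i^T d_j| / (||d_i||_2 ||d_j||_2), as in Eq.(2)
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize columns
    G = np.abs(Dn.T @ Dn)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # exclude the i == j terms
    return G.max()

# example: an unstructured random dictionary with d = 12, M = 28
rng = np.random.default_rng(0)
print(mutual_coherence(rng.standard_normal((12, 28))))

In our approach this quantity is controlled by construction rather than tuned afterwards, as described next.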
To alleviate this problem, we employ a structured dictionary as\nD = R>B. (3)\nWe require that R>R = RR> = Id and BB> = Id, and each column vector of matrix B has a constant l2-norm, i.e., ‖bi‖2 = c. The benefit of the structured dictionary is that it enables a fast update algorithm with a closed-form approximation for the sparse coding phase." }, { "heading": "3.1.1 CONSTRUCTION OF STRUCTURED MATRIX B", "text": "Now, we show how to design a structured matrix B that satisfies the requirements. First, we construct B by concatenating the real and imaginary parts of rows of a discrete Fourier matrix. The proof of the following theorems regarding the properties of B can be found in Appendix.\nWithout loss of generality, we assume that d = 2m,M = 2n. Let F ∈ Cn×n be an n × n discrete Fourier matrix. Fk,j = e 2πikj n is the (k, j)thentry of F , where i = √ −1. Let Λ = {k1, k2, ..., km} ⊂ {1, ..., n− 1} be a subset of indexes. The structured matrix B can be constructed as Eq.(4).\nB = 1√ n [ ReFΛ −ImFΛ ImFΛ ReFΛ ] ∈ Rd×N (4)\nwhere Re and Im denote the real and imaginary parts of a complex number, and FΛ in Eq. (5) is the matrix constructed by m rows of F\nFΛ= e 2πik11 n · · · e 2πik1n n ... . . .\n... e 2πikm1 n · · · e 2πikmn n ∈ Cm×n. (5) Proposition 1. Suppose d = 2m,M = 2n. Construct matrix B as in Eq.(4). Then BB> = Id and ‖bj‖2 = √ m n , ∀j ∈ {1, · · · ,M}.\nTheorem 1 shows that the structured construction B satisfies the orthogonal constraint and constant norm constraint. One thing remaining is how to construct B to achieve a small mutual coherence.\nTo achieve this goal, we can leverage the coordinate descent method in (Lyu, 2017) to construct the index set Λ. For a prime number n such that m divides n−1, i.e., m|(n − 1), we can employ a closed-form construction. Let g denote a primitive root modulo n. We construct the index Λ = {k1, k2, ..., km} as\nΛ = {g0, g n−1 m , g 2(n−1) m , · · · , g (m−1)(n−1) m } mod n. (6)\nThe resulted structured matrix B has a bounded mutual coherence, which is shown in Theorem 1. Theorem 1. Suppose d = 2m,M = 2n, and n is a prime such that m|(n − 1). Construct matrix B as in Eq.(4) with index set Λ as Eq.(6). Let mutual coherence µ(B) := maxi 6=j |b>i bj | ‖bi‖2‖bj‖2 . Then µ(B) ≤ √ n m .\nRemark: The bound of mutual coherence in Theorem 1 is non-trivial when n < m2. For the case n ≥ m2, we can use the coordinate descent method in (Lyu, 2017) to minimize the mutual coherence.\nNow, we show that the structured dictionary D = R>B satisfies the constant norm constraint and has a bounded mutual coherence. The results are summarized in Theorem 1.\nCorollary 1. Let D = R>B with R>R = RR> = Id. Construct matrix B as in Eq.(4) with index set Λ as Eq.(6). Then µ(D) = µ(B) ≤ √ n m and ‖dj‖2 = ‖bj‖2 = √ m n , ∀j ∈ {1, · · · ,M}.\nCorollary 1 shows that, for any orthogonal matrix R, each column vector of the structured dictionary D has a constant l2-norm. Moreover, it remains a constant mutual coherence µ(D) = µ(B). Thus, given a fixed matrix B, we only need to learn matrix R for the dictionary learning without undermining the low mutual coherence property." }, { "heading": "3.1.2 JOINT OPTIMIZATION FOR DICTIONARY LEARNING AND SPARSE CODING", "text": "With the structured matrix B, we can jointly optimize R and Y for the optimization problem (7).\nmin Y ,R ‖X −R>BY ‖2F\nsubject to ‖yi‖0 ≤ k,∀i ∈ {1, · · · , N} (7) R>R = RR> = Id\nThis problem can be solved by the alternative descent method. 
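Before detailing the two alternating updates, the following minimal NumPy sketch (ours, not the authors' code) builds the structured matrix B of Eq.(4) with the index set Λ of Eq.(6) for a small example with m = 3 and n = 7 (so d = 6 and M = 14; 3 divides 7 − 1 and g = 3 is a primitive root modulo 7), and numerically checks Proposition 1 and the coherence bound of Theorem 1.

import numpy as np

def structured_B(m, n, g):
    # index set Lambda of Eq.(6): powers of the primitive root g, spaced by (n-1)/m
    Lam = np.array([pow(g, c * (n - 1) // m, n) for c in range(m)])
    cols = np.arange(1, n + 1)
    F_Lam = np.exp(2j * np.pi * np.outer(Lam, cols) / n)   # m x n block of the DFT matrix
    # Eq.(4): stack real and imaginary parts, scaled by 1/sqrt(n)
    return np.block([[F_Lam.real, -F_Lam.imag],
                     [F_Lam.imag,  F_Lam.real]]) / np.sqrt(n)

m, n, g = 3, 7, 3
B = structured_B(m, n, g)
d = 2 * m
print(np.allclose(B @ B.T, np.eye(d)))                         # Proposition 1: B B^T = I_d
print(np.allclose(np.linalg.norm(B, axis=0), np.sqrt(m / n)))  # column norms sqrt(m/n)
Bn = B / np.linalg.norm(B, axis=0, keepdims=True)
G = np.abs(Bn.T @ Bn); np.fill_diagonal(G, 0.0)
print(G.max() <= np.sqrt(n) / m)                               # Theorem 1: mu(B) <= sqrt(n)/m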
For a fixed R, we show the sparse representation Y has a closed-form approximation thanks to the structured dictionary. For the fixed sparse codes Y , dictionary parameter R has a closed-form solution.\nFix R, optimize Y : Since the constraints of Y is column separable, i.e., ‖yi‖0 ≤ k, and the objective (8) is also decomposable,\n‖X −R>BY ‖2F = N∑ i=1 ‖xi −R>Byi‖2F . (8)\nIt is sufficient to optimize the sparse code yi ∈ RM for each point xi ∈ Rd separately. Without loss of generality, for any input x ∈ Rd, we aim at finding the optimal sparse code y ∈ RM such that ‖y‖0 ≤ k. Since R>R = RR>=Id and BB> = Id , we have\n‖x−R>By‖22 = ‖Rx−RR>By‖22 = ‖Rx−By‖22 = ‖BB>Rx−By‖22 = ‖B(B>Rx− y)‖22 = ‖B(h− y)‖22. (9)\nwhere h = B>Rx is a dense code. Case 1: When m = n (the columns of B are orthogonal), we can rewrite Eq.(9) into a summation form as\n‖B(h− y)‖22 = M∑ j=1 (hj − yj)2‖bj‖22. (10)\nCase 2: When m < n, we have an error-bounded approximation using R.H.S. in Eq.(10). Let z = h− y, we have\n∣∣‖Bz‖22 − M∑ j=1 z2j ‖bj‖22 ∣∣ = ∣∣ M∑ i=1 M∑ j=1,j 6=i zizjb > i bj ∣∣ (11) ≤\nM∑ i=1 M∑ j=1,j 6=i |zizj |‖bi‖2‖bj‖2µ(B) (12)\n= M∑ i=1 M∑ j=1,j 6=i |zizj | · m n · µ(B) (13)\nIt is worth to note that the error bound is small when the mutual coherence µ(B) is small. When we employ the structural matrix in Theorem 1. It follows that∣∣‖Bz‖22 − M∑\nj=1\nz2j ‖bj‖22 ∣∣ ≤ M∑\ni=1 M∑ j=1,j 6=i |zizj | · m n ·min( √ n m , 1) (14)\n= C M∑ i=1 M∑ j=1,j 6=i |zizj | (15)\n= C M∑ i=1 M∑ j=1,j 6=i |hi − yi||hj − yj | (16)\nwhere C = min( 1√ n , mn ). In Eq.(14), we use µ(B) ≤ √ n m from Theorem 1.\nConsidering the sparse constraint ‖y‖0 ≤ k, the error bound is minimized when all the non-zero term yj = hj to get |yj − hj | = 0. Let S denote the set of index of non-zero element yj of y. Now the problem is to find the index set S to minimize\nM∑ i=1 M∑ j=1,j 6=i |hi − yi||hj − yj | = ∑ i∈Sc ∑ j∈Sc,j 6=i |hi||hj | (17)\nwhere Sc denotes the complement set of S. We can see that Eq.(17) is minimized when S consists of the index of the k largest (in absolute value) elements of h.\nNow, we consider ∑M j=1 z 2 j ‖bj‖22. Note that ‖bj‖22 = mn , it follows that\nM∑ j=1 z2j ‖bj‖22 = m n M∑ j=1 (hj − yj)2. (18)\nBecause each term (hj − yj)2 ≥ 0 is minimized when yj = hj , we know that Eq.(18) under sparse constraints is minimized when all the non-zero term setting as yj = hj . Otherwise we can set a non-zero term yj to yj = hj to further reduce term (hj − yj)2 to zero. Now, the problem is to find the index set of the non-zero term to minimize Eq.(19).\nM∑ j=1 (hj − yj)2 = M∑ j=1 h2j − ∑ i∈S,|S|≤k h2i (19)\nwhere S := {j|yj 6= 0}. We can see that Eq.(19) is minimized when S consists of the index of the k largest (in absolute value ) elements of h.\nRemark: Both the approximation ∑M j=1 z 2 j ‖bj‖22 and the error bound is minimized by the same solution.\nFix Y , Optimize R : For a fixed Y , we know that ‖X −R>BY ‖2F = ‖X‖2F + ‖BY ‖2F − 2tr(R>BY X>) (20)\nThis is the nearest orthogonal matrix problem, which has a closed-form solution as shown in (Schönemann, 1966; Gong et al., 2012). Let BY X> = UΓV > obtained by singular value decomposition (SVD), where U ,V are orthgonal matrix. Then, Eq.(20) is minimized by\nR = UV > (21)\n3.2 l0-NORM REGULARIZATION\nWe employ the structured dictionary D = R>B same as in Section 3.1. The optimization problem with l0-norm regularization is defined as\nmin Y ,R ‖X −R>BY ‖2F + λ‖Y ‖0\nsubject to R>R = RR> = Id (22)\nThis problem can be solved by the alternative descent method. 
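Putting the k-sparse updates together, here is a minimal sketch (ours) of the full alternating scheme for problem (7) on toy data. It reuses the structured B constructed above, keeps the k largest entries of h = B^T R x in the sparse coding step, and applies the Procrustes solution of Eq.(21) in the dictionary step. The l0-regularized variant derived next differs only in how the entries of h are thresholded.

import numpy as np

# structured B as in the previous sketch (m = 3, n = 7, g = 3, so d = 6, M = 14)
m, n, g = 3, 7, 3
Lam = np.array([pow(g, c * (n - 1) // m, n) for c in range(m)])
F_Lam = np.exp(2j * np.pi * np.outer(Lam, np.arange(1, n + 1)) / n)
B = np.block([[F_Lam.real, -F_Lam.imag], [F_Lam.imag, F_Lam.real]]) / np.sqrt(n)

def k_sparse_codes(R, B, X, k):
    # sparse coding step: keep the k largest-magnitude entries of each column of H = B^T R X
    H = B.T @ R @ X
    Y = np.zeros_like(H)
    idx = np.argpartition(np.abs(H), -k, axis=0)[-k:]
    np.put_along_axis(Y, idx, np.take_along_axis(H, idx, axis=0), axis=0)
    return Y

def update_R(B, Y, X):
    # dictionary step, Eq.(21): nearest orthogonal matrix via SVD of B Y X^T
    U, _, Vt = np.linalg.svd(B @ Y @ X.T)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((2 * m, 100))       # 100 toy signals in R^6
R = np.eye(2 * m)
for _ in range(20):                         # alternate the two closed-form updates
    Y = k_sparse_codes(R, B, X, k=2)
    R = update_R(B, Y, X)
print(np.linalg.norm(X - R.T @ B @ Y) / np.linalg.norm(X))  # relative reconstruction error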
For a fixed R, we show Y has a closed-form approximation thanks to the structured dictionary. For fixed the sparse codes Y , dictionary parameter R also has a closed-form solution.\nFix R, optimize Y : Since the objective can be rewritten as Eq.(23)\n‖X −R>BY ‖2F + λ‖Y ‖0 = N∑ i=1 ‖xi −R>Byi‖2F + λ‖yi‖0. (23)\nIt is sufficient to optimize Yi for each point Xi separately. Without loss of generality, for any input x ∈ Rd, we aim at finding an optimal sparse code y ∈ RM . Since R>R = RR> = Id and BB> = Id , when m = n, following the derivation in Section 3.1.2, we have ‖x−R>By‖2F + λ‖y‖0 = ‖B(h− y)‖2F + λ‖y‖0, (24) where h = B>Rx is a dense code. Note that ‖bj‖22 = mn , together with Eq.(24), it follows that\n‖B(h− y)‖2F + λ‖y‖0 = m\nn M∑ j=1 (hj − yj)2 + nλ m 1[yj 6= 0] . (25) where 1[·] is an indicator function which is 1 if its argument is true and 0 otherwise. This problem is separable for each variable yj , and each term is minimized by setting\nyj = { hj if h2j ≥ nλm 0 otherwise . (26)\nFix Y , update R: For a fixed Y , minimizing the objective leads to the same nearest orthogonal matrix problem as shown in Section 3.1.2. Let BY X> = UΓV > obtained by SVD, where U ,V are orthogonal matrix. Then, the reconstruction problem is minimized by R = UV >.\nRemark: Problems with other separable regularization terms can be solved in a similar way. The key difference is how to achieve sparse codes y. For example, for l1-norm regularized problems, y can be obtained by a soft thresholding function, i.e., y = sign(y) max ( 0, |y| − nλ/(2m) ) ." }, { "heading": "4 SPARSE DENOISING LAYER", "text": "One benefit of our fast sparse coding algorithm is that it enables a simple closed-form reconstruction, which can be used as a plug-in layer for deep neural networks. Specifically, given an orthogonal matrix R and input vector x, the optimal reconstruction of our method can be expressed as\nx̃ = R>Bf(B>Rx) , (27)\nwhere f(·) is a non-linear mapping function. For the k-sparse constrained problem, f(·) is a k-max pooling function (w.r.t the absolute value) as Eq.(28)\nf(hj) = { hj if |hj | is one of the k-highest values of |h| ∈ RM 0 otherwise . (28)\nFor the l0-norm regularization problem, f(·) is a hard thresholding function as Eq.(29)\nf(hj) =\n{ hj if |hj | ≥ √ nλ m\n0 otherwise . (29)\nFor the l1-norm regularization problem, f(·) is a soft thresholding function as Eq.(30) f(hj) = sign(hj)×max ( 0, |hj | − nλ\n2m\n) , (30)\nwhere sign(·) denotes the Sign function. The reconstruction in Eq.(27) can be used as a simple plug-in layer for deep networks, we named it as sparse denoising layer (SDL). It is worth noting that only the orthogonal matrix R is needed to learn. The structured matrix B is constructed as in Section 3.1.1 and fixed.\nThe orthogonal matrix R can be parameterized by exponential mapping or Cayley mapping (Helfrich et al., 2018) of a skew-symmetric matrix. In this work, we employ the Cayley mapping to enable gradient update using deep learning tools. Specifically, the orthogonal matrix R can be obtained by the Cayley mapping of a skew-symmetric matrix as\nR = (I + W )(I −W )−1, (31)\nwhere W is a skew-symmetric matrix, i.e., W = −W> ∈ Rd×d. For a skew-symmetric matrix W , only the upper triangular matrix (without main diagonal) are free parameters. Thus, the number of free parameters of SDL is d(d − 1)/2, which is much smaller compared with the number of parameters of backbone deep networks.\nFor training a network with a SDL, we add a reconstruction loss term as a regularization. 
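Before stating this training objective, the following minimal NumPy sketch (ours; it mirrors the PyTorch implementation in Appendix I) shows how the Cayley mapping of Eq.(31) turns the free upper-triangular parameters of a skew-symmetric W into the orthogonal matrix R used by the SDL.

import numpy as np

def cayley(W):
    # Eq.(31): R = (I + W)(I - W)^{-1}, orthogonal whenever W is skew-symmetric
    d = W.shape[0]
    return (np.eye(d) + W) @ np.linalg.inv(np.eye(d) - W)

d = 12
rng = np.random.default_rng(0)
W = np.triu(rng.standard_normal((d, d)), k=1)   # d(d-1)/2 free parameters
W = W - W.T                                     # make W skew-symmetric
R = cayley(W)
print(np.allclose(R.T @ R, np.eye(d)), np.allclose(R @ R.T, np.eye(d)))  # True True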
The optimization problem is defined as\nmin W∈Rd×d,Θ\n`(X;W ,Θ) + β‖Z̃ −Z‖2F , (32)\nwhere W is a skew-symmetric matrix parameter of SDL. Θ is the parameter of the backbone network. Z̃ is the reconstruction of the latent representation Z via SDL (Eq.(27)). An illustration of the SDL plug-in is shown in Figure 1. When SDL is used as the first layer, then Z = X , and Z̃ = X̃ . In this case, Z̃ is the reconstruction of the input data X .\nIt is worth noting that the shape of the input and output of SDL are same. Thus, SDL can be used as plug-in for any backbone models without changing the input/output shape of different layers in the backbone network. With the simple SDL plug-in, backbone models can be trained from scratches." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate the performance of our SDL on both classification tasks and RL tasks. For classification, we employ both DenseNet-100 (Huang et al., 2017) and ResNet-34 (He et al., 2016) as backbone. For RL tasks, we employ deep PPO models1 (Schulman et al., 2017) as backbone. For all the tasks, we test the performance of backbone models with and without our SDL when adding Gaussian noise or Laplace noise. In all the experiments, we plug SDL as the first layer of deep models. We set the\n1https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/\nstandard deviation of input noise as {0, 0.1, 0.2, 0.3}, respectively. (The input noise is added after input normalization) . We keep all the hyperparameters of the backbone models same, the only difference is whether plugging SDL. The parameter β for the reconstruction loss is fixed as β = 100 in all the experiments. Classification Tasks: We test SDL on CIFAR10 and CIFAR100 dataset. We construct the structured matrix B ∈ R12×14 by Eq.(4). In this setting, the orthogonal matrix R corresponds to the convolution parameter of Conv2d(·) with kernelsize=2 × 2. We set the sparse parameter of our k-sparse SDL as k = 3 in all the classification experiments. The average test accuracy over five independent runs on CIFAR10 and CIFAR100 with Gaussian noise are shown in Fig. 3 and Fig. 4, respectively. We can observe that models with SDL obtain a similar performance compared with vanilla model on the clean input case. With an increasing variance of input noise, models with SDL outperform the vanilla models more and more significantly. The experimental results with Laplace noise are presented in Fig. 9 and Fig. 10 in the supplementary. The results on Laplace noise cases show the similar trends with Gaussian noise cases. We further test the performance of SDL under the fast gradient sign method (FGSM) attack (Goodfellow et al., 2015). The perturbation parameter epsilon is set to 8/256. Experimental results are shown in Fig. 2. We can observe that adding SDL plug-in can improve the adversarial robustness of the backbone models. More experiments on tiny-Imagenet dataset can be found in Appendix G.\nRL Tasks: We test deep PPO model with SDL on Atari games: KungFuMaster, Tennis and Seaquest. The deep PPO model concatenates four frames as the input state. The size of input\nstate is 84× 84× 4. We construct the structured matrix B ∈ R36×38 by Eq.(4). In this setting, the orthogonal matrix R corresponds to the convolution parameter of Conv2d(·) with kernelsize=3×3. The number of free parameters is 630. We set the sparse parameter of our k-sparse SDL as k = 4 in all the RL experiments.\nThe return of one episode is the sum of rewards over all steps during the whole episode. 
We present the average return over five independent runs on KungFuMaster and Tennis game with Gaussian noise and Laplace noise in Fig. 5 and Fig. 6, respectively. Results on Seaquest game are shown in Fig. 16 in the supplement due to the space limitation. We can see that models with SDL achieve a competitive average return on the clean cases. Moreover, models with SDL obtain higher average return than vanilla models when the input state is perturbed with noise." }, { "heading": "6 CONCLUSION", "text": "We proposed fast sparse coding algorithms for both k-sparse problem and l0-norm regularization problems. Our algorithms have a simple closed-form update. We proposed a sparse denoising layer as a lightweight plug-in for backbone models against noisy input perturbation based on this handy closed-form. Experiments on both ResNet/DenseNet classification model and deep PPO RL model showed the effeteness of our SDL against noisy input perturbation and adversarial perturbation." }, { "heading": "A PROOF OF PROPOSITION 1", "text": "Proof. Let ci ∈ C1×n be the ith row of matrix FΛ ∈ Cm×n in Eq.(5). Let vi ∈ R1×2n be the ith row of matrix B ∈ R2m×2n in Eq.(4). For 1 ≤ i, j ≤ m, i 6= j, we know that\nviv > i+m = 0, (33) vi+mv > j+m = viv > j = Re(cic ∗ j ), (34) vi+mv > j = −viv>j+m = Im(cic∗j ), (35)\nwhere ∗ denotes the complex conjugate, Re(·) and Im(·) denote the real and imaginary parts of the input complex number.\nFor a discrete Fourier matrix F , we know that\ncic ∗ j =\n1\nn n−1∑ k=0 e 2π(i−j)ki n = { 1, if i = j 0, otherwise\n(36)\nWhen i 6= j, from Eq.(36), we know cic∗j = 0. Thus, we have\nvi+mv > j+m = viv > j = Re(cic ∗ j ) = 0, (37) vi+mv > j = −viv>j+m = Im(cic∗j ) = 0, (38)\nWhen i = j, we know that vi+mv>i+m = viv > i = cic ∗ i = 1.\nPut two cases together, also note that d = 2m, we have BB> = Id.\nThe l2-norm of the column vector of B is given as\n‖bj‖22 = 1\nn m∑ i=1 ( sin2 2πkij n + cos2 2πkij n ) = m n (39)\nThus, we have ‖bj‖2 = √ m n for j ∈ {1, · · · ,M}" }, { "heading": "B PROOF OF THEOREM 1", "text": "Proof. Let ci ∈ Cm×1 be the ith column of matrix FΛ ∈ Cm×n in Eq.(5). Let bi ∈ R2m×1 be the ith column of matrix B ∈ R2m×2n in Eq.(4). For 1 ≤ i, j ≤ n, i 6= j, we know that\nb>i bi+n = 0, (40) b>i+nbj+n = b > i bj = Re(c ∗ i cj), (41) b>i+nbj = −b>i bj+n = Im(c∗i cj), (42)\nwhere ∗ denotes the complex conjugate, Re(·) and Im(·) denote the real and imaginary parts of the input complex number.\nIt follows that\nµ(B) ≤= max 1≤k,r≤2n,k 6=r |b>k br| ≤ max 1≤i,j≤n,i 6=j |c∗i cj | = µ(FΛ) (43)\nFrom the definition of FΛ in Eq.(5), we know that\nµ(FΛ) = max 1≤i,j≤n,i6=j |c∗i cj | = max 1≤i,j≤n,i6=j\n1\nm ∣∣∣∣∣∑ z∈Λ e 2πiz(j−i) n ∣∣∣∣∣ (44) = max\n1≤k≤n−1\n1\nm ∣∣∣∣∣∑ z∈Λ e 2πizk n ∣∣∣∣∣ (45)\nBecause Λ = {g0, g n−1m , g 2(n−1) m , · · · , g (m−1)(n−1) m } mod n , we know that Λ is a subgroup of the multiplicative group {g0, g1, · · · , gn−2} mod n. From Bourgain et al. (2006), we know that\nmax 1≤k≤n−1 ∣∣∣∣∣∑ z∈Λ e 2πizk n ∣∣∣∣∣ ≤ √n (46) Finally, we know that\nµ(B) ≤ µ(FΛ) ≤ √ n\nm . (47)" }, { "heading": "C PROOF OF COROLLARY 1", "text": "Proof. Since R>R = RR>=Id and D = R>B, we know that ‖dj‖2 = ‖bj‖2. From Theorem 1, we know that ‖bj‖2 = √ m n , ∀j ∈ {1, · · · ,M}. It follows that ‖dj‖2 = ‖bj‖2 = √ m n for ∀j ∈ {1, · · · ,M}. From the definition of mutual coherence µ(·), we know it is rotation invariant. Since D = R>B with R>R = RR>=Id, we know µ(D) = µ(B). From Theorem 1, we have µ(B) ≤ √ n m . Thus, we obtain µ(D) = µ(B) ≤ √ n m ." 
}, { "heading": "D EMPIRICAL CONVERGENCE OF OBJECTIVE FUNCTIONS", "text": "We test our fast dictionary learning algorithms on Lena with image patches (size 12×12). We present the empirical convergence result of our fast algorithms in Figure 7. It shows that the objective tends to converge less than fifty iterations." }, { "heading": "E DEMO OF DENOISED IMAGES", "text": "We show the denoised results of our fast sparse coding algorithm on some widely used testing images. The input images are perturbed by Gaussian noise with std σ = 100. The denoised results are presented in Figure 8. It shows that our algorithms can reduce the influence of the noisy perturbation of the images." }, { "heading": "F EXPERIMENTAL RESULTS ON CLASSIFICATION WITH LAPLACE NOISE", "text": "" }, { "heading": "G EXPERIMENTAL RESULTS ON TINY-IMAGENET DATASET", "text": "" }, { "heading": "H RESULTS OF RL ON SEAQUEST GAME", "text": "I PYTORCH IMPLEMENTATION OF THE SDL LAYER\nclass SparseDenoisingLayer(nn.Module): def __init__(self, sparseK, B,n):\nsuper(SparseDenoisingLayer, self).__init__() self.ksize = 2 # kernelsize of Conv2d self.channel = 3 # channel of input outplanes = self.channel*self.ksize*self.ksize self.B = torch.from_numpy(B).float().cuda() self.n = n\nself.outplanes = outplanes self.sparseK = sparseK\nself.register_parameter(name=’U’, param=torch.nn.Parameter(torch. randn(outplanes,outplanes). cuda() ) )\ndef forward(self, x): # Cayley Mapping to compute orthogonal matrix R KA = torch.triu(self.U,diagonal=1 ) tmpA = KA - KA.t() tmpB = torch.eye(self.outplanes,self.outplanes).cuda()-tmpA KU = torch.mm( (torch.eye(self.outplanes,self.outplanes).cuda()+\ntmpA ) , torch.inverse( tmpB ) ) # orthogonal matrix R\nweight = KU.view(self.outplanes,self.channel,self.ksize,self. ksize) #Reshape into the Conv2d parameter out = F.conv2d(x,weight, stride=1, padding = self.ksize-1)\nout = out.permute(0,2,3,1) out = torch.matmul(out,self.B)\n# (function f) k-max pooling w.r.t the absolute value index = torch.abs(out).topk(self.sparseK, dim = 3) mask = torch.zeros(out.shape).cuda() mask.scatter_(3, index[1], 1.) out = out* mask\n# # out = torch.matmul(out,torch.transpose(self.B, 0, 1)) out = out.permute(0,3,1,2) out = F.conv_transpose2d(out,weight, stride=1, padding = self.\nksize-1 )/(self.ksize*self. ksize)\nreturn out # reconstruction of the input" } ]
2020
A SIMPLE SPARSE DENOISING LAYER FOR ROBUST DEEP LEARNING
SP:33673a515722e1d8288fd3014e7db507b7250b20
[ "The paper under review proposes a new model for multi-dimensional temporal Point processes, allowing efficient estimation of high order interactions. This new model, called additive Poisson process, relies on a log-linear structure of the intensity function that is motivated thanks to the Kolmogorov-Arnold theorem. Such structure is then linked to generalized additive models, a result that is used to devise an efficient estimation procedure with formal guarantees of convergence." ]
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in point processes using lower dimensional projections. Our model combines the techniques in information geometry to model higher-order interactions on a statistical manifold and in generalized additive models to use lower-dimensional projections to overcome the effects from the curse of dimensionality. Our approach solves a convex optimization problem by minimizing the KL divergence from a sample distribution in lower dimensional projections to the distribution modeled by an intensity function in the point process. Our empirical results show that our model is able to use samples observed in the lower dimensional space to estimate the higher-order intensity function with extremely sparse observations.
[]
[ { "authors": [ "Alan Agresti" ], "title": "Categorical Data Analysis", "venue": "Wiley, 3 edition,", "year": 2012 }, { "authors": [ "S. Amari" ], "title": "Information geometry on hierarchy of probability distributions", "venue": "IEEE Transactions on Information Theory,", "year": 2001 }, { "authors": [ "Shun-Ichi Amari" ], "title": "Natural gradient works efficiently in learning", "venue": "Neural Computation,", "year": 1998 }, { "authors": [ "A. Barron", "N. Hengartner" ], "title": "Information theory and superefficiency", "venue": "The Annals of Statistics,", "year": 1998 }, { "authors": [ "Jürgen Braun" ], "title": "An application of Kolmogorov’s superposition theorem to function reconstruction in higher dimensions", "venue": "PhD thesis, Universitäts-und Landesbibliothek Bonn,", "year": 2009 }, { "authors": [ "Jürgen Braun", "Michael Griebel" ], "title": "On a constructive proof of Kolmogorov’s superposition theorem", "venue": "Constructive Approximation,", "year": 2009 }, { "authors": [ "Andreas Buja", "Trevor Hastie", "Robert Tibshirani" ], "title": "Linear smoothers and additive models", "venue": "The Annals of Statistics,", "year": 1989 }, { "authors": [ "Daryl J Daley", "David Vere-Jones" ], "title": "An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure", "venue": null, "year": 2007 }, { "authors": [ "Brian A Davey", "Hilary A Priestley" ], "title": "Introduction to Lattices and Order", "venue": null, "year": 2002 }, { "authors": [ "Seth Flaxman", "Yee Whye Teh", "Dino Sejdinovic" ], "title": "Poisson intensity estimation with reproducing kernels", "venue": "Electronic Journal of Statistics,", "year": 2017 }, { "authors": [ "Jerome H Friedman", "Werner Stuetzle" ], "title": "Projection pursuit regression", "venue": "Journal of the American Statistical Association,", "year": 1981 }, { "authors": [ "Deniz Ilalan" ], "title": "A poisson process with random intensity for modeling financial stability", "venue": "The Spanish Review of Financial Economics,", "year": 2016 }, { "authors": [ "Andrei Nikolaevich Kolmogorov" ], "title": "On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition", "venue": "Doklady Akademii Nauk,", "year": 1957 }, { "authors": [ "Athanasios Kottas" ], "title": "Dirichlet process mixtures of beta distributions, with applications to density and intensity estimation", "venue": "In Workshop on Learning with Nonparametric Bayesian Methods, 23rd International Conference on Machine Learning (ICML),", "year": 2006 }, { "authors": [ "Simon Luo", "Mahito Sugiyama" ], "title": "Bias-variance trade-off in hierarchical probabilistic models using higher-order feature interactions", "venue": "In Proceedings of the 33rd AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Radford M Neal" ], "title": "Regression and classification using gaussian process priors", "venue": "Bayesian Statistics,", "year": 1999 }, { "authors": [ "Yosihiko Ogata" ], "title": "On Lewis’ simulation method for point processes", "venue": "IEEE Transactions on Information Theory,", "year": 1981 }, { "authors": [ "Murray Rosenblatt" ], "title": "Remarks on some nonparametric estimates of a density function", "venue": "The Annals of Mathematical Statistics,", "year": 1956 }, { "authors": [ "H Schäbe" ], "title": "Nonparametric estimation of intensities of nonhomogeneous poisson processes", "venue": "Statistical Papers,", "year": 1993 }, { "authors": [ "David W Scott" 
], "title": "Multivariate density estimation: theory, practice, and visualization", "venue": null, "year": 2015 }, { "authors": [ "Mahito Sugiyama", "Hiroyuki Nakahara", "Koji Tsuda" ], "title": "Under review as a conference paper at ICLR", "venue": null, "year": 2021 }, { "authors": [ "Mahito Sugiyama", "Hiroyuki Nakahara", "Koji Tsuda" ], "title": "Tensor balancing on statistical manifold", "venue": "IEEE International Symposium on Information Theory,", "year": 2016 }, { "authors": [ "Mahito Sugiyama", "Hiroyuki Nakahara", "Koji Tsuda" ], "title": "Legendre decomposition for tensors", "venue": "Machine Learning Research,", "year": 2017 } ]
[ { "heading": null, "text": "We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in point processes using lower dimensional projections. Our model combines the techniques in information geometry to model higher-order interactions on a statistical manifold and in generalized additive models to use lower-dimensional projections to overcome the effects from the curse of dimensionality. Our approach solves a convex optimization problem by minimizing the KL divergence from a sample distribution in lower dimensional projections to the distribution modeled by an intensity function in the point process. Our empirical results show that our model is able to use samples observed in the lower dimensional space to estimate the higher-order intensity function with extremely sparse observations." }, { "heading": "1 INTRODUCTION", "text": "Consider two point processes which are correlated with arrival times for an event. For a given time interval, what is the probability of observing an event from both processes? Can we learn the joint intensity function by just using the observations from each individual processes? Our proposed model, the Additive Poisson Process (APP), provides a novel solution to this problem.\nThe Poisson process is a counting process used in a wide range of disciplines such as time-space sequence data including transportation (Zhou et al., 2018), finance (Ilalan, 2016), ecology (Thompson, 1955), and violent crime (Taddy, 2010) to model the arrival times for a single system by learning an intensity function. For a given time interval of the intensity function, it represents the probability of a point being excited at a given time. Despite the recent advances of modeling of the Poisson processes and its wide applicability, majority of the point processes model do not consider the correlation between two or more point processes. Our proposed approach learns the joint intensity function of the point process which is defined to be the simultaneous occurrence of two events.\nFor example in a spatial-temporal problem, we want to learn the intensity function for a taxi to pick up customers at a given time and location. For this problem, each point is multi-dimensional, that is (x, y, t)Ni=1, where a pair of x and y represents two spatial dimensions and t represents the time dimension. For any given location or time, we can only expect very few pick-up events occurring, therefore making it difficult for any model to learn the low valued intensity function.\nPrevious approaches such as Kernel density estimation (KDE) (Rosenblatt, 1956) are able to learn the joint intensity function. However, KDE suffers from the curse of dimensionality, which means that KDE requires a large size sample or a high intensity function to build an accurate model. In addition, the complexity of the model expands exponentially with respect to the number of dimensions, which makes it infeasible to compute. Bayesian approaches such as using a mixture of beta distributions with a Dirichlet prior (Kottas, 2006) and Reproducing Kernel Hilbert Space (RKHS) (Flaxman et al., 2017) have been proposed to quantify the uncertainty with a prior for the intensity function. However, these approaches are often non-convex, making it difficult to obtain the global optimal solution. 
In addition, if observations are sparse, it is hard for these approaches to learn a reasonable intensity function.\nAll previous models are unable to efficiently and accurately learn the intensity of the interaction between point processes. This is because the intensity of the joint process is often low, leading to sparse samples or, in an extreme case, no direct observations of the simultaneous event at all, making it difficult to learn the intensity function from the joint samples. In this paper, we propose a novel framework to learn the higher-order interaction effects of intensity functions in point processes. Our model combines the techniques introduced by Luo & Sugiyama (2019) to model higher-order interactions between point processes and by Friedman & Stuetzle (1981) in generalized additive models to learn the intensity function using samples in a lower dimensional space. Our proposed approach is to decompose a multi-dimensional point process into lower-dimensional representations. For example, in the x-dimension we have points (xi)Ni=1, in the y-dimension, we have points (yi) N i=1 and in the time dimension we have (ti)Ni=1. The data in these lower dimensional space can be used to improve the estimate of the joint intensity function. This is different from the traditional approach where we only use the simultaneous events to learn the joint intensity function.\nWe first show the connection between generalized additive models and Poisson processes. We then provide the connection between generalized additive models and the log-linear model (Agresti, 2012), which has a well-established theoretical background in information geometry (Amari, 2016). We draw parallels between the formulation of the generalized additive models and the binary loglinear model on a partially ordered set (poset) (Sugiyama et al., 2017). The learning process in our model is formulated as a convex optimization problem to arrive at a unique optimal solution using natural gradient, which minimizes the Kullback-Leibler (KL) divergence from the sample distribution in a lower dimensional space to the distribution modeled by the learned intensity function. This connection provides remarkable properties to our model: the ability to learn higher-order intensity functions using lower dimensional projections, thanks to the Kolmogorov-Arnold representation theorem. This property makes it advantageous to use our proposed approach for the cases where there are, no observations, missing samples, or low event rate. Our model is flexible because it can capture interaction between processes as a partial order structure in the log-linear model and the parameters of the model are fully customizable to meet the requirements for the application. Our empirical results show that our model effectively uses samples projected onto a lower dimensional space to estimate the higher-order intensity function. Our model is also robust to various sample sizes." }, { "heading": "2 FORMULATION", "text": "In this section we first introduce the technical background in the Poisson process and its extension to a multi-dimensional Poisson process. We then introduce the Generalized Additive Model (GAM) and its connection to the Poisson process. This is followed by presenting our novel framework, called Additive Poisson Process (APP), which is our main technical contribution and has a tight link to the Poisson process modelled by GAMs. 
We show that learning of APP can be achieved via convex optimization using natural gradient.\nThe Poisson process is characterized by an intensity function λ:RD → R, where we assume multiple D processes. An inhomogeneous Poisson process is a general type of processes, where the arrival intensity changes with time. The process with time-changing intensity λ(t) is defined as a counting process N(t), which has an independent increment property. For all time t ≥ 0 and changes in time δ ≥ 0, the probability p for the observations is given as p(N(t+ δ)−N(t) = 0) = 1− δλ(t) + o(δ), p(N(t + δ)− N(t) = 1) = δλ(t) + o(δ), and p(N(t + δ)− N(t) ≥ 2) = o(δ), where o(·) denotes little-o notation (Daley & Vere-Jones, 2007). Given a realization of timestamps t1, t2, . . . , tN with ti ∈ [0, T ]D from an inhomogeneous (multi-dimensional) Poisson process with the intensity λ. Each ti is the time of occurrence for the i-th event across D processes and T is the observation duration. The likelihood for the Poisson process (Daley & Vere-Jones, 2007) is given by\np ( {ti}Ni=1 | λ (t) ) = exp ( − ∫ λ (t) dt ) N∏ i=1 λ (ti) , (1)\nwhere t = [t(1), . . . , t(D)] ∈ RD. We define the functional prior on λ(t) as\nλ(t) := g (f(t)) = exp (f(t)) . (2)\nThe function g(·) is a positive function to guarantee the non-negativity of the intensity which we choose to be the exponential function, and our objective is to learn the function f(·). The log-\nlikelihood of the multi-dimensional Poisson process with the functional prior is described as log p ( {ti}Ni=1 | λ (t) ) = N∑ i=1 f (ti)− ∫ exp (f (t)) dt. (3)\nIn the following sections, we introduce generalized additive models and propose to model it by the log-linear model to learn f(t) and the normalizing term." }, { "heading": "2.1 GENERALIZED ADDITIVE MODEL", "text": "In this section we present the connection between Poisson processes with Generalized Additive Model (GAM) proposed by Friedman & Stuetzle (1981). GAM projects higher-dimensional features into lower-dimensional space to apply smoothing functions to build a restricted class of non-parametric regression models. GAM is less affected by the curse of dimensionality compared to directly using smoothing in a higher-dimensional space. For a given set of processes J ⊆ [D] = {1, . . . , D}, the traditional GAM using one-dimensional projections is defined as log λJ(t) = ∑ j∈J fj(t (j))− βJ with some smoothing function fj .\nIn this paper, we extend it to include higher-order interactions between features in GAM. The k-th order GAM is defined as log λJ(t) = ∑ j∈J f{j}(t (j)) + ∑ j1,j2∈J f{j1,j2}(t (j1), t(j2)) + · · ·+ ∑ j1,...,jk∈J f{j1,...,jk}(t (j1), . . . , t(jk))− βJ\n= ∑\nI⊆J, |I|≤k\nfI(t (I))− βJ , (4)\nwhere t(I) ∈ R|I| denotes the subvector (t(j))j∈I of t with respect to I ⊆ [D]. The function fI : R|I| → R is a smoothing function to fit the data, and the normalization constant βJ for the intensity function is obtained as βJ = ∫ λJ(t)dt = ∫ exp( ∑ I⊆J, |I|≤k fI(t\n(I)) )dt. The definition of the additive model is in the same form as Equation (3). In particular, if we compare Equation (3) and (4), we can see that the smoothing function f in (3) corresponds to the right-hand side of (4).\nLearning of a continuous function using lower dimensional projections is well known because of the Kolmogorov-Arnold representation theorem, which states as follows: Theorem 1 (Kolmogorov–Arnold Representation Theorem (Braun & Griebel, 2009; Kolmogorov, 1957)). 
Any multivariate continuous function can be represented as a superposition of one–dimensional functions, i.e., f (t1, . . . , tn) = ∑2n+1 q=1 fq (∑n p=1 gq,p (tp) ) .\nBraun (2009) showed that the GAM is an approximation to the general form presented in Kolmogorov-Arnold representation theorem by replacing the range q ∈ {1, . . . , 2n + 1} with I ⊆ J and the inner function gq,p by the identity if q = p and zero otherwise, yielding f(t) = ∑ I⊆J fI(t (I)).\nInterestingly, the canonical form for additive models in Equation (4) can be rearranged to be in the same form as Kolmogorov-Arnold representation theorem. By letting f(t) = ∑ I⊆J fI(t (I)) = g−1(λ(t)) and g(·) = exp(·), we have\nλJ(t) = 1\nexp (βJ) exp (∑ I⊆J fI ( t(I) )) ∝ exp (∑ I⊆J fI ( t(I) )) , (5)\nwhere we assume fI(t(I)) = 0 if |I| > k for the k-th order model and 1/ exp(βJ) is the normalization term for the intensity function. Based on the Kolmogorov-Arnold representation theorem, generalized additive models are able to learn the intensity of the higher-order interaction between point processes by using projections into lower dimensional space. The log-likelihood function for a kth-order model is obtained by substituting the Equation (4) into Equation (1),\nlog p ( {t}Ni=1|λ (t) ) = N∑ i=1 exp ∑ I⊆J, |I|≤k fI ( t(I) )− β′,\nwhere is a constant given by β′ = ∫ λ(t)dt + ∑ I⊆J βJ . In the following section we will detail a log-linear formulation that efficiently maximizes this log-likelihood equation." }, { "heading": "2.2 ADDITIVE POISSON PROCESS", "text": "We introduce our key technical contribution in this section, the log-linear formulation of the additive Poisson process, and draw parallels between higher-order interactions in the log-linear model and the lower dimensional projections in generalized additive models. In the following, we discretize the time window [0, T ] into M bins and treat each bin as a natural number τ ∈ [M ] = {1, 2, . . . ,M} for each process. We assume that M is predetermined by the user. First we introduce a structured space for the Poisson process to incorporate interactions between processes. Let Ω = { (J, τ)|J ∈ 2[D] \\ ∅, τ ∈ [M ] } ∪ {(⊥, 0)}. We define the partial order (Davey & Priestley, 2002) on Ω as\n(J, τ) (J ′, τ ′) ⇐⇒ J ⊆ J ′ and τ ≤ τ ′, for each ω = (J, τ), ω′ = (J ′, τ ′) ∈ Ω, (6)\nand (⊥, 0) (J, τ) for all (J, τ) ∈ Ω, which is illustrated in Figure 1. The relation J ⊆ J ′ is used to model any-order interactions between point processes (Luo & Sugiyama, 2019) (Amari, 2016, Section 6.8.4) and each τ in (J, τ) represents “time” in our model with ⊥ denoting the least element in the partial order structure. Note that the domain of τ can be generalized from [M ] to [M ]D to take different time stamps into account, while in the following we assume that observed time stamps are always the same across processes for simplicity. Our experiments in the next section demonstrates that we can still accurately estimate the\ndensity of processes. Our model can be applied to not only time-series data but any sequential data.\nOn any set equipped with a partial order, we can introduce a log-linear model (Sugiyama et al., 2016; 2017). Given a parameter domain S ⊆ Ω. For a partially ordered set (Ω, ), the log-linear model with parameters (θs)s∈S is introduced as\nlog p(ω; θ) = ∑\ns∈S 1[s ω]θs − ψ(θ) (7)\nfor each ω ∈ Ω, where 1[·] = 1 if the statement in [·] is true and 0 otherwise, and ψ(θ) ∈ R is the partition function uniquely obtained as ψ(θ) = log ∑ ω∈Ω exp( ∑ s∈S 1[s ω]θs ) = −θ(⊥,0). 
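As a concrete illustration (a minimal sketch under our own encoding; the variable names and toy sizes below are not from the paper), the poset of Eq.(6) and the log-linear model of Eq.(7) can be written down directly for a small example with D = 2 processes and M = 4 bins.

import itertools
import numpy as np

D, M = 2, 4
bot = (frozenset(), 0)                              # the least element (bottom, 0)
Omega = [bot] + [(frozenset(J), t)
                 for r in range(1, D + 1)
                 for J in itertools.combinations(range(1, D + 1), r)
                 for t in range(1, M + 1)]

def leq(s, w):
    # partial order of Eq.(6): (J, tau) <= (J', tau') iff J is a subset of J' and tau <= tau'
    return s[0] <= w[0] and s[1] <= w[1]

def log_p(theta):
    # Eq.(7): log p(omega) = sum_{s in S: s <= omega} theta_s - psi(theta)
    score = np.array([sum(t for s, t in theta.items() if leq(s, w)) for w in Omega])
    return score - np.log(np.exp(score).sum())

S = [w for w in Omega if w != bot and len(w[0]) <= 1]   # k = 1: parameters on singleton J only
theta = {s: 0.1 for s in S}
print(np.exp(log_p(theta)).sum())                       # 1.0: a proper distribution over Omega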
A special case of this formulation coincides with the density function of the Boltzmann machines (Sugiyama et al., 2018; Luo & Sugiyama, 2019).\nHere we have a clear correspondence between the log-linear formulation and that in the form of Kolmogorov-Arnold representation theorem in Equation (5) if we rewrite Equation (7) as\np(ω; θ) = 1\nexpψ(θ) exp (∑ s∈S 1[s ω]θs ) ∝ exp (∑ s∈S 1[s ω]θs ) . (8)\nWe call this model with (Ω, ) defined in Equation (6) the additive Poisson process, which represents the intensity λ as the joint distribution across all possible states. The intensity λ of the multi-dimensional Poisson process given via the GAM in Equation (5) is fully modeled (parameterized) by Equation (7) and each intensity fI(·) is obtained as θ(I,·). To consider the k-th order model, we consistently use the parameter domain S given as S = { (J, τ) ∈ Ω | |J | ≤ k }, where k is an input parameter to the model that specifies the upper bound of the order of interactions. This means that θs = 0 for all s /∈ S. Note that our model is well-defined for any subset S ⊆ Ω and the user can use arbitrary domain in applications.\nFor a given J and each bin τ with ω = (J, τ), the empirical probability p̂(ω), which corresponds to the input observation, is given as\np̂(ω) = 1\nZ ∑ I⊆J σI(τ ), Z = ∑ ω∈Ω p̂(ω), and σI(τ ) := 1 NhI N∑ i=1 K\n( τ (I) − t(I)i\nhI\n) (9)\nfor each discretized state ω = (J, τ), where τ = (τ, . . . , τ) ∈ RD. The function σI performs smoothing on time stamps t1, . . . , tN , which is the kernel smoother proposed by Buja et al. (1989). The functionK is a kernel and hI is the bandwidth for each projection I ⊆ [D]. We use the Gaussian kernel as K to ensure that probability is always nonzero, meaning that the definition of the kernel smoother coincides with the kernel estimator of the intensity function proposed by Schäbe (1993)." }, { "heading": "2.3 OPTIMIZATION", "text": "Given an empirical distribution p̂ defined in Equation (9), the task is to learn the parameter (θs)s∈S such that the distribution via the log-linear model in Equation (7) is close to p̂ as much as possible. Let us define SS = {p | θs = 0 if s 6∈ S}, which is the set of distributions that can be represented by the log-linear model using the parameter domain S. Then the objective function is given as minp∈SS DKL(p̂, p), where DKL(p̂, p) = ∑ ω∈Ω p̂ log(p̂/p) is the KL divergence from p̂ to p. In this optimization, let p∗ be the learned distribution from the sample with infinitely large sample size and p be the learned distribution for each sample. Then we can lower bound the uncertainty (variance) E[DKL(p∗, p)] by |S|/2N (Barron & Hengartner, 1998). Thanks to the well developed theory of information geometry (Amari, 2016) for the log-linear model (Amari, 2001), it is known that this problem can be solved by e-projection, which coincides with the maximum likelihood estimation, and it is always convex optimization (Amari, 2016, Chapter 2.8.3). The gradient with respect to each parameter θs is obtained by (∂/∂θs)DKL(p̂, p) = ηs − η̂s, where ηs = ∑ ω∈Ω 1[ω s]p(ω). The value ηs is known as the expectation parameter (Sugiyama et al., 2017) and η̂s is obtained by replacing p with p̂ in the above equation. If η̂s = 0 for some s ∈ S, we remove s from S to ensure that the model is well-defined. Let S = {s1, . . . , s|S|} and θ = [θs1 , . . . , θs|S| ]T , η = [ηs1 , . . . , ηs|S| ]T . 
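Continuing the toy sketch above (the randomized p_hat below is only a stand-in for the smoothed empirical distribution of Eq.(9)), the expectation parameters and the resulting gradient can be computed directly; plain gradient descent on them already performs the e-projection, and the natural-gradient refinement is described next.

def eta_vec(prob, S):
    # eta_s = sum over omega >= s of prob(omega)
    return np.array([sum(p_w for w, p_w in zip(Omega, prob) if leq(s, w)) for s in S])

rng = np.random.default_rng(0)
p_hat = rng.random(len(Omega)) + 0.5
p_hat /= p_hat.sum()                         # stand-in for the empirical distribution of Eq.(9)
theta = {s: 0.0 for s in S}
for _ in range(2000):                        # plain gradient descent on D_KL(p_hat, p)
    p = np.exp(log_p(theta))
    grad = eta_vec(p, S) - eta_vec(p_hat, S) # eta_s - eta_hat_s
    for s, g_s in zip(S, grad):
        theta[s] -= 0.2 * g_s
print(np.abs(eta_vec(np.exp(log_p(theta)), S) - eta_vec(p_hat, S)).max())  # near zero at the e-projection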
We can always use the natural gradient (Amari, 1998) as the closed form solution of the Fisher information matrix is always available (Sugiyama et al., 2017). The update step is θnext = θ −G−1(η − η̂), where the Fisher information matrix G is obtained as\ngij = ∂\n∂θsi∂θsj DKL(p̂, p) = ∑ ω∈Ω 1[ω si]1[ω sj ]p(ω)− ηsiηsj . (10)\nAlgorithm 1 Additive Poisson Process (APP) 1: Function APP({ti}Ni=1, S, M , h): 2: Initialize Ω with the number M of bins 3: Apply Gaussian Kernel with bandwidth h\non {ti}Ni=1 to compute p̂ 4: Compute η̂ = (η̂s)s∈S from p̂ 5: Initialize θ = (θs)s∈S (randomly or θs = 0)\n6: repeat 7: Compute p using the current θ =\n(θs)s∈S 8: Compute η = (ηs)s∈S from p 9: ∆η ← η − η̂\n10: Compute the Fisher information matrix G using Equation (10) 11: θ ← θ −G−1∆η 12: until convergence of θ = (θs)s∈S 13: End Function\nTheoretically the Fisher information matrix is numerically stable to perform a matrix inversion. However, computationally floating point errors may cause the matrix to become indefinite. To overcome this issue, a small positive value is added along the main diagonal of the matrix. This technique is known as jitter and it is used in areas like Gaussian processes to ensure that the covariance matrix is computationally positive semidefinite (Neal, 1999).\nThe pseudocode for APP is shown in Algorithm 1. The time complexity of computing line 7 is O(|Ω||S|). This means when implementing the model using gradient descent, the time complexity of the model is O(|Ω||S|2) to update the parameters in S for each iteration. For natural gradient the cost of inverting the Fisher information matrix G is O(|S|3), therefore the time complexity to update the parameters in S is O(|S|3 + |Ω||S|) for each iteration. The time complexity for natural gradient is significantly higher to invert the fisher information matrix, if the number of parameter is small, it is more efficient to use natural gradient because it requires significantly less iterations. However, if the number of parameters is large, it is more efficient to use gradient descent." }, { "heading": "3 EXPERIMENTS", "text": "We perform experiments using two dimensional synthetic data, higher dimensional synthetic data, and rea-world data to evaluate the performance of our proposed approach. Our code is implemented on Python 3.7.5 with NumPy version 1.8.2 and the experiments are run on Ubuntu 18.04 LTS with an Intel i7-8700 6c/12t with 16GB of memory 1. In experiments of synthetic data, we simulate\n1The code is available in the supplementary material and will be publicly available online after the peer review process.\nrandom events using Equation (1). We generate an intensity function using a mixture of Gaussians, where the mean is drawn from a uniform distribution and the covariance is drawn from an inverted Wishart distribution. The intensity function is then the density function multiplied by the sample size. The synthetic data is generated by directly drawing a sample from the probability density function . An arbitrary number of samples is drawn from the mixture of Gaussians. We then run our models and compare with Kernel Density Estimation (KDE) (Rosenblatt, 1956), an inhomogeneous Poisson process whose intensity is estimated by a reproducing kernel Hilbert space formulation (RKHS) (Flaxman et al., 2017), and a Dirichlet process mixture of Beta distributions (DP-beta) (Kottas, 2006). The hyper-parameters M and h in our proposed model are selected using grid search and cross-validation. 
For situations where a validation set is not available, then h could be selected using a rule of thumb approach such as Scott’s Rule (Scott, 2015) and M could be selected empirically from the input data by computing the time interval of the joint observation." }, { "heading": "3.1 EXPERIMENTS ON TWO-DIMENSIONAL PROCESSES", "text": "For our experiment, we use 20 Gaussian components and simulate a dense case with 100,000 observations and a sparse case with 1,000 observations within the time frame of 10 seconds. We consider that a joint event occurs if the two events occur 0.1 seconds apart. Figure 2a and Figure 2b compares the KL divergence between the first- and second-order models and Figure 3 are the corresponding intensity functions. In the first-order processes, both first- and second-order models have the same performance. This is expected as both of the model can treat first-order interactions and is able to learn the empirical intensity function exactly which is the superposition of the one-dimensional projection of the Gaussian kernels on each observation. For the second-order process, the second-order model performs better than the first-order model because it is able to directly learn the intensity function from the projection onto the two-dimensional space. In contrast, the first-order model must approximate the second-order process using the observations from the first order-processes. In the sparse case, the second-order model performs better when the correct bandwidth is selected.\nTable 1 compares our approach APP with other state-of-the-art approaches. APP performs the best for first-order processes in both the sparse and dense experiments. Experiments for RKHS and DP-beta were unable to complete running within 2 days for the dense experiment. In the secondorder process our approach was outperformed by KDE, while both the second-order APP is able to outperform both RKHS and DP-beta process for both sparse and dense experiments. Figure 2a and Figure 2b show that KDE is sensitive to changes in bandwidth, which means that, for any practical implementation of the model, second-order APP with a less sensitive bandwidth is more likely to learn a more accurate intensity function when the ground truth is unknown." }, { "heading": "3.2 EXPERIMENTS ON HIGHER-DIMENSIONAL PROCESSES", "text": "We generate a fourth-order process to simulate the behaviour of the model in higher dimensions. The model is generalizable to higher dimensions, however it is difficult to demonstrate results for processes higher than fourth-order. For our experiment, we generate an intensity function using 50 Gaussian components and draw a sample with the size of 107 for the dense case and that with the size of 105 for the sparse case. We consider the joint event to be the time frame of 0.1 seconds.\nWe were not able to run comparison experiments with other models because they are unable to learn when there are no or few direct observations in third- and fourth-order processes. In addition, the time complexity is too high to learn from direct observations in first- and second-order processes because all the other models have their time complexity proportional to the number of observations. The time complexity for KDE isO(ND) for the dimensionality withD, while DP-beta isO(N2K), whereK is the number of clusters, and RKHS isO(N2) for each iteration with respect to the sample size N , where DP-beta and RKHS are applied to a single dimension as they cannot directly treat multiple dimensions. 
{ "heading": "3.2 EXPERIMENTS ON HIGHER-DIMENSIONAL PROCESSES", "text": "We generate a fourth-order process to simulate the behaviour of the model in higher dimensions. The model generalizes to higher dimensions, but it is difficult to demonstrate results for processes of order higher than four. For this experiment, we generate an intensity function using 50 Gaussian components and draw a sample of size 10^7 for the dense case and of size 10^5 for the sparse case. As before, we consider a joint event to occur when events fall within a time frame of 0.1 seconds.\nWe were not able to run comparison experiments with the other models because they are unable to learn when there are no or few direct observations in the third- and fourth-order processes. In addition, their time complexity is too high to learn from direct observations in the first- and second-order processes, because all the other models have time complexity proportional to the number of observations. The time complexity of KDE is O(ND) for dimensionality D, while DP-beta is O(N²K), where K is the number of clusters, and RKHS is O(N²) per iteration with respect to the sample size N; DP-beta and RKHS are applied to a single dimension as they cannot directly treat multiple dimensions. KDE is able to estimate the intensity function when there are no direct observations of the simultaneous event; however, it was too computationally expensive to finish running this experiment. In contrast, our model is more efficient because its time complexity is proportional to the number of bins: the time complexity of APP is O(|Ω||S|) per iteration, where |Ω| = M^D and |S| = Σ_{c=1}^{k} (D choose c). Our model therefore scales combinatorially with respect to the number of dimensions. However, this is unavoidable for any model that directly takes the higher-order interactions into account. For practical applications, the number of dimensions D and the order of the model k are often small, making the computation feasible.\nIn Figure 4a we observe similar behaviour of the model, where the first-order processes fit precisely to the empirical distribution generated by the Gaussian kernels. The third-order model is able to perform better on the fourth-order process than the fourth-order model. This is because the observations shown in Figure 5a are largely sparse, and learning from the observations directly may overfit; a lower-dimensional approximation by the third-order model is therefore able to provide a better result. Similar trends can be seen in the sparse case shown in Figure 4b, where a second-order model produces better estimates for the third- and fourth-order processes. The observations are extremely sparse, as seen in Figure 5b, where there are only a few observations, or no observations at all, from which to learn the intensity function.\nTable 2: Negative test log-likelihood for the New York Taxi data. Single processes ([T] and [W]) and their joint process ([T,W]). APP-# denotes the order of the Additive Poisson Process.\nProcess APP-1 APP-2 KDE RKHS DP-beta\nJan [T] 714.07 714.07 713.77 728.13 731.01\nJan [W] 745.60 745.60 745.23 853.42 790.04\nJan [T,W] 249.60 246.05 380.22 259.29 260.30\nFeb [T] 713.43 713.43 755.71 795.61 765.76\nFeb [W] 738.66 738.66 773.65 811.34 792.10\nFeb [T,W] 328.84 244.21 307.86 334.31 326.52\nMar [T] 716.72 716.72 733.74 755.48 741.28\nMar [W] 738.06 738.06 816.99 853.33 832.43\nMar [T,W] 291.20 246.19 289.69 328.47 300.36" }, { "heading": "3.3 UNCOVERING COMMON PATTERNS IN THE NEW YORK TAXI DATASET", "text": "We demonstrate the capability of our model on the 2016 Green Taxi Trip dataset2. We are interested in finding the common pick-up patterns across Tuesdays and Wednesdays. We define a common pick-up time as two pick-ups, one on each day, that occur within 1 minute of each other. We learn an intensity function for the Tuesday process, one for the Wednesday process, and one for the joint process of the two; the joint process uncovers the common pick-up patterns between the two days. We use the first two weeks of Tuesdays and Wednesdays in January 2016 as our training and validation sets and the Tuesday and Wednesday of the third week of January 2016 as our testing set. We repeat the same experiment for February and March.\nWe show our results in Table 2, where we use the negative test log-likelihood as the evaluation measure (the sketch below illustrates how this quantity is computed for an estimated intensity function). APP-2 consistently outperforms all the other approaches for the joint process between Tuesday and Wednesday. In addition, for the individual processes, APP-1 and APP-2 also show the best results for February and March. These results demonstrate the effectiveness of our model in capturing higher-order interactions between processes, which is difficult for the other existing approaches."
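As referenced above, the negative test log-likelihood of an inhomogeneous Poisson process with estimated intensity λ(t) over an observation window is ∫ λ(t) dt − Σ_i log λ(t_i). A minimal sketch of this evaluation (numerical integration on a grid; the exact binning and protocol used for Table 2 are not specified here, so treat the details as assumptions):

```python
import numpy as np

def poisson_nll(event_times, intensity, t_start, t_end, n_grid=10_000):
    """Negative log-likelihood of `event_times` under intensity(t) on [t_start, t_end]."""
    grid = np.linspace(t_start, t_end, n_grid)
    integral = np.trapz(intensity(grid), grid)          # compensator term: integral of lambda
    log_term = np.log(intensity(np.asarray(event_times)) + 1e-12).sum()
    return integral - log_term
```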
}, { "heading": "4 CONCLUSION", "text": "We have proposed a novel framework, called Additive Poisson Process (APP), to learn the intensity function of the higher-order interaction between point processes using samples from lower dimensional projections. We formulated our proposed model using the the log-linear model and optimize it using information geometric structure of the distribution space. We drew parallels between our proposed model and generalized additive model and showed the ability to learn from lower dimensional projections via the Kolmogorov-Arnold representation theorem. Our empirical results show the superiority of our method when learning the higher-order interactions between point processes when there are no or extremely sparse direct observations, and our model is also robust to varying sample sizes. Our approach provides a novel formulation to learn the joint intensity function which typically has extremely low intensity. There is enormous potential to apply APP to real-world applications, where higher order interaction effects need to be model such as in transportation, finance, ecology, and violent crimes.\n2https://data.cityofnewyork.us/Transportation/2016-Green-Taxi-Trip-Data/hvrh-b6nb" }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "A.1 BANDWIDTH SENSITIVITY ANALYSIS\nOur first experiment is to demonstrate the ability for our proposed model to learn an intensity function from samples. We generate a Bernoulli process with probably of p = 0.1 to generate samples for every 1 seconds for 100 seconds to create a toy problem for our model. This experiment is to observe the behaviour of varying the bandwidth in our model. In Figure 6a, we observe that applying no kernel, we learn the deltas of each individual observation. When we apply a Gaussian kernel, the output of the model for the intensity function is much more smooth. Increasing the bandwidth of the kernel will provide a wider and much smoother function. Between the 60 seconds and 80 seconds mark, it can be seen when two observations have overlapping kernels, the intensity function becomes larger in magnitude.\nA.2 ONE DIMENSIONAL POISSON PROCESS\nA one dimensional experiment is simulated using Ogata’s thinning algorithm (Ogata, 1981). We generate two experiments use the standard sinusoidal benchmark intensity function with a frequency of 20π. The dense experiment has troughs with 0 intensity and peaks at 201 and the sparse experiment has troughs with 0 intensity and peaks at 2. Figure 6d shows the experimental results of the dense case, our model has no problem learning the intensity function. We compare our results using KL divergence between the underlying intensity function used to generate the samples to the intensity function generated by the model. Figure 6b shows that the optimal bandwidth is h = 1.\nAlgorithm 2 Thinning Algorithm for non-homogenous Poisson Process 1: Function Thinning Algorithm (λ (t), T ): 2: n = m = 0, t0 = s0 = 0, λ̄ = sup0≤t≤Tλ (t) 3: repeat 4: u ∼ uniform (0, 1) 5: w = − 1\nλ̄ lnu {w ∼ exponential(λ̄)}\n6: sm+1 = sm + w 7: D ∼ uniform (0, 1) 8: if D ≤ λ(sm+1)\nλ̄ then\n9: tn+1 = sm+1 10: n = n+ 1 11: else 12: m = m+ 1 13: end if 14: if tn ≤ T then 15: return {tk}k=1,2,...,n 16: else 17: return {tk}k=1,2,...,n−1 18: end if 19: until sm ≤ T 20: End Function\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 6 10 − 5 10 − 4 10 − 3\nKL Divergence\nPr oc\ne : [ 1]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 
00\nKe rn\nel B an\ndw id th\n10 − 6 10 − 5 10 − 4 10 − 3\nKL Divergence\nPr oc\ne : [ 2]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 6 10 − 5 10 − 4 10 − 3\nKL Divergence\nPr oc\ne : [ 3]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 6 10 − 5 10 − 4 10 − 3\nKL Divergence\nPr oc\ne : [ 4]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 4 10 − 3 10 − 2 KL Divergence\nPr oc\ne : [ 1, 2 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 4 10 − 3 10 − 2 KL Divergence\nPr oc\ne : [ 1, 3 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 4 10 − 3 10 − 2\nKL Divergence\nPr oc\ne : [ 1, 4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 4 10 − 3 10 − 2\nKL Divergence\nPr oc\ne : [ 2, 3 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 4 10 − 3 10 − 2\nKL Divergence\nPr oc\ne : [ 2, 4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 4 10 − 3 10 − 2 KL Divergence\nPr oc\ne : [ 3, 4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 3 10 − 2\nKL Divergence\nPr oc\ne : [ 1, 2 , 3\n]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 3 10 − 2 10 − 1\nKL Divergence\nPr oc\ne : [ 1, 2 , 4\n]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 3 10 − 2 10 − 1\nKL Divergence\nPr oc\ne : [ 1, 3 , 4\n]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 3 10 − 2 10 − 1\nKL Divergence\nPr oc\ne : [ 2, 3 , 4\n]\nO rd\ner : 1\nO rd\ner : 2\nO rd\ner : 3\nO rd\ner : 4\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 2 10 − 1\nKL Divergence\nPr oc\ne : [ 1, 2 , 3\n, 4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B an\ndw id th\n10 − 2 10 − 1 10 0\nKL Divergence\nTo ta l K\nL D iv er ge\nnc e\n(a )D\nen se\nob se\nrv at\nio ns\n.\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 4 10 − 3\nKL Divergence\nPr oc\nes s:\n[ 1]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 4 10 − 3 KL Divergence\nPr oc\nes s:\n[ 2]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 4 10 − 3 KL Divergence\nPr oc\nes s:\n[ 3]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 4 10 − 3 KL Divergence\nPr oc\nes s:\n[ 4]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 3 10 − 2\nKL Divergence\nPr oc\nes s:\n[ 1,\n2 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 3 10 − 2\nKL Divergence\nPr oc\nes s:\n[ 1,\n3 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 3 10 − 2 KL Divergence\nPr oc\nes s:\n[ 1,\n4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 3 10 − 2 KL Divergence\nPr oc\nes s:\n[ 2,\n3 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 3 10 − 2 KL Divergence\nPr oc\nes s:\n[ 2,\n4 ]\n0. 00\n0. 
25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 2\nKL Divergence\nPr oc\nes s:\n[ 3,\n4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 2 10 − 1\nKL Divergence\nPr oc\nes s:\n[ 1,\n2 , 3\n]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 1\nKL Divergence\nPr oc\nes s:\n[ 1,\n2 , 4\n]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 2 10 − 1\nKL Divergence\nPr oc\nes s:\n[ 1,\n3 , 4\n]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 1 KL Divergence\nPr oc\nes s:\n[ 2,\n3 , 4\n]\nO rd\ner : 1\nO rd\ner : 2\nO rd\ner : 3\nO rd\ner : 4\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 1 10 0 KL Divergence\nPr oc\nes s:\n[ 1,\n2 , 3\n, 4 ]\n0. 00\n0. 25\n0. 50\n0. 75\n1. 00\n1. 25\n1. 50\n1. 75\n2. 00\nKe rn\nel B\nan dw\nid h\n10 − 1 10 0\nKL Divergence\nTo a\nl K L\nD iv\ner ge\nnc e\n(b )S\npa rs\ne ob\nse rv\nat io\nns .\nFi gu\nre 7:\nK L\nD iv\ner ge\nnc e\nfo rf\nou r-\nor de\nrP oi\nss on\npr oc\nes s.\n0 2\n4 6\n8 10\nTi m e\n0\n10 00\n00 Intensity Pr oc\nes s: [ 1]\n0 2\n4 6\n8 10\nTi m e\n0\n10 00\n00\nIntensity\nPr oc\nes s: [ 2]\n0 2\n4 6\n8 10\nTi m e\n0\n10 00\n00\nIntensity\nPr oc\nes s: [ 3]\n0 2\n4 6\n8 10\nTi m e\n0\n50 00\n0\n10 00\n00\nIntensity\nPr oc\nes s: [ 4]\n0 2\n4 6\n8 10\nTi m e\n0 20 00 Intensity\nPr oc\nes s: [ 1, 2 ]\n0 2\n4 6\n8 10\nTi m e\n0 20 00 40 00 Intensity\nPr oc\nes s: [ 1, 3 ]\n0 2\n4 6\n8 10\nTi m e\n0 10 00 20 00 Intensity\nPr oc\nes s: [ 1, 4 ]\n0 2\n4 6\n8 10\nTi m e\n0 10 00 20 00\nIntensity\nPr oc\nes s: [ 2, 3 ]\n0 2\n4 6\n8 10\nTi m e\n0 10 00 20 00 Intensity\nPr oc\nes s: [ 2, 4 ]\n0 2\n4 6\n8 10\nTi m e\n0 10 00 20 00\nIntensity\nPr oc\nes s: [ 3, 4 ]\n0 2\n4 6\n8 10\nTi m e\n05010 0\nIntensity\nPr oc\nes s: [ 1, 2 , 3\n]\n0 2\n4 6\n8 10\nTi m e\n02550 Intensity\nPr oc\nes s: [ 1, 2 , 4\n]\n0 2\n4 6\n8 10\nTi m e\n050 Intensity\nPr oc\nes s: [ 1, 3 , 4\n]\n0 2\n4 6\n8 10\nTi m e\n050 Intensity\nPr oc\nes s: [ 2, 3 , 4\n]\n0 2\n4 6\n8 10\nTi m e\n024 Intensity\nPr oc\nes s: [ 1, 2 , 3\n, 4 ]\nO rd er : 1 O rd er : 2 O rd er : 3\nO rd er : 4 G ro un\nd Tr ut h\n(a )D\nen se\nob se\nrv at\nio ns\n.\n0 2\n4 6\n8 10\nTi m e\n0 10 00 Intensity\nPr oc\nes s: [ 1]\n0 2\n4 6\n8 10\nTi m e\n0 10 00 Intensity\nPr oc\nes s: [ 2]\n0 2\n4 6\n8 10\nTi m e\n0 10 00 Intensity\nPr oc\nes s: [ 3]\n0 2\n4 6\n8 10\nTi m e\n0 50 0 10 00 Intensity\nPr oc\nes s: [ 4]\n0 2\n4 6\n8 10\nTi m e\n020 Intensity\nPr oc\nes s: [ 1, 2 ]\n0 2\n4 6\n8 10\nTi m e\n02040 Intensity\nPr oc\nes s: [ 1, 3 ]\n0 2\n4 6\n8 10\nTi m e\n01020 Intensity\nPr oc\nes s: [ 1, 4 ]\n0 2\n4 6\n8 10\nTi m e\n01020 Intensity\nPr oc\nes s: [ 2, 3 ]\n0 2\n4 6\n8 10\nTi m e\n01020 Intensity\nPr oc\nes s: [ 2, 4 ]\n0 2\n4 6\n8 10\nTi m e\n01020 Intensity\nPr oc\nes s: [ 3, 4 ]\n0 2\n4 6\n8 10\nTi m e\n0. 0 0. 5 Intensity Pr oc\nes s: [ 1, 2 , 3\n]\n0 2\n4 6\n8 10\nTi m e\n0. 0 0. 2 0. 4 Intensity\nPr oc\nes s: [ 1, 2 , 4\n]\n0 2\n4 6\n8 10\nTi m e\n0. 00 0. 25 0. 50 Intensity\nPr oc\nes s: [ 1, 3 , 4\n]\n0 2\n4 6\n8 10\nTi m e\n0. 0 0. 5 1. 0\nIntensity\nPr oc\nes s: [ 2, 3 , 4\n]\n0 2\n4 6\n8 10\nTi m e\n0. 00 0. 
05 Intensity\nPr oc\nes s: [ 1, 2 , 3\n, 4 ]\nO rd er : 1 O rd er : 2 O rd er : 3\nO rd er : 4 G ro un\nd Tr ut h\n(b )S\npa rs\ne ob\nse rv\nat io\nns .\nFi gu\nre 8:\nIn te\nns ity\nfu nc\ntio n\nof hi\ngh er\ndi m\nen si\non al\npr oc\nes se\ns. D\not s\nre pr\nes en\nto bs\ner va\ntio ns\n." } ]
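Algorithm 2 above can be written as a short, runnable routine. The sketch below follows the standard accept-reject (thinning) construction; the example intensity function and the way λ̄ is estimated are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def thinning(intensity, T, lam_bar=None, rng=None):
    """Simulate a non-homogeneous Poisson process on [0, T] via Ogata's thinning."""
    rng = np.random.default_rng() if rng is None else rng
    if lam_bar is None:
        # crude estimate of sup_{0 <= t <= T} lambda(t) on a grid
        lam_bar = float(np.max(intensity(np.linspace(0.0, T, 1000))))
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_bar)          # candidate inter-arrival time
        if t > T:
            break
        if rng.uniform() <= intensity(t) / lam_bar:  # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

# Example: a sinusoidal intensity on a 100-second window (parameters are illustrative)
events = thinning(lambda t: 10.0 * (1.0 + np.sin(0.2 * np.pi * t)), T=100.0)
```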
2,020
null
SP:e6e46c0563e852189839b2f923788165800a0f17
[ "This paper provides an approach for treatment effect estimation when the observational data is longitudinal (with irregular time stamps) and consists of temporal confounding variables. The proposed method can be categorized under the matching methods, in which, in order to estimate the counterfactual outcomes, a subset of the subjects in the opposite treatment arm (i.e., contributors) is selected and weighted. The proposed method is designed such that it achieves explainability (by identifying a few contributors) and trustworthiness (by checking if the estimated outcome is reliable)." ]
Estimating causal treatment effects using observational data is a problem with few solutions when the confounder has a temporal structure, e.g. the history of disease progression might impact both treatment decisions and clinical outcomes. For such a challenging problem, it is desirable for the method to be transparent — the ability to pinpoint a small subset of data points that contribute most to the estimate and to clearly indicate whether the estimate is reliable or not. This paper develops a new method, SyncTwin, to overcome temporal confounding in a transparent way. SyncTwin estimates the treatment effect of a target individual by comparing the outcome with its synthetic twin, which is constructed to closely match the target in the representation of the temporal confounders. SyncTwin achieves transparency by enforcing the synthetic twin to only depend on the weighted combination of few other individuals in the dataset. Moreover, the quality of the synthetic twin can be assessed by a performance metric, which also indicates the reliability of the estimated treatment effect. Experiments demonstrate that SyncTwin outperforms the benchmarks in clinical observational studies while still being transparent.
[]
[ { "authors": [ "Alberto Abadie" ], "title": "Using synthetic controls: Feasibility, data requirements, and methodological aspects", "venue": "Journal of Economic Literature,", "year": 2019 }, { "authors": [ "Alberto Abadie", "Javier Gardeazabal" ], "title": "The economic costs of conflict: A case study of the basque country", "venue": "American economic review,", "year": 2003 }, { "authors": [ "Alberto Abadie", "Alexis Diamond", "Jens Hainmueller" ], "title": "Synthetic control methods for comparative case studies: Estimating the effect of california’s tobacco control program", "venue": "Journal of the American statistical Association,", "year": 2010 }, { "authors": [ "Jaap H Abbring", "Gerard J Van den Berg" ], "title": "The nonparametric identification of treatment effects in duration models", "venue": null, "year": 2003 }, { "authors": [ "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Bayesian inference of individualized treatment effects using multi-task gaussian processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Muhammad Amjad", "Devavrat Shah", "Dennis Shen" ], "title": "Robust synthetic control", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Alejandro Barredo Arrieta", "Natalia Dı́az-Rodrı́guez", "Javier Del Ser", "Adrien Bennetot", "Siham Tabik", "Alberto Barbado", "Salvador Garcı́a", "Sergio Gil-López", "Daniel Molina", "Richard Benjamins" ], "title": "Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai", "venue": "Information Fusion,", "year": 2020 }, { "authors": [ "Susan Athey", "Mohsen Bayati", "Nikolay Doudchenko", "Guido Imbens", "Khashayar Khosravi" ], "title": "Matrix completion methods for causal panel data models", "venue": "Technical report, National Bureau of Economic Research,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Jushan Bai", "Serena Ng" ], "title": "Large dimensional factor analysis", "venue": "Now Publishers Inc,", "year": 2008 }, { "authors": [ "Ioana Bica", "Ahmed M Alaa", "James Jordon", "Mihaela van der Schaar" ], "title": "Estimating counterfactual treatment outcomes over time through adversarially balanced representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "CM Booth", "IF Tannock" ], "title": "Randomised controlled trials and population-based observational research: partners in the evolution of medical evidence", "venue": "British journal of cancer,", "year": 2014 }, { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Denver Dash" ], "title": "Restructuring dynamic causal systems in equilibrium", "venue": "In AISTATS. 
Citeseer,", "year": 2005 }, { "authors": [ "Barbra A Dickerman", "Xabier Garcı́a-Albéniz", "Roger W Logan", "Spiros Denaxas", "Miguel A Hernán" ], "title": "Avoidable flaws in observational analyses: an application to statins and cancer", "venue": "Nature Medicine,", "year": 2019 }, { "authors": [ "Natalie A DiPietro" ], "title": "Methods in epidemiology: observational study", "venue": "designs. Pharmacotherapy: The Journal of Human Pharmacology and Drug Therapy,", "year": 2010 }, { "authors": [ "Dumitru Erhan", "Pierre-Antoine Manzagol", "Yoshua Bengio", "Samy Bengio", "Pascal Vincent" ], "title": "The difficulty of training deep architectures and the effect of unsupervised pre-training", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Dumitru Erhan", "Aaron Courville", "Yoshua Bengio", "Pascal Vincent" ], "title": "Why does unsupervised pre-training help deep learning", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Demiana William Faltaos", "Saı̈k Urien", "Valérie Carreau", "Marina Chauvenet", "Jean Sebastian Hulot", "Philippe Giral", "Eric Bruckert", "Philippe Lechat" ], "title": "Use of an indirect effect model to describe the ldl cholesterol-lowering effect by statins in hypercholesterolaemic patients", "venue": "Fundamental & clinical pharmacology,", "year": 2006 }, { "authors": [ "Steven E Finkel" ], "title": "Causal analysis with panel data", "venue": null, "year": 1995 }, { "authors": [ "Jared C Foster", "Jeremy MG Taylor", "Stephen J Ruberg" ], "title": "Subgroup identification from randomized clinical trial data", "venue": "Statistics in medicine,", "year": 2011 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "arXiv preprint arXiv:1705.03122,", "year": 2017 }, { "authors": [ "Harshad Hegde", "Neel Shimpi", "Aloksagar Panny", "Ingrid Glurich", "Pamela Christie", "Amit Acharya" ], "title": "Mice vs ppca: Missing data imputation in healthcare", "venue": "Informatics in Medicine Unlocked,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Emily Herrett", "Arlene M Gallagher", "Krishnan Bhaskaran", "Harriet Forbes", "Rohini Mathur", "Tjeerd Van Staa", "Liam Smeeth" ], "title": "Data resource profile: clinical practice research datalink (cprd)", "venue": "International journal of epidemiology,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Fredrik Johansson", "Uri Shalit", "David Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Fredrik D Johansson", "Nathan Kallus", "Uri Shalit", "David Sontag" ], "title": "Learning weighted representations for generalization across designs", "venue": "arXiv preprint arXiv:1802.08598,", "year": 2018 }, { "authors": [ "Nathan Kallus" ], 
"title": "Deepmatch: Balancing deep covariate representations for causal inference using adversarial training", "venue": "arXiv preprint arXiv:1802.05664,", "year": 2018 }, { "authors": [ "Jimyon Kim", "Byung-Jin Ahn", "Hong-Seok Chae", "Seunghoon Han", "Kichan Doh", "Jeongeun Choi", "Yong K Jun", "Yong W Lee", "Dong-Seok Yim" ], "title": "A population pharmacokinetic–pharmacodynamic model for simvastatin that predicts low-density lipoprotein-cholesterol reduction in patients with primary hyperlipidaemia", "venue": "Basic & clinical pharmacology & toxicology,", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Cheng-Xian Steven Li", "Benjamin M Marlin" ], "title": "Learning from irregularly-sampled time series: A missing data perspective", "venue": "In International Conference on Machine Learning. PMLR,", "year": 2020 }, { "authors": [ "Sheng Li", "Yun Fu" ], "title": "Matching on balanced nonlinear representations for treatment effects estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Bryan Lim", "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Forecasting treatment responses over time using recurrent marginal structural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christos Louizos", "Uri Shalit", "Joris M Mooij", "David Sontag", "Richard Zemel", "Max Welling" ], "title": "Causal effect inference with deep latent-variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Bruce M Psaty", "Thomas D Koepsell", "Danyu Lin", "Noel S Weiss", "David S Siscovick", "Frits R Rosendaal", "Marco Pahor", "Curt D Furberg" ], "title": "Assessment and control for confounding by indication in observational studies", "venue": "Journal of the American Geriatrics Society,", "year": 1999 }, { "authors": [ "Paul M Ridker", "Nancy R Cook" ], "title": "Statins: new american guidelines for prevention of cardiovascular disease", "venue": "The Lancet,", "year": 2013 }, { "authors": [ "Jason Roy", "Kirsten J Lum", "Michael J Daniels" ], "title": "A bayesian nonparametric approach to marginal structural models for point treatments and a continuous or survival outcome", "venue": null, "year": 2017 }, { "authors": [ "Yulia Rubanova", "Ricky TQ Chen", "David K Duvenaud" ], "title": "Latent ordinary differential equations for irregularly-sampled time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Donald B Rubin" ], "title": "Randomization analysis of experimental data: The fisher randomization test comment", "venue": "Journal of the American Statistical Association,", "year": 1980 }, { "authors": [ "Donald B Rubin" ], "title": "Causal inference using potential outcomes: Design, modeling, decisions", "venue": "Journal of the American Statistical Association,", "year": 2005 }, { "authors": [ "Peter Schulam", "Suchi Saria" ], "title": "Reliable decision support using counterfactual models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Uri Shalit", "Fredrik D 
Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Elizabeth A Stuart" ], "title": "Matching methods for causal inference: A review and a look forward", "venue": "Statistical science: a review journal of the Institute of Mathematical Statistics,", "year": 2010 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Timo Teräsvirta", "Dag Tjøstheim", "Clive William John Granger" ], "title": "Modelling nonlinear economic time series", "venue": null, "year": 2010 }, { "authors": [ "Robert Tibshirani" ], "title": "Regression shrinkage and selection via the lasso", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1996 }, { "authors": [ "Madeleine Udell", "Alex Townsend" ], "title": "Nice latent variable models have log-rank", "venue": "arXiv preprint arXiv:1705.07474,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Anne M. Archibald", "Antônio H. Ribeiro", "Fabian Pedregosa", "Paul van" ], "title": "Mulbregt, and SciPy 1. 0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python", "venue": "Nature Methods,", "year": 2020 }, { "authors": [ "Yiqing Xu" ], "title": "Generalized synthetic control method: Causal inference with interactive fixed effects models", "venue": "Political Analysis,", "year": 2017 }, { "authors": [ "Liuyi Yao", "Sheng Li", "Yaliang Li", "Mengdi Huai", "Jing Gao", "Aidong Zhang" ], "title": "Representation learning for treatment effect estimation from observational data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Koutaro Yokote", "Hideaki Bujo", "Hideki Hanaoka", "Masaki Shinomiya", "Keiji Mikami", "Yoh Miyashita", "Tetsuo Nishikawa", "Tatsuhiko Kodama", "Norio Tada", "Yasushi Saito" ], "title": "Multicenter collaborative randomized parallel group comparative study of pitavastatin and atorvastatin in japanese hypercholesterolemic patients: collaborative study on hypercholesterolemia drug intervention and their benefits for atherosclerosis prevention", "venue": "(chiba study). Atherosclerosis,", "year": 2008 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "Ganite: Estimation of individualized treatment effects using generative adversarial nets", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lim" ], "title": "on validation set using grid search before applied to the testing data. Counterfactual Recurrent Network and Recurrent Marginal Structural Network. We used the implementations by the authors Bica et al", "venue": null, "year": 2018 }, { "authors": [ "Dickerman" ], "title": "2019) to make sure the selection process does not increase the confounding bias. The summary statistics of the treatment and control groups are listed below. 
We can clearly see a selection bias as the treatment group contains a much higher proportion of male and people with previous cardiovascular or renal diseases", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Estimating the causal individual treatment effect (ITE) on patient outcomes using observational data (observational studies) has become a promising alternative to clinical trials as large-scale electronic health records become increasingly available (Booth & Tannock, 2014). Figure 1 illustrates a common setting in medicine and it will be the focus of this work (DiPietro, 2010): an individual may start the treatment at some observed time (black dashed line) and we want to estimate the ITE on the outcomes over time after the treatment starts (shaded area). The key limitation of observational studies is that treatment allocation is not randomised but typically influenced by prior measurable static covariates (e.g. gender, ethnicity) and temporal covariates (e.g. all historical medical diagnosis and conditions, squares in Figure 1). When the covariates also modulate the patient outcomes, they lead to the confounding bias in the direct estimation of the ITE (Psaty et al., 1999).\nAlthough a plethora of methods overcome the confounding bias by adjusting for the static covariates (Yoon et al., 2018; Yao et al., 2018; Louizos et al., 2017; Shalit et al., 2017; Li & Fu, 2017; Alaa & van der Schaar, 2017; Johansson et al., 2016), few existing works take advantage of the temporal covariates that are measured irregularly over time (Figure 1) (Bica et al., 2020; Lim et al., 2018; Schulam & Saria, 2017; Roy et al., 2017). Overcoming the confounding bias due to temporal covariates is especially important for medical research as clinical treatment decisions are often based on the temporal progression of a disease. Transparency is highly desirable in such a challenging problem.\nAlthough transparency is a general concept, we will focus on two specific aspects (Arrieta et al., 2020). (1) Explainability: the method\nshould estimate the ITE of any given individual (the target individual) based on a small subset of other individuals (contributors) whose amount of contribution can be quantified (e.g using a weight between 0 and 1). Although the estimate of different target individuals may depend on different contributors, the method can always shortlist the few contributors for the expert to understand the\nrationale for each estimate. (2) Trustworthiness: the method should identify the target individuals whose ITE cannot be reliably estimated due to violation of assumptions, lack of data, or other failure modes. Being transparent about what the method cannot do improves the overall trustworthiness because it guides the experts to only use the method when it is deemed reliable.\nInspired by the well-established Synthetic Control method in Statistics and Econometrics (Abadie et al., 2010; Abadie, 2019), we propose SyncTwin, a transparent ITE estimation method which deals with temporal confounding. Figure 2 A illustrates the schematics of SyncTwin. SyncTwin starts by encoding the irregularly-measured temporal covariates as representation vectors. For each treated target individual, SyncTwin selects and weights few contributors from the control group based on their representation vectors and the sparsity constraint. SyncTwin proceeds to construct a synthetic twin whose representation vector and outcomes are the weighted average of the contributors. Finally, the ITE is estimated as the difference between the target individual’s and the Synthetic Control’s outcomes after treatment. 
The difference in their outcomes before treatment indicates the quality of the synthetic twin and whether the model assumptions hold. If the target individual and synthetic twin do not match in pre-treatment outcomes, the estimated ITE should not be considered trustworthy.\nTransparency of SyncTwin. SyncTwin achieves explainability by selecting only a few contributors for each target individual. It achieves trustworthiness because it quantifies the confidence one should put into the estimated ITE as the difference between the target and synthetic pre-treatment outcomes." }, { "heading": "2 PROBLEM SETTING", "text": "We consider a clinical observational study with N individuals indexed by i ∈ [N ] = {1, . . . , N}. Let ai ∈ {0, 1} be the treatment indicator with ai = 1 if i started to receive the treatment at some time and ai = 0 if i never initiated the treatment. We realign the time steps such that all treatments were initiated at time t = 0. Let I1 = {i ∈ [N ] | ai = 1} and I0 = {i ∈ [N ] | ai = 0} be the set of the treated and the control respectively. Denote N0 = |I0| and N1 = |I1| as the sizes of the groups. The time t = 0 is of special significance because it marks the initiation of the treatment (black dashed line in Figure 1). We call the period t < 0 the pre-treatment period and the period t ≥ 0 the treatment period (shaded area in Figure 1).\nTemporal covariates are observed during the pre-treatment period only and may influence the treatment decision and the outcome. Let Xi = [xis]s∈[Si] be the sequence of covariates xis ∈ RD, which includes Si ∈ N observations taken at times t ∈ Ti = {tis}s∈[Si], where all tis ∈ R and tis < 0. Note that xis may also include static covariates whose values are constant over time. To allow the covariates to be sampled at different frequencies, let mis ∈ {0, 1}D be the masking vector with misd = 1 indicating the dth element in xis is sampled.\nThe outcome of interest is observed both before and after the treatment. In many cases, the researchers are interested in the outcomes measured at regular time intervals (e.g. the monthly average blood pressure). Hence, let T − = {−M, . . . ,−1} and T + = {0, . . . ,H − 1} be the observation times\nbefore and after treatment initiation. In this work, we focus on real-valued outcomes yit ∈ R observed at t ∈ T − ∪ T +. We arrange the outcomes after treatment into a H-dimensional vector denoted as yi = [yit]t∈T + ∈ RH . Similarly define pre-treatment outcome vector y − i = [yit]t∈T − ∈ RM .\nUsing the potential outcome framework (Rubin, 2005), let yit(ai) ∈ R denote the potential outcome at time t in a world where i received the treatment as indicated by ai. Let yi(1) = [yit(1)]t∈T + ∈ RH , and y−i (1) = [yit(1)]t∈T − ∈ RM , similarly for yi(0) and y − i (0). The individual treatment effect (ITE) is defined as τi = yi(1) − yi(0) ∈ RH . Under the consistency assumption (discussed later in details), the factual outcome is observed yi(ai) = yi, which means for any i ∈ [N ] only the unobserved counterfactual outcome yi(1− ai) needs to be estimated in order to estimate the ITE. To simplify the notations, we focus on estimating the ITE for the treated, i.e. τ̂i = yi(1) − ŷi(0) for i ∈ I1, though the same approach applies to the control i ∈ I0 and new units i /∈ [N ] without loss of generality (A.5).\nSyncTwin relies on the following assumptions. (1) Consistency, also known as Stable Unit Treatment Value Assumption (Rubin, 1980): yit(ai) = yit, ∀i ∈ [N ], t ∈ T − ∪ T +. 
(2) No anticipation, also known as causal systems (Abbring & Van den Berg, 2003; Dash, 2005): yit = yit(1) = yit(0), ∀t ∈ T −, i ∈ [N ]. (3) Data generating model: the assumed directed acyclic graph is visualized in Figure 2 B (Pearl, 2009), where we introduce two variables ci ∈ RK and vi ∈ RU in addition to the previously defined ones. The latent variable ci is the common cause of yit(0) and xis, and it indirectly influences ai through xis. As we show later, SyncTwin tries to learn and construct a synthetic twin that has the same ci as the target. The variable vi is an unobserved confounder. Although SyncTwin, like all other ITE methods, works better without unobserved confounders (i.e. vi = 0, ∀i ∈ [N ]), we develop a unique checking procedure in Equation (4) to validate if there exists vi 6= 0. We also demonstrate that under certain favourable conditions, SyncTwin can overcome the impact of the vi. To establish the theoretical results, we further assume yit(0) follows a latent factor model with ci, vi as the latent “factors”(Bai & Ng, 2008):\nyit(0) = q > t ci + u > t vi + ξit, ∀t ∈ T − ∪ T +, (1)\nwhere qt ∈ RK , ut ∈ RU are weight vectors and ξit is the white noise. We require the weight vectors to have ||qt|| = 1, ∀t ∈ T −∪T + (Xu, 2017), which does not reduce the expressiveness of the model. We further require the dimensionality of the latent factor to be smaller than the number of time steps before or after treatment, i.e. K < min(M,H). Furthermore, let Q− = [qt]t∈T − ∈ RM×K and Q = [qt]t∈T + ∈ RH×K denote the matrices that stack all the weight vectors q’s before and after treatment as rows respectively. The latent factor model assumption may seem restrictive but as we show in Appendix A.4 it is applicable to many scenarios. In the simulation study (5.1) we further show SyncTwin performs well even when the data is not generated using model (1) but instead using a set of differential equations. We compare our assumptions with those used in the related works in Appendix A.3." }, { "heading": "3 RELATED WORK", "text": "" }, { "heading": "3.1 SYNTHETIC CONTROL", "text": "Similar to SyncTwin, Synthetic control (SC) (Abadie, 2019) and its extensions (Athey et al., 2018; Amjad et al., 2018) estimate ITE based on Synthetic Control outcomes. However, when applied to temporal confounding, SC will flatten the temporal covariates [xis]s∈[Si] into a fixed-sized (highdimensional) vector xi and use it to construct the twin. As a result, SC does not allow the covariates to be variable-length or sampled at different frequencies (otherwise xi’s dimensionality will vary across individuals). In contrast, SyncTwin can gracefully handle these irregularities because it constructs the twin using the representation vectors. Moreover, the covariates xi may contain observation noise and other sources of randomness that do not relate to the outcome or the treatment. Enforcing the target and the twin to have similar xi will inject these irrelevant noise to the twin, a situation we call over-match (because it resembles over-fit). Over-match undermines ITE estimation as we show in the simulation study in Section 5.1. Finally, SC assumes yit(0) = q > t xi + u > t vi + ξit, i.e. the flattened covariates xi linearly predicts yit(0), which is a special case of our assumption (1) and unlikely to hold for many medical applications." 
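As a toy illustration of the latent factor model in assumption (1) (with no unobserved confounders, i.e. v_i = 0), one can generate untreated outcomes directly from unit-norm weight vectors q_t and latent factors c_i. The dimensions and noise scale below are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, H, N = 4, 5, 3, 100                      # latent dim, pre/post horizons, sample size
Q = rng.normal(size=(M + H, K))
Q /= np.linalg.norm(Q, axis=1, keepdims=True)  # enforce ||q_t|| = 1 for every time step
C = rng.normal(size=(N, K))                    # latent variables c_i
noise = 0.1 * rng.normal(size=(N, M + H))
Y0 = C @ Q.T + noise                           # y_it(0) = q_t^T c_i + xi_it  (v_i = 0)
```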
}, { "heading": "3.2 COVARIATE ADJUSTMENT WITH DEEP LEARNING", "text": "In the static setting, the covariate adjustment methods fit two functions (deep neural networks) to predict the outcomes with and without treatment i.e. ŷi(0) = f0(xi) and ŷi(1) = f1(xi) (Johansson et al., 2016; Shalit et al., 2017; Yao et al., 2018; Yoon et al., 2018). The ITE is then estimated as τ̂i = f1(xi) − f0(xi). Under this framework, various methods have been proposed to address temporal confounding (Lim et al., 2018; Bica et al., 2020). However, these methods generally lack transparency because the black-box neural networks cannot easily pinpoint the contributors for each prediction. Moreover, the prediction accuracy before treatment cannot directly measure the confidence or trustworthiness for the predictions after treatment because the network is very nonlinear and non-stationary. Lastly, Bica et al. (2020) and Lim et al. (2018) are applicable to a more general setting where the treatment can be turned on and off over time whereas SyncTwin assumes the outcomes will continue to be influenced by the treatment after the treatment starts.\nWorks with similar terminology. Several works in the literature use similar terms such as “twin” while most of them are not related to SyncTwin. We discuss these works in Appendix A.6." }, { "heading": "4 TRANSPARENT ITE ESTIMATION VIA SYNCTWIN", "text": "To explain when and why SyncTwin gives a valid ITE estimate, let us assume that we have learned a representation c̃i that approximates the latent variable ci, ∀i ∈ [N ] in Equation 1. For a target individual i ∈ I1, let bi = [bij ]j∈I0 ∈ RN0 be a vector of weights, each associated with a control individual. A synthetic twin can be generated using bi as\nĉi = ∑ j∈I0 bij c̃j , ŷit(0) = ∑ j∈I0 bijyjt(0) = ∑ j∈I0 bijyjt, ∀t ∈ T − ∪ T +, (2)\nwhere ĉi is the synthetic representation and ŷit(0) is the synthetic outcome under no treatment. The last equality follows from the consistency assumption. Let ŷi(0) = [ŷit(0)]t∈T + be the posttreatment synthetic outcome vector, and similarly ŷ−i = [ŷit(0)]t∈T − . The ITE of i can be estimated as\nτ̂i = yi(1)− ŷi(0) = yi − ∑ j∈I0 bijyj , (3)\nwhere again the last equality follows from the consistency assumption. We should highlight that yi and yj , ∀j ∈ I0 in the equation above are the observed outcomes. Hence, bi is the only free parameter that influences the ITE estimator τ̂i. The following two distances are central to the training and inference procedure:\ndci = ‖ĉi − c̃i‖, d y i = ‖ŷ − i − y − i ‖1, (4)\nwhere || · || is the vector `2-norm and || · ||1 is the vector `1-norm. Minimizing dci to construct synthetic twins. d c i indicates how well the synthetic twin matches the target individual in representations. Intuitively, we should seek to construct a twin who closely matches the target by minimizing dci . This intuition is verified in Proposition 1 (proved in A.1.1). Proposition 1 (Bias bound on ITE with no unobserved confounders). Suppose that vi = 0, ∀i ∈ [N ] and dci = 0 for some i ∈ I1 (vi and dci are defined in Equation 1 and 4 respectively), the absolute value of the expected difference in the true and estimated ITE of i is bounded by:\n|E[τ̂i]− E[τi]| ≤ |T +|‖ ∑ j∈I0 bijcj − ci‖ ≤ |T +| ( ∑ j∈I0 ‖cj − c̃j‖+ ‖ci − c̃i‖ ) . (5)\nHere we show that when dci is minimized at zero and there is no unobserved confounder, the bias on the ITE estimate only depends on how close the learned representation c̃ is to the true latent variable c. 
We will use use representation learning to uncover the latent variable c in the next section.\nUsing dyi to measure trustworthiness. By definition, d y i indicates how well the synthetic pretreatment outcomes match the target individual’s outcomes. Intuitively, matching the outcomes before treatment is a prerequisite for a good estimate of the ITE after treatment (Equation 2 and 3). We formalize this intuition in Proposition 2, which is proved in Appendix A.1.1.\nProposition 2 (Trustworthiness of SyncTwin under no hidden confounders). Suppose that all the outcomes are generated by the model in Equation 1 with the unobserved confounders equal to zero s.t. vi = 0, ∀i ∈ [N ], and that we reject the estimate τ̂i if the pre-treatment error dyi on T − is larger than δ|T −|/|T +|, the post-treatment ITE estimation error on T + is below δ.\nHere we show that if we would like to ensure the ITE error to fall below a pre-specified threshold δ, we should reject the estimate τ̂i when the distance d y i > δ|T −|/|T +| assuming no unobserved confounder. In other words, dyi can be used as an evaluation metric to access whether the estimated ITE is trustworthy.\nSituation with unobserved confounders. In presence of the unobserved confounders vi 6= 0, SyncTwin cannot guarantee to correctly estimate the ITE. However, dyi can still indicate whether vi has a significant impact on the pre-treatment outcomes, i.e. the unobserved confounders may exist but only weakly influence the outcomes before treatment. We discuss unobserved confounders in detail in Appendix A.1.2." }, { "heading": "4.1 LEARNING TO REPRESENT TEMPORAL COVARIATES", "text": "In this section, we show how SyncTwin learns the representation c̃i as a proxy for the latent variable ci using a sequence-to-sequence architecture as depicted in Figure 3 (A) and discussed below.\nArchitecture. SyncTwin is agnostic to the exact choice of architecture as long as the network translates the covariates into a single representation vector (encode) and reconstructs the covariates from that representation (decode). For this reason, we use the well-proven sequence-to-sequence architecture (Seq2Seq) (Sutskever et al., 2014) with an encoder similar to the one proposed in Bahdanau et al. (2015) and a standard LSTM decoder (Hochreiter & Schmidhuber, 1997).\nThe encoder first obtains a sequence of representations at each time step using a recurrent neural network. Instead of using the bi-directional LSTM as in Bahdanau et al. (2015), we use a GRU-D network because it is designed to encode irregularlly-sampled temporal observations (Che et al., 2018). This gives us the sequence his = GRU-D(hi,s−1,xis,mis, tis), ∀s ∈ [Si]. Since our goal is to obtain a single representation vector rather than a sequence of representations, we aggregate the sequence of his using the same attentive pooling method as in Bahdanau et al. (2015). The final representation vector c̃i is obtained as: c̃i = ∑ s∈[Si] αishis, where αis = r >his/ √ K is the attention weight and r ∈ RK is the attention parameter (Vaswani et al., 2017). The decoder uses the representation c̃i to reconstruct xis at time tis ∀s ∈ [Si]. Since the timing information tis may be lost in c̃i due to aggregation, we reintroduce it to the decoder by first obtaining a sequence of time representations ois = k0 + w>0 tis, where ois,k0,w0 ∈ RK , and then concatenating each with c̃i to obtain: eis = c̃i ⊕ ois ∈ R2K . 
Reintroducing timing information during decoding is a standard practice in Seq2Seq models for irregular time-series (Rubanova et al., 2019; Li & Marlin, 2020). Furthermore, using time representation ois instead of time values tis is inspired by the success of positional encoding in the self-attention architecture (Vaswani et al., 2017; Gehring et al., 2017). The decoder then applies a LSTM autoregressively on the time-aware representations eis to decode gis = LSTM(gi,s−1, eis), ∀s ∈ [Si], where gis ∈ RK . Finally, it uses a linear layer to obtain the reconstructions: x̃is = k1 + W1gis, where k1 ∈ RD, W1 ∈ RD×K . Loss functions. We train the networks with the weighted sum of the supervised loss Ls and the reconstruction loss Lr (Figure 3 A): Ls(D0) =\n∑ i∈D0 ||Q̃ · c̃i − yi(0)||, Lr(D0,D1) = ∑ i∈D0∪D1 ∑ s∈[Si] ||(x̃is − xis) mis||, (6)\nwhere D0 ⊆ I0, D1 ⊆ I1, mis is the masking vector (Section 2), represents element-wise product and Q̃ ∈ RH×K is a trainable parameter and || · || is the L2 norm. Intuitively, the supervised loss Ls ensures that the learned representation c̃i to be a linear predictor of the outcomes under no treatment yi(0). Here a linear function ỹi(0) := Q̃ · c̃i is used to be consistent with the data generating model (1). Using a nonlinear function here might lead to smaller Ls, but it will not uncover the latent variable ci as desired. We justify the supervised loss in Proposition 3 below and present the proof and detailed discussions in Appendix A.1.1. Proposition 3 (Error bound on the learned representations). Suppose that vi = 0, ∀i ∈ [N ] (vi is defined in Equation 1), the total error on the learned representations for the control, i.e., the first term\nin the upper bound of the absolute value of the expected difference in the true and estimated ITE (R.H.S of Equation 5), is bounded as follows:∑\nj∈I0 ‖cj − c̃j‖ ≤ βLs + ∑ j∈I0 ‖ξj‖, (7)\nwhere Ls is the supervised loss in Equation 6 and ξj is the white noise in Equation 1." }, { "heading": "4.2 CONSTRUCTING SYNTHETIC TWINS", "text": "Constraints. We require the weights bi in Equation 2 to satisfy two constraints (1) positivity: bij ≥ 0 ∀i ∈ [N ], j ∈ I0, and (2) sum-to-one: ∑ j∈I0 bij = 1, ∀i ∈ [N ]. The constraints are needed for three reasons. (1) The constraints reduce the solution space of bi and serve as a regularizer. Regularizing is vital because the dimensionality of bi ∈ RN0 can easily exceed ten thousand in observational studies. (2) The constraints encourage the solution to be sparse by fixing the `1-norm of bi to be one i.e. ||bi||1 = 1 (Tibshirani, 1996). Better sparsity leads to fewer contributors and better transparency. (3) Finally, the constraints ensure that the synthetic twin in Equation 2 is the weighted average of the contributors. Therefore the weight bij directly translates into the “contribution” or “importance” of j to i, further improving the transparency.\nMatching loss. The matching loss finds weight bi so that the synthetic twin and the target individual match in representations, as depicted in Figure 3 (B).\nLm(D0,D1) = ∑ i∈D1 ||c̃i − ∑ j∈D0 bij c̃j ||22, (8)\nwhere again D0 ⊆ I0 and D1 ⊆ I1. We use the Gumbel-Softmax reparameterization detailed in Appendix A.9 to optimize Lm under the constraints (Jang et al., 2016; Maddison et al., 2016)." }, { "heading": "4.3 TRAINING, VALIDATION AND INFERENCE", "text": "As is standard in machine learning, we perform model training, validation and inference (testing) on three disjoint datasets. 
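For concreteness, the constrained matching step of Equation (8) can be sketched in PyTorch by parameterising each weight vector b_i on the simplex with a softmax over free logits. This is a simplification of the Gumbel-Softmax reparameterisation mentioned above; all shapes, names, and optimiser settings are illustrative assumptions, not the paper's implementation.

```python
import torch

def matching_loss(logits, C_treated, C_control):
    """logits: (N1, N0) free parameters; rows of softmax(logits) are simplex weights b_i."""
    B = torch.softmax(logits, dim=1)            # non-negative weights, each row sums to one
    C_hat = B @ C_control                       # synthetic representations, Eq. (2)
    return ((C_hat - C_treated) ** 2).sum()     # matching loss, Eq. (8)

# Illustrative optimisation of the weights given fixed (pretrained) encoder outputs
N1, N0, K = 8, 500, 16
C_t, C_c = torch.randn(N1, K), torch.randn(N0, K)
logits = torch.zeros(N1, N0, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = matching_loss(logits, C_t, C_c)
    loss.backward()
    opt.step()
```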
On a high level, we train the encoder and decoder on the training data using the loss functions described in Section 4.1. The validation data is then used to validate and tune the hyper-parameters of the encoder and decoder. Finally, we fix the encoder and optimize the matching loss Lm on the testing data to find the weights bi, which lead to the ITE estimate via Equation 3. The detailed procedure is described in A.8. The hyperparameter sensitivity is studied in A.13." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 SIMULATION STUDY", "text": "In this simulation study, we evaluate SyncTwin on the task of estimating the LDL cholesterol-lowering effect of statins, a common drug prescribed to hypercholesterolaemic patients. We simulate the ground-truth ITE using the Pharmacokinetic-Pharmacodynamic model widely adopted in the literature (Faltaos et al., 2006; Yokote et al., 2008; Kim et al., 2011):\ndp_t/dt = k^in_t − k · p_t;  dd_t/dt = a_t − h · d_t;  dy_t/dt = k · p_t − (d_t / (d_t + d_50)) · k · y_t, (9)\nwhere y_t is the LDL cholesterol level (outcome) and a_t is the indicator of statin treatment. The interpretation of all other variables involved is presented in Appendix A.10.\nData generation. Following our convention, the individuals are enrolled at t = 0, the covariates are observed in T = [−S, 0), where S ∈ {15, 25, 45}, and the ITE is to be estimated in the period T+ = [0, 4]. We start by generating k^in_t for each individual from the following mixture distribution:\nk^in_it = g_i^T f_t;  g_i = δ_i e_i1 + (1 − δ_i) e_i2;  δ_i ∼ Bern(p) i.i.d.;  e_in ∼ N(µ_n, Σ_n) i.i.d., n = 1, 2, (10)\nwhere f_t ∈ R^6 are the Chebyshev polynomials, Bern(p) is the Bernoulli distribution with success probability p, and N(µ_n, Σ_n) is the Gaussian distribution. To introduce confounding, we vary p between the treated and the control: p = p_0, ∀i ∈ I_0 and p = 1, ∀i ∈ I_1, where p_0 controls the degree of confounding bias. After that, the variables p_t, d_t, y_t are obtained by solving Equation 9 using scipy (Virtanen et al., 2020) and adding independent white noise ∼ N(0, 0.1) to the solution. The temporal variables defined above give us the covariates x_t = {k^in_t, y_t, p_t, d_t}. Finally, we introduce irregular sampling by creating masks m_it ∼ Bern(m), where the probability m ∈ {0.3, 0.5, 0.7, 1}.\nBenchmarks. From the Synthetic Control literature, we considered the original Synthetic Control method (SC) (Abadie et al., 2010), Robust Synthetic Control (RSC) (Amjad et al., 2018) and MC-NNM (Athey et al., 2018). From the deep learning literature, we compared against the Counterfactual Recurrent Network (CRN) (Bica et al., 2020) and the Recurrent Marginal Structural Network (RMSN) (Lim et al., 2018), which are the state-of-the-art methods for estimating ITE under temporal confounding. In addition, we included a modified version of CFRNet, which was originally developed for the static setting (Shalit et al., 2017); to allow CFRNet to handle temporal covariates, we replaced its fully-connected encoder with the encoder architecture used by SyncTwin (Section 4.1). We also included the counterfactual Gaussian Process (CGP) (Schulam & Saria, 2017) and One-nearest Neighbour Matching (1NN) (Stuart, 2010) as baselines. The implementation details of all benchmarks are available in Appendix A.7. We also included two ablated versions of SyncTwin: SyncTwin-Lr is trained only with the reconstruction loss and SyncTwin-Ls only with the supervised loss.\nMain results. We evaluate the mean absolute error (MAE) on ITE estimation: (1/N_1) Σ_{i=1}^{N_1} ||τ_i − τ̂_i||_1.
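Before turning to the results, here is a sketch of integrating the Pharmacokinetic-Pharmacodynamic system of Equation (9) with scipy. The rate constants, initial state, and input functions below are illustrative placeholders, not the calibrated values used in the study (those are described in its Appendix A.10).

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_pkpd(k_in, a, k=0.5, h=0.5, d50=1.0, t_span=(-25.0, 4.0), x0=(1.0, 0.0, 1.0)):
    """Integrate Eq. (9) with state (p_t, d_t, y_t); k_in and a are callables of t."""
    def rhs(t, state):
        p, d, y = state
        dp = k_in(t) - k * p
        dd = a(t) - h * d
        dy = k * p - (d / (d + d50)) * k * y
        return [dp, dd, dy]
    return solve_ivp(rhs, t_span, x0, max_step=0.1, dense_output=True)

# Example: treatment switched on at t = 0, smooth time-varying k_in
sol = simulate_pkpd(k_in=lambda t: 1.0 + 0.2 * np.sin(t),
                    a=lambda t: 1.0 if t >= 0 else 0.0)
y_post = sol.sol(np.arange(0, 4))[2]   # outcome trajectory on the post-treatment period
```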
In table 6 the parameter p0 controls the level of confounding bias (smaller p0, larger bias). Additional results for different sequence length S and sampling irregularity m are shown in Appendix A.11. SyncTwin achieves the best or equally-best performance in all cases. The full SyncTwin with both loss functions also consistently outperforms the versions trained only with Lr or Ls. As discussed in Section 4 (and Appendix A.1.1), training with only reconstruction loss Lr leads to significant performance degradation. It is worth highlighting that the data generating model used in this simulation (9) is not the same as SyncTwin’s assumed latent factor model (1). This implies that SyncTwin may still achieve good performance when the assumed model (1) does not exactly hold.\nSC, RSC and MC-NNM underperform because their assumption that the flattened covariates xi linearly predict the outcome is violated (Section 3). Furthermore, Table 2 shows the synthetic twin created by SC matches the target covariates xi consistently better than SyncTwin, yet produces worse ITE estimates. This suggests that matching covariates better may not lead to better ITE estimate\nTable 1: Mean absolute error on ITE under different levels of confounding bias p0. m = 1 and S = 25 are used. Estimated standard deviations are shown in the parentheses. The best performer is in bold.\nMethod N0 = 200 N0 = 1000\np0 = 0.1 p0 = 0.25 p0 = 0.5 p0 = 0.1 p0 = 0.25 p0 = 0.5\nSyncTwin-Full 0.324 (.038) 0.144 (.012) 0.119 (.008) 0.141 (.012) 0.106 (.006) 0.093 (.005) SyncTwin-Lr 0.353 (.039) 0.170 (.015) 0.139 (.010) 0.256 (.026) 0.145 (.012) 0.101 (.006) SyncTwin-Ls 0.336 (.039) 0.170 (.015) 0.120 (.008) 0.144 (.012) 0.113 (.007) 0.127 (.010) SC 0.340 (.041) 0.151 (.024) 0.149 (.018) 0.258 (.050) 0.166 (.034) 0.214 (.036) RSC 0.837 (.044) 0.360 (.020) 0.321 (.018) 0.310 (.016) 0.298 (.014) 0.302 (.014) MC-NNM 1.160 (.059) 0.612 (.031) 0.226 (.011) 0.527 (.029) 0.159 (.008) 0.124 (.006) CFRNet 0.895 (.077) 0.411 (.037) 0.130 (.007) 0.411 (.038) 0.175 (.013) 0.106 (.007) CRN 1.045 (.064) 0.546 (.039) 0.360 (.024) 0.864 (.052) 0.767 (.040) 0.357 (.021) RMSN 0.390 (.031) 0.362 (.028) 0.332 (.026) 0.447 (.041) 0.386 (.034) 0.385 (.032) CGP 0.660 (.043) 0.610 (.039) 0.561 (.035) 0.826 (.056) 0.693 (.047) 0.602 (.038) 1NN 1.866 (.099) 1.721 (.091) 1.614 (.078) 2.446 (.131) 1.746 (.106) 1.384 (.083)\nTable 2: Mean absolute error between the observed covariates xi and synthetic twin’s covariates x̂i. SC matches the covariates better yet produces worse ITE estimate (Table 1), suggesting it is over-matching. The average distance between any two individuals is 0.95, much larger than all values reported in the table.\nMethod N0 = 200 N0 = 1000\np0 = 0.1 p0 = 0.25 p0 = 0.5 p0 = 0.1 p0 = 0.25 p0 = 0.5\nSyncTwin-Full 0.343 (.029) 0.203 (.014) 0.179 (.011) 0.469 (.037) 0.223 (.015) 0.175 (.012) SyncTwin-Lr 0.321 (.028) 0.192 (.015) 0.182 (.015) 0.250 (.019) 0.190 (.013) 0.195 (.013) SC 0.236 (.027) 0.117 (.014) 0.111 (.011) 0.155 (.025) 0.110 (.019) 0.128 (.020)\nbecause the covariates are noisy and the method might over-match (Section 3). In addition, Figure 4 visualizes the weights bi of SyncTwin and SC in a heatmap. We can clearly see that SyncTwin produces sparser weights because SC needs to use more contributors to construct the twin that (over-)matches xi. Quantitative evaluation of the sparsity is provided in Appendix A.12. 
These findings verify our belief that constructing twins in the representation space (SyncTwin) rather than in the high-dimensional observation space (SC) leads to better performance and transparency." }, { "heading": "5.2 EXPERIMENT ON REAL DATA", "text": "Purpose of study. We present an clinical observational study using SyncTwin to estimate the LDL Cholesterol-lowering effect of statins in the first year after treatment (Ridker & Cook, 2013).\nData Source. We used medical records from English National Health Service general practices that contributed anonymised primary care electronic health records to the Clinical Practice Research Datalink (CPRD), covering approximately 6.9 percent of the UK population (Herrett et al., 2015). CPRD was linked to secondary care admissions from Hospital Episode Statistics, and national mortality records from the Office for National Statistics. We defined treatment initiation as the date of first CPRD prescription and the outcome of interest was measured LDL cholesterol (LDL). Known risk factors for LDL were selected as temporal covariates measured before treatment initiation: HDL Cholesterol, Systolic Blood Pressure, Diastolic Blood Pressure, Body Mass Index, Pulse, Creatinine, Triglycerides and smoking status. Our analysis is based on a subset of 125,784 individuals (Appendix A.15) which was split into three equally-sized subsets for training, validation and inference, each with 17,371 treated and 24,557 controls.\nEvaluation. We evaluate our models using the average treatment effect on the treated group (ie, ATT = E(τi|ai = 1)) to directly correspond to the reported treatment effect in randomised clinical trials, e.g. The Heart Protection Study reported an a change of -1.26 mmol/L (SD=0.06) in LDL cholesterol for participants randomised to statins versus placebo (Group et al., 2007; 2002). We use the sample average on the testing set to estimate the ATT as ∑ i∈Dte1\nτ̂it/|Dte1 |, where Dte1 are the individuals in the testing set who received the treatment. SyncTwin estimates the ATT to be -1.25 mmol/L (SD 0.01), which is very close to the results from the clinical trial. In comparison, CRN and RMSN estimate the ATT to be -0.72 mmol/L (SD 0.01) and -0.83 mmol/L (SD 0.01) respectively. Other benchmark methods either cannot handle irregularly-measured covariates or do not scale to the size of the dataset. Our result suggests SyncTwin is able to overcome the confounding bias in the complex real-world datasets.\nTransparent ITE estimation. For each individual, we can visualize the outcomes before and after the treatment and compare them with the synthetic twin in order to sense-check the estimate. The individual shown in Figure 5 (top) has a sensible ITE estimate because the synthetic twin matches its pre-treatment outcomes closely over time. In addition to visualization, we can calculate the distance dy (Equation 4) to quantify the difference between the pre-treament outcomes. From Figure 5 (bottom left) we can see in most cases the distance is small with a median of 0.24 mmol/L (compared to the population average distance 0.76 mmol/L). This means if the expert can only tolerate an error of 0.24 mmol/L on ITE estimation, half of the estimates (those with dy ≤ 0.24 mmol/L) can be accepted (Section 4). The estimates are also explainable due to the sparsity of SyncTwin. As shown in Figure 5 (bottom right) on average only 15 (out of 24,557) individuals contribute to the synthetic twin." 
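To connect the two evaluations above, the following small sketch computes (i) the sample-average ATT over treated testing individuals and (ii) the pre-treatment distance d^y_i used to sense-check individual estimates; the arrays and the 0.24 mmol/L tolerance are illustrative, and d^y_i follows the L1 form of Equation 4. Note that the reported ATT averages over all treated individuals; filtering on d^y_i is only the optional trust check discussed in the transparency paragraph.

```python
import numpy as np

def estimate_att(tau_hat):
    """Sample-average ATT over the treated individuals in the testing set."""
    return float(np.mean(tau_hat))

def pretreatment_distance(y_pre_obs, y_pre_twin):
    """d^y_i: L1 distance between observed and synthetic-twin pre-treatment outcomes."""
    return np.sum(np.abs(y_pre_obs - y_pre_twin), axis=-1)

# Illustrative values for three treated individuals.
tau_hat = np.array([-1.30, -1.15, -0.95])   # estimated ITEs over the post-treatment period
print("ATT estimate:", estimate_att(tau_hat))

# Optional sense-check: flag estimates whose twin tracks the pre-treatment
# outcomes worse than a user-chosen tolerance (e.g. 0.24 mmol/L).
d_y = np.array([0.10, 0.22, 0.80])
print("trusted:", d_y <= 0.24)
```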
}, { "heading": "6 CONCLUSION", "text": "In this work, we present SyncTwin, an transparent ITE estimation method that deals with temporal confounding and has a broad range of applications in clinical observational studies and beyond. Combining the Synthetic Control method and deep representation learning, SyncTwin achieves transparency and strong performance in both simulated and real data experiments." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 THEORETICAL RESULTS", "text": "" }, { "heading": "A.1.1 SITUATION WITH NO UNOBSERVED CONFOUNDERS", "text": "Proposition 1 Bias bound on ITE with no unobserved confounders. Suppose that vi = 0, ∀i ∈ [N ] and dci = 0 for some i ∈ I1 (vi and dci are defined in Equation 1 and 4 respectively), the absolute value of the expected difference in the true and estimated ITE of i is bounded by:\n|E[τ̂i]− E[τi]| ≤ |T +|‖ ∑ j∈I0 bijcj − ci‖ ≤ |T +| ( ∑ j∈I0 ‖cj − c̃j‖+ ‖ci − c̃i‖ ) . (11)\nProof. We start the proof by observing |E[τ̂i]− E[τi]| = ∑ t∈T + |E[ŷit(0)]− E[yit(0)]|\n= ∑ t∈T + |q>t ( ∑ j∈I0 bijcj − ci)|\n≤ ∑ t∈T + ||qt|| · || ∑ j∈I0 bijcj − ci||\n= |T +||| ∑ j∈I0 bijcj − ci||\n(12)\nwhere the first equation follows from the definition of ITE in Section 4. The second equation follows from Equation 1 and 2 together with the fact that vi = 0, ∀i ∈ [N ]. The third line follows from Cauchy–Schwarz inequality. The fourth line uses the fact that ||qt|| = 1. By definition, dci = 0 implies ∑ bij c̃j = c̃i. Continuing the proof,\n|| ∑ j∈I0 bijcj − ci|| = || ∑ j∈I0 bij(cj − c̃j)− (ci − c̃i)||\n≤ ∑ j∈I0 bij ||cj − c̃j ||+ ||ci − c̃i||\n≤ ∑ j∈I0 ||cj − c̃j ||+ ||ci − c̃i||,\n(13)\nwhere the second line follows from the triangular inequality and the third line relies on ∑ j∈I0 bij = 1 and bij ≥ 0, ∀j ∈ I0. Combining inequality 12 and 13, we prove the inequalities in Equation 11.\nJustification for the matching loss and dci . Proposition 1 presents a justification for minimizing dci (or the matching loss Lm). Essentially, when the synthetic representations are matched with the target (dci = 0), the bias in ITE estimate is controlled by how close the learned representations c̃ is to the true latent variable c up to an arbitrary linear transformation Λ. An important implication is that the learned representation c̃ does not need to be equal to c, instead the learning algorithm only needs to identify c up to a linear transformation. Of course, Proposition 1 also implies that |E[τ̂i]− E[τi]| ≤ ∑ j∈I0 ||cj − c̃j ||+ ||ci − c̃i|| when Λ is taken to be the identity matrix instead of the minimizer.\nProposition 2 Trustworthiness of SyncTwin under no hidden confounders. Suppose that all the outcomes are generated by the model in Equation 1 with the unobserved confounders equal to zero s.t. vi = 0, ∀i ∈ [N ], and that we reject the estimate τ̂i if the pre-treatment error dyi on T − is larger than δ|T −|/|T +|, the post-treatment ITE estimation error on T + is below δ.\nProof. As a reminder, Q− = [qt]t∈T − and Q = [qt]t∈T + denote the matrix that stacks all the weight vectors q’s before and after treatment as rows respectively where each qt satisfies that ‖qt‖ = 1 in\nEquation 1. 
The error dyi in Equation 4 can be decomposed into a representation error and a white noise error,\ndyi = ‖ŷ − i − y − i ‖1 = ‖ ∑ j∈I0 bijy − j − y − i ‖1\n= ‖ ∑ j∈I0 bij(Q −cj + ξj)− (Q−ci + ξi)‖1\n= ‖Q− ( ∑ j∈I0 bijcj − ci ) ‖1 + ‖ ∑ j∈I0 bijξj − ξi ) ‖1\n≤ ∑ t∈T − ‖qt‖‖ ∑ j∈I0 bijcj − ci‖+ ‖ ∑ j∈I0 bijξj − ξi‖1\n≤ |T −|‖ ∑ j∈I0 bijcj − ci‖+ ‖ ∑ j∈I0 bijξj − ξi‖\n(14)\nWe can not estimate the error from the representation and white noise on the last line of Equation 14. Conservatively, we can say the representation error itself is larger or equal to dyi such that\n|T −|‖ ∑ j∈I0 bijcj − ci‖ ≥ dyi ,\ni.e., ‖ ∑ j∈I0 bijcj − ci‖ ≥ dyi /|T −|. (15)\nThe post-treatment error is upper bounded as follows,\n|E[τ̂i]− E[τi]| = |E[ŷit(0)]− E[yit(0)]| = ∑ t∈T + |q>t ( ∑ j∈I0 bijcj − ci)|\n≤ |T +|‖ ∑ j∈I0 bijcj − ci‖\n:= sup τ̂i\n|E[τ̂i]− E[τi]|.\nUsing Equation (15), we have\nsup τ̂i\n|E[τ̂i]− E[τi]| ≥ dyi |T +|/|T −|.\nConservatively, we reject the estimate τ̂i if supτ̂i |E[τ̂i]− E[τi]| is larger than δ. That is when\ndyi > δ|T −|/|T +|.\nWhy does dyi indicate the trustworthiness of the estimation? Proposition 2 shows that we can control the estimation error to be below a certain threshold δ by rejecting the estimate if its error dyi during the pre-treatment period is larger than δ|T −|/|T +|. Alternatively, we can rank the estimation trustworthiness for the individuals based on dyi alone. This is helpful when the user is willing to accept a percentage of estimations which are deemed most trustworthy. We note that this proposition only holds under the assumption that the outcomes over time are generated by the model stated in Equation 1. The outcomes generated by such a model can be nonlinear and complicated due to the representation. However, the model assumes that the outcomes over time are linear functions of the same representation. This is the reason why the pre-treatment error can be used to assess the post-treatment error. We parameterize our neural network model according to Equation 1. If it is a not good fit to the data, the model should have a large estimation error before treatment. The users should also use their domain knowledge to check if the model holds for their data, i.e., if there is any factor starting to affect the outcomes in halfway and causes the representation to change over time.\nProposition 3 Error bound on the learned representations. Suppose that vi = 0, ∀i ∈ [N ] (vi is defined in Equation 1), the total error on the learned representations for the control, i.e., the first term in the upper bound of the absolute value of the expected difference in the true and estimated ITE (R.H.S of Equation 11), is bounded as follows:∑\nj∈I0 ‖cj − c̃j‖ ≤ βLs + ∑ j∈I0 ‖ξj‖, (16)\nwhere Ls is the supervised loss in Equation 6 and ξj is the white noise in Equation 1.\nProof. We start the proof from the definition of the supervised loss. Ls = ∑ j∈I0 ‖Q̃c̃j − yj(0)‖\n= ∑ j∈I0 ‖Q̃c̃j − (Qcj + ξj)‖ ≥ ∑ j∈I0 (∑ t∈T − [ c̃>j ,−c>j ] [q̃t qt ] [ q̃>t ,q > t ] [ c̃j −cj ]) 12 − ∑ j∈I0 ||ξj‖2\n≥ β̃ √ |T −| ∑ j∈I0 ‖c̃j − cj‖ − ∑ j∈I0 ‖ξj‖\n(17)\nwhere β̃ denotes the square root of the element of the matrices [ q̃t qt ] [ q̃>t ,q > t ] , ∀t ∈ T−, with the smallest absolute value. The first and second equations follow from Equation 6 and 1. Let β denotes the constant 1/(β √ |T −|). Arranging the terms in inequality 17 and we prove Proposition 3.\nJustification for the supervised loss. Proposition 3 provide a justification for the supervised loss Ls. 
By optimizing the supervised loss, SyncTwin learns the representation c̃i that is close to the latent variable ci, which also reduces the bias bound on ITE in Proposition 1.\nRationale for the reconstruction loss. Although the bias bounds we developed so far do not include the reconstruction loss Lc, we believe it is useful in real applications. Our reasoning follows from the fact that unsupervised or semi-supervised loss often improve the performance of deep neural networks (Erhan et al., 2009; 2010; Hendrycks et al., 2019). In addition, the reconstruction loss ensures the representation c̃ retains the information from the temporal covariates as required in the DAG (Figure 2). In our simulations (Section 5.1), we found that ablating the reconstruction loss leads to consistently worse performance (though the magnitude is somewhat marginal).\nCan we estimate the ITE as τ̂i = yi(1) − Q̃ · c̃i? No, this is because Ls is based on the factual outcome yi(0) of the control group i ∈ I0 only. For treated individuals i ∈ I1, the predictor Q̃ · c̃i can be biased for their counterfactual outcomes yi(0). Hence, Ls is only used to learn a good representation c̃i for downstream procedures, and not to directly predict counterfactual outcomes." }, { "heading": "A.1.2 SITUATION WITH UNOBSERVED CONFOUNDERS", "text": "In general, the unobserved confounders make it hard to provide good estimates for the ITE. The matching in pre-enrollment outcomes dyi (Equation 4) validates if the unobserved confounders vi create significant error in the pre-treatment period. Using the same derivation of Theorem 1, we can see that:\ndyi = ||Q −( ∑ j∈I0 bijcj − ci) + U−( ∑ j∈I0 bijvj − vi) + ξ||, (18)\nwhere Q− and U− are unknown but fixed matrices relating to the data generating process and ξ is a term only depending on the white noise.\nAs shown in Proposition 1, the matching in representations encourages the first term involving ci to be small. Hence, a large value in dyi implies that the remaining term involving the unmeasured confounders vi is big, which leads to a large estimation error. It is worth pointing out that a small\nvalue of dyi does not guarantee there is no unobserved confounders — a hypothesis we cannot test empirically. For instance, consider the weights U− = 0. It follows that the second term in Equation 18 will always be zero even if vi 6= 0 — there exists unobserved confounders but they do not impact the outcomes before treatment (Equation 1). In summary, dyi does not prove or disprove the existence of unobserved confounders; it only indicates their impact on the pre-treatment outcomes. Our assumption is a small relaxation of the standard no unmeasured confounders assumption by allowing a linear effect from some unmeasured confounders. More conservatively, we can assume there is no unmeasured confounders by setting all the vi to 0, ∀i ∈ [N ] in Equation 1." }, { "heading": "A.2 COMPARISON OF THE TEMPORAL COVARIATES ALLOWED IN THE RELATED WORKS", "text": "As introduced in Section 2, SyncTwin is able to handle temporal covariates sampled at different frequencies, i.e. the set of observation times Ti and a mask mit can be different for different individuals. In comparison, Synthetic Control (Abadie et al., 2010), robust Synthetic Control (Amjad et al., 2018), and MC-NNM (Athey et al., 2018) are only able to handle regularly-sampled covariates, i.e. Ti = {−1,−2, . . . ,−L} ∀i ∈ [N ], and mit = 1 ∀i ∈ [N ], t ∈ Ti. In other words, the temporal covariates [xis]s∈[Si] = Xi ∈ RD×Si has a matrix form. 
The deep learning methods including CRN (Bica et al., 2020) and RMSN (Lim et al., 2018) have the potential to handle irregularly-measured variable-length covariates when a suitable architecture is used. However, the architectures proposed in the original papers only apply to regularly-sampled case and no simulation or real data experiments were conducted for the more general irregular cases." }, { "heading": "A.3 COMPARISON OF THE CAUSAL ASSUMPTIONS IN THE RELATED WORKS", "text": "" }, { "heading": "A.3.1 SYNTHETIC CONTROL", "text": "As shown in Table 3, Synthetic control (Abadie et al., 2010; Abadie, 2019) and its variants (Athey et al., 2018; Amjad et al., 2018) rely on two causal assumptions: (1) consistency: yit(ait) = yit and (2) data generating assumption (linear factor model):\nyit(0) = q > t xi + u > t vi + ξit ∀i ∈ [N ], t ∈ T − ∪ T +. (19)\nwhere xi = vec(Xi) ∈ RD×L, vec is the vectorization operation; ut ∈ RU and qt ∈ RD×L are time-varying variables and vi ∈ RU is a latent variable. ξit is an error term that has mean zero and satisfies ξit ⊥⊥ ars, xr, us, vr for ∀ k, r, s, t. It is worth highlighting that the data generating assumption of Synthetic Control is a special case of the more general assumption of SyncTwin in Equation 1. To see this, let ci = xi = vec(Xi) in Equation 1, i.e. we use the flattened temporal covariates directly as the representation. Further let φθ(ci, tis) = ci[Ds : D(s+ 1)] and εis = 0, where c[a : b] takes a slice of vector c between index a and b. The result is exactly Equation 19.\nWhy does Synthetic Control tend to over-match? Both SyncTwin and Synthetic Control estimate the treatment effects using a weighted combination of control outcomes (Equation 3). However, Synthetic Control finds weight bij in a different way by directly minimizing\nLx = ||xi − ∑ j bijxj ||.\nSince xi contains the observation noise and other random components that do not relate to the outcomes, the weights bij that minimize Lx tend to over-match, i.e. they capture the irrelevant randomness in xi. In contrast, SyncTwin finds bij based on the learned representations c̃i rather than xi (Lm, Equation 6). Since c̃i has much lower dimensionality than xi, the reconstruction loss Lr encourages the Seq2Seq network to learn a c̃i that only retains the signal in xi but not the noise. Meanwhile, the supervised loss encourages c̃i to only retain the information that predicts the outcomes. As a consequence, we expect the weights based on c̃i to be less prone to over-match. Moreover, since the relationship between c̃i and xi is nonlinear (as captured by the decoder network), the weights bij that minimize Lm will generally not minimize the Synthetic Control objective Lx, therefore avoiding over-match." }, { "heading": "A.3.2 COUNTERFACTUAL RECURRENT NEURAL NETWORKS", "text": "As shown in Table 3, CRN (Bica et al., 2020) and RMSN (Lim et al., 2018) makes the following three causal assumptions. (1) Consistency: yit(ait) = yit. (2) Sequential overlap (aka. positivity): Pr(ait = 1|ai,t−1, xit) > 0 whenever Pr(ai,t−1, xit) 6= 0. (3) No unobserved confounders: yit(0), yit(1) ⊥⊥ ait | xit, ai,t−1. In summary, CRN makes the same consistency assumption as SyncTwin. However, SyncTwin does not assume sequential overlap or no unobserved confounders while CRN does not make assumptions on the data generating model.\nThe sequential overlap assumption means that the individuals should have non-zero probability to change treatment status at any time t ≥ 0 given the history. 
This assumption is violated in the clinical observational study setting we outlined in Section 1, where the treatment group will continue to be treated and cannot switch to the control group after they are enrolled (and similarly for the control group). While the sequential overlap assumption allows these methods to handle more general situations where treatment switching do occur, their performance is negatively impacted in the “special” (yet still widely applicable) setting we consider in this work.\nWhile CRN makes strict no-unobserved-confounder assumption, SyncTwin allows certain types of unobserved confounders to occur. In particular, the latent factor vi in Equation 1 can be unobserved confounders. Being less reliant on no-unobserved-confounder assumption is important for medical applications because it is hard to assume the dataset captures all aspects of the patient health status. SyncTwin ’s ability to handle unobserved confounders vi relies on the validity of its data generating assumption, which we discuss next.\nWhy does SyncTwin not explicitly require overlap? The overlap assumption is commonly made in treatment effect estimation methods. We first give a very brief review of why two importance classes of methods need overlap. (1) For methods that rely on propensity scores, overlap makes sure that the propensity scores are not zero, which enables various forms of propensity weighting. (2) For methods that rely on covariate adjustment, overlap ensures that the conditional expectation E[yi|Xi, ai] is well-defined, i.e. the conditioning variables (Xi, ai) have non-zero probability. In comparison, SyncTwin relies on neither the propensity scores nor the explicit adjustment of covariates, and hence it does not make overlap assumption explicitly. However, as discussed in Proposition 1, SyncTwin requires the synthetic twin to match the representations dci ≈ 0, which implies c̃i ≈ ∑ j∈I0 bij c̃i for some bij — the target individual should be in or close to the convex hull formed by the controls in the representation space. This condition has a similar spirit to overlap (but very different mathematically). When overlap is satisfied there tend to be control individuals in the neighbourhood of the treated individual, making it easier to construct matching twins. Conversely, if overlap is violated, the controls will tend to far away from the treated individual, making it harder to construct a good twin." }, { "heading": "A.4 THE GENERALITY OF THE ASSUMED DATA GENERATING MODEL", "text": "SyncTwin assumes that the outcomes are generated by a latent factor model (Teräsvirta et al., 2010) with the latent factors ci learnable from covariates Xi and the latent factors vi that are unobserved confounders. We assume the dimensionality of ci and vi to be low compared with the number of time steps. Despite its seemingly simple form, the assumed latent factor model is very flexible because the factors are in fact latent variables.\nThe latent factor model is widely studied in Econometrics. In many real applications, the temporally observed variables naturally have a low-rank structure, thus can be described as a latent factor model (Abadie & Gardeazabal, 2003; Abadie et al., 2010). The latent factor model also captures many\nof well-studied scenarios as special cases (Finkel, 1995) such as the conventional additive unit and time fixed effects (yit(0) = qt + ci). 
Last but not least, It has also been shown that the low-rank latent factor models can well approximate many nonlinear latent variable models (Udell & Townsend, 2017).\nLatent factor models in the static setting are very familiar in the deep learning literature. Consider a deep feed-forward neural network that uses a linear output layer to predict some real-valued outcomes y ∈ RD in the static setting (notations used in this example are not related to the ones used in the rest of the paper). Denote the last layer of the neural network as h−1 ∈ RK ; it is easy to see that the neural network corresponds to a latent factor model i.e. y = Ah−1 + b, where h−1 is the latent factor. Note that this holds true for arbitrarily complicated feed-forward networks as long as the output layer is linear." }, { "heading": "A.5 ESTIMATING ITE FOR CONTROL AND NEW INDIVIDUALS", "text": "We have been focusing on estimating ITE for a treated individual i ∈ I1. The same approach can estimate the ITE for a control individual without loss of generality. After obtaining the representation c̃i for i ∈ I0, SyncTwin can use the treatment group j ∈ I1 to construct the synthetic twin by optimizing the matching loss Equation 8. The checking and estimation procedure remains the same.\nSyncTwin can also estimate the effect of a new individual i /∈ [N ]. The same idea still applies, but this time we need to construct two synthetic twins: one from the control group and one from the treatment group. The ITE estimation can be obtain using the difference between the two twins.\nSyncTwin also easily generalizes to the situation where there are A > 1 treatment groups each receiving a different treatment. In this case, the treatment indicator ai ∈ [0, 1, . . . , A]. For a target individual in any of the treatment groups, SyncTwin can construct its twin using the control group I0. The remaining steps are the same as the single treatment group case." }, { "heading": "A.6 UNRELATED WORKS WITH SIMILAR TERMINOLOGY", "text": "Several recent works in the deep learning ITE literature employ similar terminologies such as “matching” (Johansson et al., 2018; Kallus, 2018). However, they are fundamentally different from SyncTwin because they only work for static covariates and they try to match the overall distribution of the treated and control group rather than constructing a synthetic twin that matches one particular treated individual.\nThe Virtual Twin method (Foster et al., 2011) is designed for randomized clinical trials where there is no confounding (temporal or static). As a result, it cannot overcome the confounding bias when the problem is to estimate causal treatment effect from observational data.\nA.7 IMPLEMENTATION DETAILS OF THE BENCHMARK ALGORITHMS\nSynthetic control. We used the implementation of Synthetic Control in the R package Synth (1.1-5). The package is available at https://CRAN.R-project.org/package=Synth.\nRobust Synthetic Control. We used the implementation accompanied with the original paper (Amjad et al., 2018) at https://github.com/SucreRouge/synth_control. We optimized the hyperparameters on the validation set using the method described in Section 3.4.3 Amjad et al. (2018). The best hyperparameter setting was then applied to the test set.\nMC-NNM. We used the implementation in the R package SoftImpute (1.4) available at https: //CRAN.R-project.org/package=softImpute. 
The regularization strength λ is tuned on validation set using grid search before applied to the testing data.\nCounterfactual Recurrent Network and Recurrent Marginal Structural Network. We used the implementations by the authors Bica et al. (2020); Lim et al. (2018) at https://bitbucket. org/mvdschaar/mlforhealthlabpub/src/master/. The networks were trained on the training dataset. We experimented different hyper-parameter settings on the validation dataset, and applied the best setting to the testing data. We also found that the results are not sensitive to the hyperparameters.\nCounterfactual Gaussian Process. We used the implementation with GPy (GPy, since 2012), which is able to automatically optimize the hyperparameters such as the kernel width using the validation data.\nOne-nearest neighbour. We used our own implementation. Since no parameters need to be learned or tuned, the algorithm was directly applied on the testing dataset.\nSearch range of hyper-parameters\n1. Synthetic control: hyperparameters are optimized by Synth directly.\n2. Robust Synthetic control: num sc ∈ {1, 2, 3, 4, 5} 3. MC-NNM: C ∈ {3, 4, 5, 8, 10} 4. Counterfactual Recurrent Network: max alpha ∈ {0.1, 0.5, 0.8, 1}, hidden dimension H ∈ {32, 64, 128}\n5. Recurrent Marginal Structural Network: hidden dimension H ∈ {32, 64, 128} 6. Counterfactual Gaussian Process: hyperparameters are optimized by GPy directly." }, { "heading": "A.8 DETAILED TRAINING, VALIDATION AND INFERENCE PROCEDURE", "text": "As is standard in machine learning, we perform model training, validation and inference (testing) on three disjoint datasets, Dtr, Dva and Dte. We use Dtr0 and Dtr1 to denote the control and the treated in the training data and use similar notations for validation and testing data.\nTraining. On the training dataset Dtr0 , we learn the representation networks by optimizing Ltr = λrLr + λpLs, where Lr and Ls are the loss functions defined in Equation 6. The hyperparameter λr and λp controls the relative importance between the two losses. We provide an ablation study in Section 5.1 and perform detailed analysis on hyperparameter importance in Appendix A.13. The objective Ltr can be optimized using stochastic gradient descent. In particular, we used the ADAM algorithm with learning rate 0.001 (Kingma & Ba, 2014).\nValidation. Since we never observe the true ITE, we cannot evaluate the error of ITE estimation, ||τi − τ̂i||22. As a standard practice (Bica et al., 2020), we rely on the factual loss on observed outcomes: Lva = ∑ j∈Dva0\n||yi(0)− ŷi(0)||22, where ŷi(0) is defined as in Equation 2 and obtained as follows. We obtain the c̃i for all i ∈ Dva and then optimize the matching loss Lm(Dva0 ,Dva1 ) to find weights bvai . It is important to keep the encoder fixed throughout the optimization; otherwise it might overfit to Dva. Finally, ŷi(0) = ∑ j∈Dva0 bvaij yj(0).\nInference. The first steps of the inference procedure are the same as validation. We start by obtaining the representation c̃i for all i ∈ Dte and then obtain weights btei by optimizing the matching loss Lm(Dte0 ,Dte1 ) while keeping the encoder fixed. Using weights btei , the ITE for any i ∈ Dte1 can be estimated as τ̂i = yi(1)− ∑ j∈Dte0\nbteijyj(0) according to Equation 3. Similarly, we obtain ĉi, ŷit(0) according to in Equation 2. The expert can check dyi to evaluate the trustworthiness of τ̂i." }, { "heading": "A.9 OPTIMIZING THE MATCHING LOSS", "text": "Here we present a way to optimize the matching lossLm in Equation 8. 
To ensure the three constraints discussed in Section 4.2 while also allowing gradient-based learning algorithm, we reparameterize bi = Gumbel-Softmax(fm(zi), τ), where zi ∈ RN0 , fm(·) is a masking function that sets the element zii = −Inf to satisfy constraint (3). Gumbel-Softmax(·, τ) is the Gumbel softmax function\nAlgorithm 1: SyncTwin training procedure. Input: Training data set: Dtr0 , Dtr1 Input: Hyperparameters: λr, λp Input: Encoder, Decoder, Q̃ Input: Training iteration max itr, batch size batch size, Optimizer Output: Trained Encoder, Decoder and Q̃ Randomly initialize Encoder and Decoder; set Q̃ = 0 for itr ∈ (0,max itr] do\nRandomly draw a mini-batch of control units D0 ⊂ Dtr0 with batch size samples. Randomly draw a mini-batch of treated units D1 ⊂ Dtr1 with batch size samples. Evaluate training loss Ltr(D0,D1) = λrLr(D0,D1) + λpLs(D0) (defined in Equation 6) Calculate the gradient of Ltr(D0,D1) via back propagation. Update all encoder, decoder parameters and Q̃ using the Optimizer\nAlgorithm 2: SyncTwin inference procedure. Input: Testing data set: Dte0 , Dte1 Input: Trained Encoder Input: Training iteration max itr, batch size batch size, Optimizer Output: Estimated ITE τ̂i, ∀i ∈ Dte1 Initialize a size |Dte1 | by |Dte0 | matrix B = 0 as the weight matrix. Use Encoder to get representation c̃i, ∀i ∈ Dte0 ∪ Dte1 for itr ∈ (0,max itr] do\nRandomly draw a mini-batch of treated units D1 ⊂ Dtr1 with batch size samples. Evaluate matching loss Lm(Dte0 ,D1) (defined in Equation 8) Calculate the gradient of Lm(Dte0 ,D1) via back propagation. Update B using the Optimizer while keeping the Encoder fixed.\nUse weight matrix B to obtain τ̂i, ∀i ∈ Dte1 using Equation 3.\nwith temperature hyper-parameter τ (Jang et al., 2016). It is straightforward to verify that bk satisfies the three constraints while the loss Lm remains differentiable with respect to zk. We use the Gumbel softmax function instead of the standard softmax function because Gumbel softmax tend to produce sparse vector bk, which is highly desirable as we discussed in Section 4.\nThe memory footprint to directly optimize Lm is O ( (|D0| + |D1|) × |D0| ) , which can be further\nreduced to O ( |DB | × |D0| ) if we use stochastic gradient decent with a mini-batch DB ⊆ D0 ∪ D1." }, { "heading": "A.10 THE SIMULATION MODEL", "text": "In Equation 9, Rt is the LDL cholesterol level (outcome) and It is the dosage of statins. For each individual in the treatment group, one dose of statins (10 mg) is administered daily after the treatment starts, which gives dosage It = 0 if t ≤ t0 and It = 1 otherwise. K, H and D50 are constants fixed to the values reported in Faltaos et al. (2006). Kint ∈ R is a individual-specific time varying variable that summarizes a individual’s physiological status including serum creatinine, uric acid, serum creatine phosphokinase (CPK), and glycaemia. Pt and Dt are two intermediate temporal variables both affecting Rt." }, { "heading": "A.11 ADDITIONAL SIMULATION RESULTS", "text": "Table 5 shows the results under irregularly-measured covariates with varying degree of irregularity m (smaller m, more irregular and fewer covariates are observed). For methods that are unable to\ndeal with irregular covariates, we first impute the unobserved values using Probabilistic PCA before applying the algorithms (Hegde et al., 2019). SyncTwin achieves the best performance in all cases. Furthermore, SyncTwin’s performance deteriorates more slowly than the benchmarks when sampling becomes more irregular (larger m). 
This suggests that the encoder network in SyncTwin is able to learn good representations even from highly irregularly-measured sequences. Table 6 shows the results under various lengths of the observed covariates S (smaller S, shorter sequences are observed). Again SyncTwin achieves the best performance in all cases. As expected, SyncTwin makes smaller error when the observed sequence is longer. Note that this is not the case of CRN and RMSN — their performance deteriorates when the observed sequence is longer. This might indicate that these two methods are less able to learn good balancing representations (or balancing weights) when the sequence is longer." }, { "heading": "A.12 SPARSITY COMPARED WITH SYNTHETIC CONTROL", "text": "In Figure 4 we have shown visualy that SyncTwin produces sparser solution than SC. To quantify the differences, we report the Gini index ( ∑ ij bij(1 − bij)/N1), entropy ( ∑ ij −bij log(bij)/N1) and\nthe number of contributors used to construct the twin ( ∑ ij 1{bij > 0}/N1) in the simulation study. All three metrics reflect the sparsity of the learned weight vector (smaller more sparse). Table 7 shows that SyncTwin achieve sparser results that SC in all metrics considered. The full and ablated versions of SyncTwin have similar sparsity because the sparsity is regulated in the matching loss, which all versions share. It is worth pointing out that RSC and MC-NNM do not produce sparse weights and the weights do not need to be positive and sum to one (Amjad et al., 2018; Athey et al., 2018)." }, { "heading": "A.13 SENSITIVITY OF HYPER-PARAMETERS", "text": "It is beneficial to understand the network’s sensitivity to each hyper-parameter so as to effectively optimize them during validation. In addition to the standard hyper-parameters in deep learning (e.g. learning rate, batch size, etc.), SyncTwin also includes the following specific hyper-parameters: (1) τ , the temperature of the Gumbel-softmax function Appendix A.9, (2) λp in the training loss Ltr (since only the ratio between λp and λr matters, we keep λr = 1 and search different values of λp) , and (3) H , the dimension of the representation c̃i.\nHere we present a sensitivity analysis on the hyper-parameters H , λp and τ using the simulation framework detailed in Section 5.1. Here we present the results for N0 = 2000 and S = 15 although these results generalize to all the simulation settings we considered. The results are presented in Figure 6, where we can derive two insights.\nFirstly, the hyper-parameter τ is very important to the performance and need to be tuned carefully during validation. This is understandable because τ is the temperature parameter of the Gumbel softmax function and it directly controls the sparsity of matrix B. In comparison, hyper-parameter H and λp do not impact the performance in significant way. Therefore we recommend to use H = 40 and λp = 1 as the default.\nSecondly, we observe that the validation loss Lva closely tracks the error on ITE estimation (which is not directly observable in reality). These results support the use of Lva to validate models and perform hyper-parameter optimization.\nA.14 COMPUTATION TIME\nIn figure 7 we present the wall-clock computation time (in seconds) of SyncTwin under various simulation conditions — with the control group size N0 = (200, 1000, 2000) and the length of pre-enrollment period S = (15, 25, 45). The simulations were performed on a server with a Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz and a Nvidia(R) GeForce(TM) RTX 2080 Ti GPU. 
All simulations finished within 10 minutes. As expected, the computation time increases with respect to N0 and S as more data need to be processed. However, a 10-fold increase in N0 only approximately doubled the computation time, suggesting that SyncTwin scales well with sample size. In comparison, S seems to affect the computation time more because the encoder and decoder need to be trained on longer sequences." }, { "heading": "A.15 ADDITIONAL RESULTS IN THE CPRD STUDY", "text": "The treatment and control groups in the CPRD experiment are selected based on the selection criterion in Figure 8. We have followed all the guidelines listed in Dickerman et al. (2019) to make sure the selection process does not increase the confounding bias. The summary statistics of the treatment and control groups are listed below. We can clearly see a selection bias: the treatment group contains a much higher proportion of males and of people with previous cardiovascular or renal diseases.
A.16 COHORT SELECTION CRITERION IN THE CPRD STUDY" } ]
2020
SYNCTWIN: TRANSPARENT TREATMENT EFFECT ESTIMATION UNDER TEMPORAL CONFOUNDING
SP:8997ab419d35acd51ef50ef6265e5c37c468a2ac
[ "This paper proposes a method for obtaining probably-approximately correct (PAC) predictions given a pre-trained classifier. The PAC intervals are connected to calibration, and take the form of confidence intervals given the bin a prediction falls in. They demonstrate and explore two use cases: applying this technique to get faster inference in deep neural networks, and using the PAC predictor to do safe planning. Experiments in both of these cases show improvements in speed-accuracy or safety-accuracy tradeoffs, as compared to baselines." ]
A key challenge for deploying deep neural networks (DNNs) in safety critical settings is the need to provide rigorous ways to quantify their uncertainty. In this paper, we propose a novel algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees. Our approach uses Clopper-Pearson confidence intervals for the Binomial distribution in conjunction with the histogram binning approach to calibrated prediction. In addition, we demonstrate how our predicted confidences can be used to enable downstream guarantees in two settings: (i) fast DNN inference, where we demonstrate how to compose a fast but inaccurate DNN with an accurate but slow DNN in a rigorous way to improve performance without sacrificing accuracy, and (ii) safe planning, where we guarantee safety when using a DNN to predict whether a given action is safe based on visual observations. In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs.
[ { "affiliations": [], "name": "Sangdon Park" }, { "affiliations": [], "name": "Shuo Li" }, { "affiliations": [], "name": "Insup Lee" }, { "affiliations": [], "name": "Osbert Bastani" } ]
[ { "authors": [ "A.K. Akametalu", "J.F. Fisac", "J.H. Gillula", "S. Kaynama", "M.N. Zeilinger", "C.J. Tomlin" ], "title": "Reachability-based safe learning with gaussian processes", "venue": "In 53rd IEEE Conference on Decision and Control,", "year": 2014 }, { "authors": [ "Mohammed Alshiekh", "R. Bloem", "R. Ehlers", "Bettina Könighofer", "S. Niekum", "U. Topcu" ], "title": "Safe reinforcement learning via shielding", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "Osbert Bastani" ], "title": "Safe planning via model predictive shielding, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tolga Bolukbasi", "Joseph Wang", "Ofer Dekel", "Venkatesh Saligrama" ], "title": "Adaptive neural networks for efficient inference", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Lawrence D Brown", "T Tony Cai", "Anirban DasGupta" ], "title": "Interval estimation for a binomial proportion", "venue": "Statistical science,", "year": 2001 }, { "authors": [ "Maxime Cauchois", "Suyash Gupta", "Alnur Ali", "John C. Duchi" ], "title": "Robust validation: Confident predictions even when distributions shift, 2020", "venue": null, "year": 2020 }, { "authors": [ "Charles J Clopper", "Egon S Pearson" ], "title": "The use of confidence or fiducial limits illustrated in the case of the binomial", "venue": null, "year": 1934 }, { "authors": [ "Sarah Dean", "Nikolai Matni", "Benjamin Recht", "Vickie Ye" ], "title": "Robust guarantees for perception-based control, 2019", "venue": null, "year": 2019 }, { "authors": [ "Morris H DeGroot", "Stephen E Fienberg" ], "title": "The comparison and evaluation of forecasters", "venue": "Journal of the Royal Statistical Society: Series D (The Statistician),", "year": 1983 }, { "authors": [ "Andre Esteva", "Brett Kuprel", "Roberto A Novoa", "Justin Ko", "Susan M Swetter", "Helen M Blau", "Sebastian Thrun" ], "title": "Dermatologist-level classification of skin cancer", "venue": null, "year": 2017 }, { "authors": [ "Michael Figurnov", "Maxwell D Collins", "Yukun Zhu", "Li Zhang", "Jonathan Huang", "Dmitry Vetrov", "Ruslan Salakhutdinov" ], "title": "Spatially adaptive computation time for residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "J.F. Fisac", "A.K. Akametalu", "M.N. Zeilinger", "S. Kaynama", "J. Gillula", "C.J. Tomlin" ], "title": "A general safety framework for learning-based control in uncertain robotic systems", "venue": "IEEE Transactions on Automatic Control,", "year": 2019 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Varun Gulshan", "Lily Peng", "Marc Coram", "Martin C Stumpe", "Derek Wu", "Arunachalam Narayanaswamy", "Subhashini Venugopalan", "Kasumi Widner", "Tom Madams", "Jorge Cuadros" ], "title": "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus", "venue": "photographs. 
Jama,", "year": 2016 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "arXiv preprint arXiv:1706.04599,", "year": 2017 }, { "authors": [ "HO Hartley", "ER Fitch" ], "title": "A chart for the incomplete beta-function and the cumulative binomial distribution", "venue": null, "year": 1951 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Kris M Kitani", "Brian D Ziebart", "James Andrew Bagnell", "Martial Hebert" ], "title": "Activity forecasting", "venue": "In European Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "S. Li", "O. Bastani" ], "title": "Robust model predictive shielding for safe reinforcement learning with stochastic dynamics", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2020 }, { "authors": [ "Shuo Li", "Osbert Bastani" ], "title": "Robust model predictive shielding for safe reinforcement learning with stochastic dynamics", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2020 }, { "authors": [ "Peng Liao", "Kristjan Greenewald", "Predrag Klasnja", "Susan Murphy" ], "title": "Personalized heartsteps: A reinforcement learning algorithm for optimizing physical activity", "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,", "year": 2020 }, { "authors": [ "Allan H Murphy" ], "title": "Scalar and vector partitions of the probability score: Part i. 
two-state situation", "venue": "Journal of Applied Meteorology,", "year": 1972 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Sangdon Park", "Osbert Bastani", "Nikolai Matni", "Insup Lee" ], "title": "Pac confidence sets for deep neural networks via calibrated prediction", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sangdon Park", "Osbert Bastani", "James Weimer", "Insup Lee" ], "title": "Calibrated prediction with covariate shift via unsupervised domain adaptation", "venue": "In The 23rd International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "John Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "Advances in large margin classifiers,", "year": 1999 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Itay Safran", "Ohad Shamir" ], "title": "Spurious local minima are common in two-layer relu neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik", "Devi Parikh", "Dhruv Batra" ], "title": "Habitat: A Platform for Embodied AI Research", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Surat Teerapittayanon", "Bradley McDanel", "Hsiang-Tsung Kung" ], "title": "Branchynet: Fast inference via early exiting from deep neural networks", "venue": "In 2016 23rd International Conference on Pattern Recognition (ICPR),", "year": 2016 }, { "authors": [ "Ryan J Tibshirani", "Rina Foygel Barber", "Emmanuel Candes", "Aaditya Ramdas" ], "title": "Conformal prediction under covariate shift", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Leslie G Valiant" ], "title": "A theory of the learnable", "venue": "Communications of the ACM,", "year": 1984 }, { "authors": [ "Kim Wabersich", "Melanie Zeilinger" ], "title": "Linear model predictive safety certification for learningbased control", "venue": null, "year": 2018 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers", "venue": "Proceedings of the Eighteenth International Conference on Machine Learning. 
Citeseer,", "year": 2001 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Transforming classifier scores into accurate multiclass probability estimates", "venue": "In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2002 } ]
[ { "heading": "1 INTRODUCTION", "text": "Due to the recent success of machine learning, there has been increasing interest in using predictive models such as deep neural networks (DNNs) in safety-critical settings, such as robotics (e.g., obstacle detection (Ren et al., 2015) and forecasting (Kitani et al., 2012)) and healthcare (e.g., diagnosis (Gulshan et al., 2016; Esteva et al., 2017) and patient care management (Liao et al., 2020)).\nOne of the key challenges is the need to provide guarantees on the safety or performance of DNNs used in these settings. The potential for failure is inevitable when using DNNs, since they will inevitably make some mistakes in their predictions. Instead, our goal is to design tools for quantifying the uncertainty of these models; then, the overall system can estimate and account for the risk inherent in using the predictions made by these models. For instance, a medical decision-making system may want to fall back on a doctor when its prediction is uncertain whether its diagnosis is correct, or a robot may want to stop moving and ask a human for help if it is uncertain to act safely. Uncertainty estimates can also be useful for human decision-makers—e.g., for a doctor to decide whether to trust their intuition over the predicted diagnosis.\nWhile many DNNs provide confidences in their predictions, especially in the classification setting, these are often overconfident. This phenomenon is most likely because DNNs are designed to overfit the training data (e.g., to avoid local minima (Safran & Shamir, 2018)), which results in the predicted probabilities on the training data being very close to one for the correct prediction. Recent work has demonstrated how to calibrate the confidences to significantly reduce overconfidence (Guo et al., 2017). Intuitively, these techniques rescale the confidences on a held-out calibration set. Because they are only fitting a small number of parameters, they do not overfit the data as was the case in the original DNN training. However, these techniques do not provide theoretical guarantees on their correctness, which can be necessary in safety-critical settings to guarantee correctness.\nWe propose a novel algorithm for calibrated prediction in the classification setting that provides theoretical guarantees on the predicted confidences. We focus on on-distribution guarantees— i.e., where the test distribution is the same as the training distribution. In this setting, we can build on ideas from statistical learning theory to provide probably approximately correctness (PAC) guarantees (Valiant, 1984). Our approach is based on a calibrated prediction technique called histogram\nbinning (Zadrozny & Elkan, 2001), which rescales the confidences by binning them and then rescaling each bin independently. We use Clopper-Pearson bounds on the tails of the binomial distribution to obtain PAC upper/lower bounds on the predicted confidences.\nNext, we study how it enables theoretical guarantees in two applications. First, we consider the problem of speeding up DNN inference by composing a fast but inaccurate model with a slow but accurate model—i.e., by using the accurate model for inference only if the confidence of the inaccurate one is underconfident (Teerapittayanon et al., 2016). We use our algorithm to obtain guarantees on accuracy of the composed model. 
Second, for safe planning, we consider using a DNN to predict whether or not a given action (e.g., move forward) is safe (e.g., do not run into obstacles) given an observation (e.g., a camera image). The robot only continues to act if the predicted confidence is above some threshold. We use our algorithm to ensure safety with high probability. Finally, we evaluate the efficacy of our approach in the context of these applications.\nRelated work. Calibrated prediction (Murphy, 1972; DeGroot & Fienberg, 1983; Platt, 1999) has recently gained attention as a way to improve DNN confidences (Guo et al., 2017). Histogram binning is a non-parametric approach that sorts the data into finitely many bins and rescales the confidences per bin (Zadrozny & Elkan, 2001; 2002; Naeini et al., 2015). However, traditional approaches do not provide theoretical guarantees on the predicted confidences. There has been work on predicting confidence sets (i.e., predict a set of labels instead of a single label) with theoretical guarantees (Park et al., 2020a), but this approach does not provide the confidence of the most likely prediction, as is often desired. There has also been work providing guarantees on the overall calibration error (Kumar et al., 2019), but this approach does not provide per-prediction guarantees.\nThere has been work speeding up DNN inference (Hinton et al., 2015). One approach is to allow intermediate layers to be dynamically skipped (Teerapittayanon et al., 2016; Figurnov et al., 2017; Wang et al., 2018), which can be thought of as composing multiple models that share a backbone. Unlike our approach, they do not provide guarantees on the accuracy of the composed model.\nThere has also been work on safe learning-based control (Akametalu et al., 2014; Fisac et al., 2019; Bastani, 2019; Li & Bastani, 2020; Wabersich & Zeilinger, 2018; Alshiekh et al., 2018); however, these approaches are not applicable to perception-based control. The most closely related work is Dean et al. (2019), which handles perception, but they are restricted to known linear dynamics." }, { "heading": "2 PAC CONFIDENCE PREDICTION", "text": "In this section, we begin by formalizing the PAC confidence coverage prediction problem; then, we describe our algorithm for solving this problem based on histogram binning.\nCalibrated prediction. Let x ∈ X be an example and y ∈ Y be one of a finite label set, and let D be a distribution over X × Y . A confidence predictor is a model f̂ : X → PY , where PY is the space of probability distributions over labels. In particular, f̂(x)y is the predicted confidence that the true label for x is y. We let ŷ : X → Y be the corresponding label predictor—i.e., ŷ(x) := arg maxy∈Y f̂(x)y—and let p̂ : X → R≥0 be corresponding top-label confidence predictor— i.e., p̂(x) := maxy∈Y f̂(x)y . While traditional DNN classifiers are confidence predictors, a naively trained DNN is not reliable—i.e., predicted confidence does not match to the true confidence; recent work has studied heuristics for improving reliability (Guo et al., 2017). In contrast, our goal is to construct a confidence predictor that comes with theoretical guarantees.\nWe first introduce the definition of calibration (DeGroot & Fienberg, 1983; Zadrozny & Elkan, 2002; Park et al., 2020b)—i.e., what we mean for a predicted confidence to be “correct”. In many cases, the main quantity of interest is the confidence of the top prediction. 
Thus, we focus on ensuring that the top-label predicted confidence p̂(x) is calibrated (Guo et al., 2017); our approach can easily be extended to providing guarantees on all confidences predicted using f̂ . Then, we say a confidence predictor f̂ is well-calibrated with respect to distribution D if\nP(x,y)∼D [y = ŷ(x) | p̂(x) = t] = t (∀t ∈ [0, 1]).\nThat is, among all examples x such that the label prediction ŷ(x) has predicted confidence t = p̂(x), ŷ(x) is the correct label for exactly a t fraction of these examples. Using a change of variables (Park\net al., 2020b), f̂ being well-calibrated is equivalent to\np̂(x) = c∗ f̂ (x) := P(x′,y′)∼D [y ′ = ŷ(x′) | p̂(x′) = p̂(x)] (∀x ∈ X ). (1)\nThen, the goal of well-calibration is to make p̂ equal to c∗ f̂ . Note that f̂ appears on both sides of the equation p̂(x) = c∗\nf̂ (x)—implicitly in p̂—which is what makes it challenging to satisfy.\nIndeed, in general, it is unlikely that (1) holds exactly for all x. Instead, based on the idea of histogram binning (Zadrozny & Elkan, 2001), we consider a variant where we partition the data into a fixed number of bins and then construct confidence coverages separately for each bin. In particular, consider K bins B1, . . . , BK ⊆ [0, 1], where B1 = [0, b1] and Bk = (bk−1, bk] for k > 1. Here, K and 0 ≤ b1 ≤ · · · ≤ bK−1 ≤ bK = 1 are hyperparameters. Given f̂ , let κf̂ : X → {1, . . . ,K} to denote the index of the bin that contains p̂(x)—i.e., p̂(x) ∈ Bκf̂ (x).\nDefinition 1 We say f̂ is well-calibrated for a distribution D and bins B1, . . . , BK if\np̂(x) = cf̂ (x) := P(x′,y′)∼D\n[ y′ = ŷ(x′) ∣∣∣ p̂(x′) ∈ Bκf̂ (x)] (∀x ∈ X ), (2) where we refer to cf̂ (x) as the true confidence. Intuitively, this definition “coarsens” the calibration problem across the bins—i.e., rather than sorting the inputs x into a continuum of “bins” p̂(x) = t for each t ∈ [0, 1] as in (1), we sort them into a finite number of bins p̂(x) ∈ Bk; intuitively, we have c∗\nf̂ ≈ cf̂ if the bin sizes are close to zero. It may not be obvious what downstream guarantees\ncan be obtained based on this definition; we provide examples in Sections 3 & 4.\nProblem formulation. We formalize the problem of PAC calibration. We focus on the setting where the training and test distributions are identical—e.g., we cannot handle adversarial examples or changes in covariate distribution (e.g., common in reinforcement learning). Importantly, while we assume a pre-trained confidence predictor f̂ is given, we make no assumptions about f̂—e.g., it can be uncalibrated or heuristically calibrated. If f̂ performs poorly, then the predicted confidences will be close to 1/|Y|—i.e., express no confidence in the predictions. Thus, it is fine if f̂ is poorly calibrated; the important property is that the confidence predictor f̂ have similar true confidences.\nThe challenge in formalizing PAC calibration is that quantifying over all x in (2). One approach is to provide guarantees in expectation over x (Kumar et al., 2019); however, this approach does not provide guarantees for individual predictions.\nInstead, our goal is to find a set of predicted confidences that includes the true confidence with high probability. Of course, we could simply predict the interval [0, 1], which always contains the true confidence; thus, simultaneously want to make the size of the interval small. To this end, we consider a confidence coverage predictor Ĉ : X → 2R, where cf̂ (x) ∈ Ĉ(x) with high probability. 
In particular, Ĉ(x) outputs an interval [c, c] ⊆ R, where c ≤ c, instead of a set. We only consider a single interval (rather than disjoint intervals) since one suffices to localize the true confidence cf̂ .\nWe are interested in providing theoretical guarantees for an algorithm used to construct confidence coverage predictor Ĉ given a held-out calibration set Z ⊆ X × Y . In addition, we assume the algorithm is given a pretrained confidence predictor f̂ . Thus, we consider Ĉ as depending on Z and f̂ , which we denote by Ĉ(·; f̂ , Z). Then, we want Ĉ to satisfy the following guarantee:\nDefinition 2 Given δ ∈ R>0 and n ∈ N, Ĉ is probably approximately correct (PAC) if for any D,\nPZ∼Dn [ ∧ x∈X cf̂ (x) ∈ Ĉ(x; f̂ , Z) ] ≥ 1− δ. (3)\nNote that cf̂ depends on D. Here, “approximately correct” technically refers to the mean of\nĈ(x; f̂ , Z), which is an estimate of cf̂ (x); the interval captures the bound on the error of this estimate; see Appendix A for details. Furthermore, the conjunction over all x ∈ X may seem strong. We can obtain such a guarantee due to our binning strategy: the property cf̂ (x) ∈ Ĉ(x; f̂ , Z) only depends on the bin Bκf̂ (x), so the conjunction is really only over bins k ∈ {1, ...,K}.\nAlgorithm. We propose a confidence coverage predictor that satisfies the PAC property. The problem of estimating the confidence interval Ĉ(x) of the binned true confidence cf̂ (x) is closely related to the binomial proportion confidence interval estimation; consider a Bernoulli random variable b ∼ B := Bernoulli(θ) for any θ ∈ [0, 1], where b = 1 denotes a success and b = 0 denotes a failure, and θ is unknown. Given a sequence of observations b1:n := (b1, . . . , bn) ∼ Bn, the goal is to construct an interval Θ̂(b1:n) ⊆ R that includes θ with high probability—i.e.,\nPb1:n∼Bn [ θ ∈ Θ̂(b1:n) ] ≥ 1− α, (4)\nwhere α ∈ R>0 is a given confidence level. In particular, the Clopper-Pearson interval\nΘ̂CP(b1:n;α) := [ inf θ { θ ∣∣∣ Pθ [S ≥ s] ≥ α 2 } , sup\nθ\n{ θ ∣∣∣ Pθ [S ≤ s] ≥ α\n2\n}] ,\nguarantees (4) (Clopper & Pearson, 1934; Brown et al., 2001), where s = ∑n i=1 bi is the number of observed successes, n is the number of observations, and S is a Binomial random variable S ∼ Binomial(n, θ). Intuitively, the interval is constructed such that the number of observed success falls in the region with high-probability for any θ in the interval. The following expression is equivalent due to the relationship between the Binomial and Beta distributions (Hartley & Fitch, 1951; Brown et al., 2001)—i.e., Pθ[S ≥ s] = Iθ(s, n− s+ 1), where Iθ is the CDF of Beta(s, n− s+ 1):\nΘ̂CP(b1:n;α) = [α\n2 quantile of Beta(s, n− s+ 1),\n( 1− α\n2\n) quantile of Beta(s+ 1, n− s) ] .\nNow, for each of the K bins, we apply Θ̂CP with confidence level α = δK—i.e.,\nĈ(x; f̂ , Z, δ) := Θ̂CP ( Wκf̂ (x); δ\nK\n) where Wk := { 1(ŷ(x) = y) ∣∣∣ (x, y) ∈ Z s.t. κf̂ (x) = k} . Here, Wk is the set of observations of successes vs. failures corresponding to the subset of labeled examples (x, y) ∈ Z such that p̂(x) falls in the bin Bk, where a success is defined to be a correct prediction ŷ(x) = y. We note that for efficiency, the confidence interval for each of the K bins can be precomputed. Our construction of Ĉ satisfies the following; see Appendix B for a proof:\nTheorem 1 Our confidence coverage predictor Ĉ is PAC for any δ ∈ R>0 and n ∈ N.\nNote that Clopper-Pearson intervals are exact, ensuring the size of Ĉ for each bin is small in practice. 
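To make the construction concrete, the following is a minimal sketch of the binned Clopper–Pearson confidence coverage predictor described above. It assumes SciPy is available for the Beta quantiles; the function names (`clopper_pearson`, `fit_confidence_coverage`, `predict_interval`) and the array-based inputs are illustrative choices, not part of any released implementation.

```python
# Sketch of the per-bin Clopper-Pearson confidence coverage predictor.
import numpy as np
from scipy.stats import beta


def clopper_pearson(s, n, alpha):
    """Two-sided Clopper-Pearson interval for s successes out of n trials."""
    lo = 0.0 if s == 0 else beta.ppf(alpha / 2, s, n - s + 1)
    hi = 1.0 if s == n else beta.ppf(1 - alpha / 2, s + 1, n - s)
    return lo, hi


def fit_confidence_coverage(p_hat, correct, bin_edges, delta):
    """p_hat: top-label confidences on the calibration set (np.ndarray);
    correct: 1(y_hat(x) == y) for the same examples (np.ndarray of 0/1);
    bin_edges: [0, b_1, ..., b_K] with b_K = 1. Each bin gets level delta / K,
    so a union bound over the K bins gives the overall 1 - delta guarantee."""
    K = len(bin_edges) - 1
    intervals = []
    for k in range(K):
        in_bin = (p_hat > bin_edges[k]) & (p_hat <= bin_edges[k + 1])
        if k == 0:                       # B_1 = [0, b_1] is closed on the left
            in_bin |= (p_hat <= bin_edges[0])
        n, s = int(in_bin.sum()), int(correct[in_bin].sum())
        intervals.append(clopper_pearson(s, n, delta / K) if n > 0 else (0.0, 1.0))
    return intervals


def predict_interval(p_hat_x, bin_edges, intervals):
    """Constant-time lookup of the precomputed interval for the bin of p_hat(x)."""
    k = min(int(np.searchsorted(bin_edges[1:], p_hat_x)), len(intervals) - 1)
    return intervals[k]
```

In use, `fit_confidence_coverage` is run once on the held-out calibration set and the K intervals are cached, so that test-time prediction reduces to a bin lookup, matching the remark above that the interval for each bin can be precomputed.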
Finally, an important special case is when there is a single bin B = [0, 1]—i.e.,\nĈ0(x; f̂ , Z ′, δ) := Θ̂CP(W ; δ) where W := {1(ŷ(x′) = y′) | (x′, y′) ∈ Z ′}.\nNote that Ĉ0 does not depend on x, so we drop it—i.e., Ĉ0(f̂ , Z ′, δ) := Θ̂CP(W ; δ)—i.e., Ĉ0 computes the Clopper-Pearson interval over Z ′, which is a subset of the original set Z." }, { "heading": "3 APPLICATION TO FAST DNN INFERENCE", "text": "A key application of predicted confidences is to perform model composition to improve the running time of DNNs without sacrificing accuracy. The idea is to use a fast but inaccurate model when it is confident in its prediction, and switch to an accurate but slow model otherwise (Bolukbasi et al., 2017); we refer to the combination as the composed model. To further improve performance, we can have the two models share a backbone—i.e., the fast model shares the first few layers of the slow model (Teerapittayanon et al., 2016). We refer to the decision of whether to skip the slow model as the exit condition; then, our goal is to construct confidence thresholds for exit conditions in a way that provides theoretical guarantees on the overall accuracy.\nProblem setup. The early-stopping approach for fast DNN inference can be formalized as a sequence of branching classifiers organized in a cascading way—i.e.,\nŷC(x; γ1:M−1) :=\n{ ŷm(x) if ∧m−1 i=1 (p̂i(x) < γi) ∧ (p̂m(x) ≥ γm) (∀m ∈ {1, ...,M − 1})\nŷM (x) otherwise,\nwhere M is the number of branches, f̂m is the confidence predictor, and ŷm and p̂m are the associated label and top-label confidence predictor, respectively. For conciseness, we denote the exit condition of the mth branch by dm (i.e., dm(x) := 1( ∧m−1 i=1 (p̂i(x) < γi) ∧ (p̂m(x) ≥ γm))) with thresholds γ1, . . . , γm ∈ [0, 1]. The f̂m share a backbone and are trained in the standard way; see Appendix F.4 for details. Figure 1 illustrates the composed model for M = 4; the gray area represents the shared backbone. We refer to an overall model composed in this way as a cascading classifier.\nDesired error bound. Given ξ ∈ R>0, our goal is to choose γ1:M−1 := (γ1, . . . , γM−1) so the error difference of the cascading classifier ŷC and the slow classifier ŷM is at most ξ—i.e.,\nperr := P(x,y)∼D [ŷC(x) 6= y]−P(x,y)∼D [ŷM (x) 6= y] ≤ ξ. (5) To obtain the desired error bound, an example x exits at the mth branch if ŷm is likely to classify x correctly, allowing for at most ξ fraction of errors total. Intuitively, if the confidence of ŷm on x is sufficiently high, then ŷm(x) = y with high probability. In this case, ŷM either correctly classifies or misclassifies the same example; if the example is misclassified, it contributes to decrease perr; otherwise, we have ŷm(x) = y = ŷM (x) with high probability, which contributes to maintain perr.\nFast inference. To minimize running time, we prefer to allow higher error rates at the lower branches—i.e., we want to choose γm as small as possible at lower branches m.\nAlgorithm. Our algorithm takes prediction branches f̂m (for m ∈ {1, . . . ,M}), the desired relative error ξ ∈ [0, 1], a confidence level δ ∈ [0, 1], and a calibration set Z ⊆ X × Y , and outputs γ1:M−1 so that (5) holds with probability at least 1 − δ. It iteratively chooses the thresholds from γ1 to γM−1; at each step, it chooses γm as small as possible subject to perr ≤ ξ. Note that γm implicitly appears in perr in the constraint due to the dependence of dm(x) on γm. The challenge is enforcing the constraint since we cannot evaluate it. 
To this end, let\nem := P(x,y)∼D [ŷm(x) 6= y ∧ ŷm(x) 6= ŷM (x) ∧ dm(x) = 1] e′m := P(x,y)∼D [ŷM (x) 6= y ∧ ŷm(x) 6= ŷM (x) ∧ dm(x) = 1] ,\nthen it is possible to show that perr = ∑M−1 m=1 em − e′m (see proof of Theorem 2 in Appendix C). Then, we can compute bounds on em and e′m using the following:\nP [ŷm(x) = y | ŷm(x) 6= ŷM (x) ∧ dm(x) = 1] ∈ [cm, c̄m] := Ĉ0 ( f̂m, Zm,\nδ\n3(M − 1) ) P [ŷM (x) = y | ŷm(x) 6= ŷM (x) ∧ dm(x) = 1] ∈ [c′m, c̄′m] := Ĉ0 ( f̂M , Zm, δ\n3(M − 1) ) P [ŷm(x) 6= ŷM (x) ∧ dm(x) = 1] ∈ [rm, r̄m] := Θ̂CP ( Wm; δ\n3(M − 1)\n) ,\nwhere Zm := {(x, y) ∈ Z | ŷm(x) 6= ŷM (x) ∧ dm(x) = 1} Wm := {1(ŷm(x) 6= ŷM (x) ∧ dm(x) = 1) | (x, y) ∈ Z}. Thus, we have em ≤ c̄mr̄m and e′m ≥ c′mrm, in which case it suffices to sequentially solve\nγm = arg min γ∈[0,1] γ subj. to m∑ i=1 c̄ir̄i − c′iri ≤ ξ. (6)\nHere, c̄m, r̄m, cm, and rm are implicitly a function of γ, which we can optimize using line search. We have the following guarantee; see Appendix C for a proof:\nTheorem 2 We have perr ≤ ξ with probability at least 1− δ over Z ∼ Dn.\nMoreover, the proposed greedy algorithm (6) is actually optimal in reducing inference speed when M = 2. Intuitively, we are always better off in terms of inference time by classifying more examples using a faster model. In particular, we have the following theorem; see Appendix D for a proof:\nTheorem 3 If M = 2, γ∗1 minimizes (6), and the classifiers ŷm are faster for smaller m, then the resulting ŷC is the fastest cascading classifier among cascading classifiers that satisfy perr ≤ ξ." }, { "heading": "4 APPLICATION TO SAFE PLANNING", "text": "Robots acting in open world environments must rely on deep learning for tasks such as object recognition—e.g., detect objects in a camera image; providing guarantees on these predictions is critical for safe planning. Safety requires not just that the robot is safe while taking the action, but that it can safely come to a stop afterwards—e.g., that a robot can safely come to a stop before running into a wall. We consider a binary classification DNN trained to predict a probability f̂(x) ∈ [0, 1] of whether the robot is unsafe in this sense.1 If f̂(x) ≥ γ for some threshold γ ∈ [0, 1], then the robot comes to a stop (e.g., to ask a human for help). If the label 1(f̂(x) ≥ γ) correctly predicts safety, then this policy ensures safety as long as the robot starts from a safe state (Li & Bastani, 2020). We apply our approach to choose γ to ensure safety with high probability.\nProblem setup. Given a performant but potentially unsafe policy π̂ (e.g., a DNN policy trained to navigate to the goal), our goal is to override π̂ as needed to ensure safety. We assume that π̂ is trained in a simulator, and our goal is to ensure that π̂ is safe according to our model of the environment, which is already a challenging problem when π̂ is a deep neural network over perceptual inputs. In particular, we do not address the sim-to-real problem.\nLet x ∈ X be the states, Xsafe ⊆ X be the safe states (i.e., our goal is to ensure the robot stays in Xsafe), o ∈ O be the observations, u ∈ U be the actions, g : X × U → X be the (deterministic) dynamics, and h : X → O be the observation function. A state x is recoverable (denoted x ∈ Xrec) if the robot can use π̂ in state x and then safely come to a stop using a backup policy π0 (e.g., braking).\nThen, the shield policy uses π̂ if x ∈ Xrec, and π0 otherwise (Bastani, 2019). This policy guarantees safety as long as an initial state is recoverable—i.e., x0 ∈ Xrec. 
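A minimal sketch of this shield construction is given below, written as if a recoverability check were available (e.g., via model-based simulation when the dynamics are known); all names are illustrative rather than part of any released implementation.

```python
# Sketch of the shield policy: follow the performant policy only from
# recoverable states; otherwise switch to the backup policy and stay with it.
def make_shield_policy(pi_hat, pi_0, h, is_recoverable):
    """pi_hat / pi_0 act on observations o = h(x); is_recoverable(x) checks
    membership in X_rec, e.g. by simulating one step of pi_hat followed by pi_0."""
    switched = [False]   # once the backup policy is engaged, never switch back

    def pi_shield(x):
        if switched[0] or not is_recoverable(x):
            switched[0] = True
            return pi_0(h(x))    # backup policy, e.g. braking to a stop
        return pi_hat(h(x))      # performant but potentially unsafe policy

    return pi_shield
```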
The challenge is determining whether x ∈ Xrec. When we observe x, we can use model-based simulation to perform this check. However, in many settings, we only have access to observations—e.g., camera images or LIDAR scans—so this approach does not apply. Instead, we propose to train a DNN to predict recoverability—i.e.,\nŷ(o) := { 1 (“un-recoverable”) if f̂(o) ≥ γ 0 (“recoverable”) otherwise\nwhere o = h(x),\nwith the goal that ŷ(o) ≈ y∗(x) := 1(x 6∈ Xrec), resulting in the following the shield policy πshield:\nπshield(o) := { π̂(o) if ŷ(o) = 0 π0(o) otherwise.\nSafety guarantee. Our main goal is to choose γ so that πshield ensures safety with high probability— i.e., given ξ ∈ R>0 and any distribution D over initial states X0 ⊆ Xrec, we have\npunsafe := Pζ∼Dπshield [ζ 6⊆ Xsafe] ≤ ξ, (7)\nwhere ζ(x0, π) := (x0, x1, . . . ) is a rollout from x0 ∼ D generated using π—i.e., xt+1 = g(xt, π(h(xt))).2 We assume the rollout terminates either once the robot reaches its goal, or once it switches to π0 and comes to a stop; in particular, the robot never switches from π0 back to π̂.\nSuccess rate. To maximize the success rate (i.e., the rate at which the robot achieves its goal), we need to minimize how often πshield switches to π0, which corresponds to maximizing γ.\n1Since |Y| = 2, f̂ can be represented as a map f̂ : X → [0, 1]; the second component is simply 1− f̂(x). 2We can handle infinitely long rollouts, but in practice rollouts will be finite (but possibly arbitrarily long).\nand Dπshield is an induced distribution over rollouts ζ(x0, πshield).\nAlgorithm. Our algorithm takes the confidence predictor f̂ , desired bound ξ ∈ R>0 on the unsafety probability, confidence level δ ∈ [0, 1], calibration set W ⊆ X∞ of rollouts ζ ∼ Dπ̂ , and calibration set Z ⊆ O of samples from distribution D̃ described below; see Appendix F.5 for details on sampling ζ and constructing W , Z, and D̃. We want to maximize γ subject to punsafe ≤ ξ, where punsafe is implicitly a function of γ. However, we cannot evaluate punsafe, so we need an upper bound.\nTo this end, consider a rollout that the first unsafe state is encountered on step t (i.e., xt 6∈ Xsafe but xi ∈ Xsafe for all i < t), which we call an unsafe rollout, and denote the event that the unsafe rollout is encountered by Et; we exploit the unsafe rollouts to bound punsafe. In particular, let pt := Pζ∼Dπ̂ [Et], and let p̄ := ∑∞ t=0 pt be the probability that a rollout is unsafe. Then, consider a new\ndistribution D̃ overO with a probability density function pD̃(o) := ∑∞ t=0 pDπ̂ (o | Et) ·pt/p̄, where pDπ̂ is the original probability density function of Dπ̂; in particular, we can draw an observation o ∼ D̃ by sampling the observation of the first unsafe state from a rollout sample (and rejecting if the entire rollout is safe). Then, we can show the following (see a proof of Theorem 4 in Appendix E):\npunsafe ≤ Po∼D̃[ŷ(o) = 0] · p̄ =: p̄unsafe. (8)\nWe use our confidence coverage predictor Ĉ0 to compute bounds on p̄unsafe—i.e., Po∼D̃ [ŷ(o) = 1] ∈ [c, c̄] := Ĉ0 ( f̂ , Z ′, δ\n2\n) where Z ′ := {(o, 1) | o ∈ Z}\np̄ ∈ [r, r̄] := Θ̂CP ( W ′, δ\n2\n) where W ′ := {1(ζ 6⊆ Xsafe) | ζ ∈W}.\nHere, Z ′ is a labeled version of Z, where “1” denotes “un-recoverable”, n := |W |, and n′ := |Z|, where n ≥ n′. Then, we have p̄unsafe ≤ r̄ · (1− c), so it suffices to solve the following problem:\nγ := arg max γ′∈[0,1]\nγ′ subj. to r̄ · (1− c) ≤ ξ (9)\nHere, c is implicitly a function of γ′; thus, we use line search to solve this optimization problem. 
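The following is a minimal sketch of this threshold selection, using a grid-based line search over γ and two Clopper–Pearson bounds at level δ/2 each; the helper and variable names are illustrative, and the grid resolution is an assumption rather than a value taken from the paper.

```python
# Sketch of the line search in (9): pick the largest gamma for which
# r_bar * (1 - c_lower) <= xi, where c_lower lower-bounds P[f_hat(o) >= gamma]
# over observations drawn from the unsafe-first-state distribution D_tilde,
# and r_bar upper-bounds the probability that a rollout of pi_hat is unsafe.
import numpy as np
from scipy.stats import beta


def cp_interval(s, n, alpha):
    lo = 0.0 if s == 0 else beta.ppf(alpha / 2, s, n - s + 1)
    hi = 1.0 if s == n else beta.ppf(1 - alpha / 2, s + 1, n - s)
    return lo, hi


def choose_gamma(scores, rollout_unsafe, xi, delta, grid=1000):
    """scores: f_hat(o) for o in Z (sampled from D_tilde);
    rollout_unsafe: 1(zeta not contained in X_safe) for zeta in W."""
    scores = np.asarray(scores)
    _, r_bar = cp_interval(int(np.sum(rollout_unsafe)), len(rollout_unsafe), delta / 2)

    best_gamma = 0.0   # most conservative fallback: always hand control to pi_0
    n = len(scores)
    for gamma in np.linspace(0.0, 1.0, grid + 1):
        s = int(np.sum(scores >= gamma))            # predicted un-recoverable
        c_lower, _ = cp_interval(s, n, delta / 2)   # lower bound on P[y_hat = 1]
        if r_bar * (1.0 - c_lower) <= xi:
            best_gamma = max(best_gamma, gamma)
    return best_gamma
```

Because the number of observations predicted un-recoverable can only decrease as γ grows, the lower bound c is non-increasing in γ, so the feasible set is an interval and returning the largest feasible grid point recovers the line-search solution.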
We have the following safety guarantee, see Appendix E for a proof:\nTheorem 4 We have punsafe ≤ ξ with probability at least 1− δ over W ∼ Dnπ̂ and Z ∼ D̃n ′ ." }, { "heading": "5 EXPERIMENTS", "text": "We demonstrate that how our proposed approach can be used to obtain provable guarantees in our two applications: fast DNN inference and safe planning. Additional results are in Appendix G." }, { "heading": "5.1 CALIBRATION", "text": "We illustrate the calibration properties of our approach using reliability diagrams, which show the empirical accuracy of each bin as a function of the predicted confidence (Guo et al., 2017). Ideally, the accuracy should equal the predicted confidence, so the ideal curve is the line y = x. To draw our predicted confidence intervals in these plots, we need to rescale them; see Appendix F.3.\nSetup. We use the ImageNet dataset (Russakovsky et al., 2015) and ResNet101 (He et al., 2016) for evaluation. We split the ImageNet validation set into 20, 000 calibration and 10, 000 test images.\nBaselines. We consider three baselines: (i) naïve softmax of f̂ , (ii) temperature scaling (Guo et al., 2017), and (iii) histogram binning (Zadrozny & Elkan, 2001); see Appendix F.2 for details. For histogram binning and our approach, we use K = 20 bins of the same size.\nMetric. We use expected calibration error (ECE) and reliability diagrams (see Appendix F.3).\nResults. Results are shown in Figure 2. The ECE of the naïve softmax baseline is 4.79% (Figure 2a), of temperature scaling enhances this to 1.66% (Figure 2b), and of histogram binning is 0.99% (Figure 2c). Our approach predicts an interval that include the empirical accuracy in all bins (solid red lines in Figure 2c); furthermore, the upper/lower bounds of the ECE over values in our bins is [0.0%, 3.76%], which includes zero ECE. See Appendix G.1 for additional results." }, { "heading": "5.2 FAST DNN INFERENCE", "text": "Setup. We use the same ImageNet setup along with ResNet101 as the calibration task. For the cascading classifier, we use the original ResNet101 as the slow network, and add a single exit branch (i.e., M = 2) at a quarter of the way from the input layer. We train the newly added branch using the standard procedure for training ResNet101.\nBaselines. We compare to naïve softmax and to calibrated prediction via temperature scaling, both using a threshold γ1 = 1 − ξ ′ , where ξ ′ is the sum of ξ and the validation error of the slow model; intuitively, this threshold is the one we would use if the predicted probabilities are perfectly calibrated. We also compare to histogram binning—i.e., our approach but using the means of each bin instead of the upper/lower bounds. See Appendix F.2 for details.\nMetrics. First, we measure test set top-1 classification error (i.e., 1 − accuracy), which we want to guarantee this lower than a desired error (i.e., the error of the slow model and desired relative error ξ). To measure inference time, we consider the average number of multiplication-accumulation operations (MACs) used in inference per example. Note that the MACs are averaged over all examples in the test set since the combined model may use different numbers of MACs for different examples.\nResults. The comparison results with the baselines are shown in Figure 3a. The original neural network model is denoted by “slow network”, our approach (6) by “rigorous”, and our baseline by “(1 − ξ′)-softmax”, “(1 − ξ′)-temp.”, and “hist. bin.”. For each method, we plot the classification error and time in MACs. 
The desired error upper bound is plotted as a dotted line; the goal is for the classification error to be lower than this line. As can be seen, our method is guaranteed to achieve the desired error, while improving the inference time by 32% compared to the slow model. On the other hand, the histogram baseline improves the inference time but fails to satisfy the desired error. Asymptotically, histogram binning is guaranteed to be perfectly calibrated, but it makes mistakes due to finite sample errors. The other baselines do not improve inference time. Next, Figure 3b shows the classification error as we vary the desired relative error ξ; our approach always achieves the desired error on the test set, and is often very close (which maximizes speed). However, the\nbaselines often violate the desired error bound. Finally, the MACs metric is only an approximation of the actual inference time. To complement MACs, we also measure CPU and GPU time using the PyTorch profiler. In Figure 3c, we show the inference times for each method, where trends are as before; our approach improves running time by 54%, while only reducing classification error by 1%. The histogram baseline is faster than our approach, but does not satisfy the error guarantee. These results include the time needed to compute the intervals, which is negligible." }, { "heading": "5.3 SAFE PLANNING", "text": "Setup. We evaluate on AI Habitat (Savva et al., 2019), an indoor robot simulator that provides agents with observations o = h(x) that are RGB camera images. The safety constraint is to avoid colliding with obstacles such as furniture in the environment. We use the learned policy π̂ available with the environment. Then, we train a recoverability predictor, trained using 500 rollouts with a horizon of 100. We calibrate this model on an additional n rollouts.\nBaselines. We compare to three baselines: (i) histogram binning—i.e., our approach but using the means of each bin rather than upper/lower bounds, (ii) a naïve approach of choosing γ = 0.5, and (iii) a naïve but adaptive approach of choosing γ = ξ, called “ξ-naïve”.\nMetrics. We measure both the safety rate and the success rate; in particular, a rollout is successful if the robot reaches its goal state, and a rollout is safe if it does not reach any unsafe states.\nResults. We show results in Figure 4a. The desire safety rate ξ is shown by the dotted line—i.e., we expect the safety rate to be above this line. As can be seen, our approach achieves the desired safety rate. While it sacrifices success rate, this is because the underlying learned policy π̂ is frequently unsafe; in particular, it is only safe in about 30% of rollouts. The naïve approach fails to satisfy the safety constraint. The ξ-naïve approach tends to be optimistic, and also fails when ξ = 0.03 (Figure 4b). The histogram baseline performs similarly to our approach. The main benefit of our approach is providing the absolute guarantee on safety, which the histogram baseline does not provide. Thus, in this case, our approach can provide this guarantee while achieving similar performance. Figure 4b shows the safety rate as we vary the desired safety rate ξ. Both our approach and the baseline satisfy the desired safety guarantee, whereas the naive approaches do not always do so." }, { "heading": "6 CONCLUSION", "text": "We have proposed a novel algorithm for calibrated prediction that provides PAC guarantees, and demonstrated how our approach can be applied to fast DNN inference and safe planning. 
There are many directions for future work—e.g., leveraging these techniques in more application domains and extending our approach to settings with distribution shift (see Appendix F.1 for a discussion)." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by AFRL/DARPA FA8750-18-C-0090, ARO W911NF-20-1-0080, DARPA FA8750-19-2-0201, and NSF CCF 1910769. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Air Force Research Laboratory (AFRL), the Army Research Office (ARO), the Defense Advanced Research Projects Agency (DARPA), or the Department of Defense, or the United States Government." }, { "heading": "A CONNECTION TO PAC LEARNING THEORY", "text": "We explain the connection to PAC learning theory. First, note that we can represent Ĉ as a confidence interval around the empirical estimate of cf̂ (x)—i.e., ĉf̂ (x) := ∑ (x′,y′)∈Sx 1(y\n′ = ŷ(x′))/|Sx|, where Sx = {(x′, y′) | p̂(x′) ∈ Bκf̂ (x)}. Then, we can write\nĈ(x) = [ĉf̂ (x)− x, ĉf̂ (x) + ̄x].\nIn this case, (3) is equivalent to\nPZ∼Dn [ ∧ x∈X ĉf̂ (x)− κf̂ (x) ≤ cf̂ (x) ≤ ĉf̂ (x) + ̄κf̂ (x) ] ≥ 1− δ, (10)\nfor some 1, ̄1, . . . , K , ̄K . In this bound, “approximately” refers to the fact that the empirical estimate ĉf̂ (x) is within of the true value cf̂ (x), and “probably” refers to the fact that this error bound holds with high probability over the training data Z ∼ Dn. By abuse of terminology, we refer to the confidence interval predictor Ĉ as PAC rather than just ĉf̂ (x).\nAlternatively, we also have the following connection to PAC learning theory:\nDefinition 3 Given , δ ∈ R>0 and n ∈ N, Ĉ is probably approximately correct (PAC) if, for any distribution D, we have\nPZ∼Dn [ Px∼D [ cf̂ (x) ∈ Ĉ(x; f̂ , Z) ] ≥ 1− ] ≥ 1− δ. (11)\nThe following theorem shows that the proposed confidence coverage predictor Ĉ satisfies the PAC guarantee in Definition 3.\nTheorem 5 Our confidence coverage predictor Ĉ satisfies Definition 3 for all , δ ∈ R>0, and n ∈ N.\nProof. We exploit the independence of each bin for the proof. Let θκf̂ (x) := cf̂ (x), which is the parameter of the Binomial distribution of the κf̂ (x)th bin, the following holds:\nP Z∼Dn [ P x∼D [ cf̂ (x) ∈ Ĉ(x; f̂ , Z, δ) ] ≥ 1− ] = P Z∼Dn [ P x∼D [ cf̂ (x) ∈ Ĉ(x; f̂ , Z, δ) ∧ K∨ k=1 p̂(x) ∈ Bk ] ≥ 1− ]\n= P Z∼Dn [ K∑ k=1 P x∼D [ cf̂ (x) ∈ Ĉ(x; f̂ , Z, δ) ∧ p̂(x) ∈ Bk ] ≥ 1− ]\n= P Z∼Dn [ K∑ k=1 P x∼D [ cf̂ (x) ∈ Ĉ(x; f̂ , Z, δ) ∣∣∣ p̂(x) ∈ Bk] P x∼D [p̂(x) ∈ Bk] ≥ 1− ]\n= P Z∼Dn [ K∑ k=1 P x∼D [ θk ∈ Θ̂CP ( Wk; δ K ) ∣∣∣∣ p̂(x) ∈ Bk] P x∼D [p̂(x) ∈ Bk] ≥ 1− ]\n= P Z∼Dn [ K∑ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] P x∼D [p̂(x) ∈ Bk] ≥ 1− ]\n≥ P Z∼Dn [ K∑ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] P x∼D [p̂(x) ∈ Bk] ≥ 1−\n∧ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\n= P Z∼Dn [ K∑ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] P x∼D [p̂(x) ∈ Bk] ≥ 1− ∣∣∣∣∣ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\nP Z∼Dn [ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\n= P Z∼Dn [ K∑ k=1 P x∼D [p̂(x) ∈ Bk] ≥ 1− ∣∣∣∣∣ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\nP Z∼Dn [ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\n= P Z∼Dn\n[ 1 ≥ 1− ∣∣∣∣∣ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ] P Z∼Dn [ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\n= P Z∼Dn [ K∧ k=1 1 [ θk ∈ Θ̂CP ( Wk; δ K )] = 1 ]\n= P Z∼Dn [ K∧ k=1 θk ∈ Θ̂CP ( Wk; δ K )] ≥ 1− δ,\nwhere the last inequality holds by the union bound." }, { "heading": "B PROOF OF THEOREM 1", "text": "We prove this by exploiting the independence of each bin. 
Recall that Ĉ(x) := [ĉf̂ (x)− x, ĉf̂ (x) + ̄x], and the interval is obtained by applying the Clopper-Pearson interval with confidence level δK at each bin. Then, the following holds due the Clopper-Pearson interval for all k ∈ {1, 2, . . . ,K}:\nP [ |cf̂ ,k − ĉf̂ ,k| > k ] ≤ δ K\nwhere cf̂ ,k := cf̂ (x) and ĉf̂ ,k := ĉf̂ (x) for x such that κf̂ (x) = k, and k := max( x, ̄x). By applying the union bound, the following also holds:\nP [ K∧ k=1 |cf̂ ,k − ĉf̂ ,k| > k ] ≤ δ,\nConsidering the fact that X is partitioned into K spaces due to the binning and the equivalence form (10) of the PAC criterion in Definition 3, the claimed statement holds." }, { "heading": "C PROOF OF THEOREM 2", "text": "We drop probabilities over (x, y) ∼ D. First, we decompose the error of a cascading classifier P [ŷC(x) 6= y] as follows:\nP [ŷC(x) 6= y] = P [ ŷC(x) 6= y ∧ ( ŷC(x) = ŷM (x) ∨ ŷC(x) 6= ŷM (x) )] = P [( ŷC(x) 6= y ∧ ŷC(x) = ŷM (x) ) ∨ ( ŷC(x) 6= y ∧ ŷC(x) 6= ŷM (x)\n)] = P [ŷC(x) 6= y ∧ ŷC(x) = ŷM (x)] +P [ŷC(x) 6= y ∧ ŷC(x) 6= ŷM (x)] ,\nwhere the last equality holds since the events of ŷC(x) = ŷM (x) and of ŷC(x) 6= ŷM (x) are disjoint. Similarly, for the error of a slow classifier P [ŷM (x) 6= y], we have:\nP [ŷM (x) 6= y] = P [ŷM (x) 6= y ∧ ŷC(x) = ŷM (x)] +P [ŷM (x) 6= y ∧ ŷC(x) 6= ŷM (x)] .\nThus, the error difference can be represented as follows:\nP [ŷC(x) 6= y]−P [ŷM (x) 6= y] = P [ŷC(x) 6= y ∧ ŷC(x) 6= ŷM (x)]−P [ŷM (x) 6= y ∧ ŷC(x) 6= ŷM (x)] . (12)\nTo complete the proof, we need to upper bound (12) by ξ. Define the following events:\nDm := m−1∧ i=1 (p̂i(x) < γi) ∧ p̂m(x) ≥ γm (∀m ∈ {1, ...,M − 1})\nDM := M−1∧ i=1 (p̂i(x) < γi)\nEC := ŷC(x) 6= ŷM (x) Em := ŷm(x) 6= ŷM (x) (∀m ∈ {1, ...,M − 1}) FC := ŷC(x) 6= y Fm := ŷm(x) 6= y (∀m ∈ {1, ...,M − 1}) G := ŷM (x) 6= y,\nwhere D1, D2, . . . , DM form a partition of a sample space. Then, we have: P [ŷC(x) 6= y ∧ ŷC(x) 6= ŷM (x)] = P [FC ∧ EC ]\n= P [ FC ∧ EC ∧\nM∨ m=1 Dm\n]\n= P [ M∨ m=1 (FC ∧ EC ∧Dm) ]\n= M∑ m=1 P [FC ∧ EC ∧Dm]\n= M∑ m=1 P [Fm ∧ Em ∧Dm]\n= M∑ m=1 P [Fm | Em ∧Dm] ·P [Em ∧Dm] ,\nSimilarly, we have:\nP [ŷM (x) 6= y ∧ ŷC(x) 6= ŷM (x)] = M∑ m=1 P [G | Em ∧Dm] ·P [Em ∧Dm] .\nThus, (12) can be rewritten as follows: P [ŷC(x) 6= y]−P [ŷM (x) 6= y]\n= M∑ m=1 (P [Fm | Em ∧Dm] ·P [Em ∧Dm]−P [G | Em ∧Dm] ·P [Em ∧Dm])\n= M∑ m=1 (em − e′m)\n= M−1∑ m=1 (em − e′m)\n≤ ξ, where the last equality holds since eM − e′M = 0, and the last inequality holds due to (6) with probability at least 1− δ, thus proves the claim." }, { "heading": "D PROOF OF THEOREM 3", "text": "Suppose there is γ ′ 1 which is different to γ ∗ 1 and produces a faster cascading classifier than the cascading classifier with γ∗1 . Since γ ∗ 1 is the optimal solution of (6), γ ′ 1 > γ ∗ 1 . This further implies that the less number of examples exits at the first branch of the cascading classifier with γ′1, but these examples are classified by the upper, slower branch. This means that the overall inference speed of the cascading classifier with γ′1 is slower then that with γ ∗ 1 , which leads to a contradiction." }, { "heading": "E PROOF OF THEOREM 4", "text": "For clarity, we use r to denote a state x is “recoverable” (i.e., y∗(x) = 0) and u to denote a state x is “un-recoverable” (i.e., y∗(x) = 1). Now, note that a rollout ζ(x0, πshield) := (x0, x1, . . . 
) is unsafe if (i) at some step t, we have y∗(xt) = u (i.e., xt is not recoverable), yet ŷ(ot) = r, where ot = h(xt) (i.e., ŷ predicts xt is recoverable), and furthermore (ii) for every step i ≤ t − 1, y∗(xi) = ŷ(oi) = r—i.e.,\npunsafe = Pξ∼Dπshield [ ∞∨ t=0 ( t−1∧ i=0 ( y∗(xi) = r ∧ ŷ(oi) = r ) ∧ ( y∗(xt) = u ∧ ŷ(ot) = r ))] . (13)\nCondition (i) is captured by the second parenthetical inside the probability; intuitively, it says that ŷ(ot) is a false negative. Condition (ii) is captured by the first parenthetical inside the probability; intuitively, it says that ŷ(oi) is a true negative for any i ≤ t− 1. Next, let the event Et be\nEt := t−1∧ i=0 y∗(xi) = r ∧ y∗(xt) = u,\nthen the following holds:\nPξ∼Dπshield [ ∞∨ t=0 ( t−1∧ i=0 ( y∗(xi) = r ∧ ŷ(oi) = r ) ∧ ( y∗(xt) = u ∧ ŷ(ot) = r ))]\n= Pξ∼Dπshield [ ∞∨ t=0 ( t−1∧ i=0 ( y∗(xi) = r ∧ y∗(xt) = u ) ∧ ( t−1∧ i=0 ŷ(oi) = r ∧ ŷ(ot) = r ))]\n= Pξ∼Dπshield [ ∞∨ t=0 ( Et ∧ t−1∧ i=0 ŷ(oi) = r ∧ ŷ(ot) = r )]\n≤ Pξ∼Dπshield [ ∞∨ t=0 ( Et ∧ ŷ(ot) = r )] .\nRecall that p̄ := ∑∞ t=0Pξ∼Dπ̂ [Et] and pD̃(o) := ∑∞ t=0 pDπ̂ (o | Et) · Pξ∼Dπ̂ [Et]/p̄; then we can upper-bound (13) as follows:\npunsafe ≤ Pξ∼Dπshield [ ∞∨ t=0 ( Et ∧ ŷ(ot) = r )]\n= ∞∑ t=0 Pξ∼Dπshield [Et ∧ ŷ(ot) = r]\n= ∞∑ t=0 Pξ∼Dπshield [ŷ(ot) = r | Et] ·Pξ∼Dπshield [Et]\n= ∞∑ t=0 Pξ∼Dπshield [ŷ(o) = r | Et] ·Pξ∼Dπshield [Et]\n= ∞∑ t=0 ∫ 1(ŷ(o) = r) · pDπshield (o | Et) ·Pξ∼Dπshield [Et] do\n= ∫ 1(ŷ(o) = r) ∞∑ t=0 pDπshield (o | Et) ·Pξ∼Dπshield [Et] do\n≤ ∫ 1(ŷ(o) = r) ∞∑ t=0 pDπ̂ (o | Et) ·Pξ∼Dπ̂ [Et] do\n= ∫ 1(ŷ(o) = r)pD̃(o)p̄ do\n= p̄ ·Po∼D̃[ŷ(o) = r],\nwhere we use the fact that Et are disjoint by construction for the first equality, and we use o without time index t for the third equality since it clearly represents the last observation if it is conditioned on Et. Moreover, the last inequality holds due to the following: (i) pDπshield (o | Et) = pDπ̂ (o | Et), since Et implies that the backup policy of πshield isn’t activated up to the step t, so πshield = π̂, and (ii) Pξ∼Dπshield [Et] ≤ Pξ∼Dπ̂ [Et], since πshield is less likely to reach unsafe states by its design than π̂. Thus, the constraint in (9) implies punsafe ≤ ξ with probability at least 1− δ, so the claim follows." }, { "heading": "F ADDITIONAL DISCUSSION", "text": "F.1 LIMITATION TO ON-DISTRIBUTION SETTING\nOur PAC guarantees (i.e., Theorems 1 & 5) transfer to the test distribution if it is identical to the validation distribution. We believe that providing theoretical guarantees for out-of-distribution data is an important direction; however, we believe that our work is an important stepping stone towards this goal. In particular, to the best of our knowledge, we do not know of any existing work that provides theoretical guarantees on calibrated probabilities even for the in-distribution case. One possible direction is to use our approach in conjunction with covariate shift detectors—e.g., (Gretton et al., 2012). Alternatively, it may be possible to directly incorporate ideas from recent work on calibrated prediction with covariate shift (Park et al., 2020b) or uncertainty set prediction with covariate shift (Cauchois et al., 2020; Tibshirani et al., 2019). In particular, we can use importance weighting q(x)/p(x), where p(x) is the training distribution and q(x) is the test distribution, to reweight our training examples, enabling us to transfer our guarantees from the training set to the test distribution. The key challenge is when these weights are unknown. 
In this case, we can estimate them given a set of unlabeled examples from the test distribution (Park et al., 2020b), but we then need to account for the error in our estimates.\nF.2 BASELINES\nThe following includes brief descriptions on baselines that we used in experiments.\nHistogram binning. This algorithm calibrates the top-label confidence prediction of f̂ by sorting the calibration examples (x, y) into bins Bi based on their predicted top-label confidence—i.e., (x, y) is associated with Bi if p̂(x) ∈ Bi. Then, for each bin, it computes the empirical confidence p̂i := 1 |Si| ∑\n(x,y)∈Si 1(ŷ(x) = y), where Si is the set of labeled examples that are associated with bin Bi—i.e., the empirical counterpart of the true confidence in (2). Finally, during the test-time, it returns a predicted confidence p̂i for all future test examples x if p̂(x) ∈ Bi.\n(1 − ξ′)-softmax. In fast DNN inference, a threshold can be heuristically chosen based on the desired relative error ξ and the validation error of the slow model. In particular, when a cascading classifier consists of two branches—i.e., M = 2, the threshold of the first branch is chosen by γ1 = 1 − ξ ′ , where ξ ′ is the sum of ξ and the validation error of the slow model. We call this approach (1− ξ′)-softmax.\n(1 − ξ′)-temperature scaling. A more advanced approach is to first calibrate each branch to get better confidence. We consider using temperature scaling to do so—i.e., we first calibrate each branch using the temperature scaling, and then use the branch threshold by γ1 = 1 − ξ ′ when M = 2. We call this approach (1− ξ′)-temperature scaling.\nF.3 CALIBRATION: INDUCED INTERVALS FOR ECE AND RELIABILITY DIAGRAM\nECE. The expected calibration error (ECE), which is one way to measure calibration performance, is defined as follows:\nECE := J∑ j=1 |Sj | |S| ∣∣∣∣∣∣ 1|Sj | ∑\n(x,y)∈Sj\np̂(x)− 1 |Sj | ∑ (x,y)∈Sj 1(ŷ(x) = y) ∣∣∣∣∣∣ , where J is the total number of bins for ECE, S ⊆ X × Y is the evaluation set, and Sj is the set of labeled examples associated to the jth bin—i.e., (x, y) ∈ Sj if p̂(x) ∈ Bj .\nA confidence coverage predictor Ĉ(x) outputs an interval instead of a point estimate p̂(x) of the confidence. To evaluate the confidence coverage predictor, we remap intervals using the ECE formulation. In particular, we equivalently represents Ĉ(x) by a mean confidence ĉf̂ (x) and differences from the mean—i.e., Ĉ(x) = [c(x), c(x)] = [ĉf̂ (x) − x, ĉf̂ (x) + x] (see Appendix A for a description on this equivalent representation). Then, we sort each labeled example into\nbins using ĉf̂ (x) to form Sj . Next, we consider an interval instead of p̂(x) to compute ECE— i.e., ECEinduced := [ ECE,ECE ] , where\nECE := J∑ j=1 |Sj | |S| inf p̂j∈Confj ∣∣∣∣∣∣p̂j − 1|Sj | ∑\n(x,y)∈Sj\n1(ŷ(x) = y) ∣∣∣∣∣∣ , ECE :=\nJ∑ j=1 |Sj | |S| sup p̂j∈Confj ∣∣∣∣∣∣p̂j − 1|Sj | ∑\n(x,y)∈Sj\n1(ŷ(x) = y) ∣∣∣∣∣∣ , and Confj := [ Confj ,Confj ] := [ min\n(x,y)∈Sj c(x), max (x,y)∈Sj c(x)\n] .\nReliability diagram. This evaluation technique is a pictorial summary of the ECE, where the xaxis represents the mean confidence 1|Sj | ∑ (x,y)∈Sj p̂(x) for each bin, and the y-axis represents the\nmean accuracy 1|Sj | ∑\n(x,y)∈Sj 1(ŷ(x) = y) for each bin. If an interval from a confidence coverage predictor is given, then the mean confidence is replaced by Confj , resulting in visualizing an interval instead of a point.\nF.4 FAST DNN INFERENCE: CASCADING CLASSIFIER TRAINING\nWe describe a way to train a cascading classifier with M branches. 
Basically, we consider to independently train M different neural networks with a shared backbone. In particular, we first train the M th branch using a training set by minimizing a cross-entropy loss. Then, we train the (M − 1)th branch, which consists of two parts: a backbone part from the M th branch, and the head of the (M − 1)th branch. Here, the backbone part is already trained in the previous stage, so we do not update the backbone part and only train the head of this branch using the same training set by minimizing the same cross-entropy loss (with the same optimization hyperparameters). This step is done repeatedly down to the first branch.\nF.5 SAFE PLANNING: DATA COLLECTION FROM A SIMULATOR\nWe collect required data from a simulator, where a given policy π̂ is already learned over the simulator. We describe how we form the necessary data from rollouts sampled from the simulator.\nFirst, to sample a rollout ζ, we use π̂ from a random initial state x0 ∼ D; we denote the sequence of states visited as a rollout ζ(x0, π̂) := (x0, x1, . . . ). We denote the induced distribution over rollouts by ζ ∼ Dπ̂ . Note that the ζ contains unsafe states since π̂ is potentially unsafe. However, when constructing our recoverability classifier, we only use the sequence of safe states, followed by a single unsafe state. In particular, we let W be a set of i.i.d. sampled rollouts ζ ∼ Dπ̂ . Next, for a given rollout, we consider the observation in the first unsafe state in that rollout (if one exists); we denote the distribution over such observations by D̃. Finally, we take Z to be a set of sampled observations o ∼ D̃." }, { "heading": "G ADDITIONAL EXPERIMENTS", "text": "G.1 CALIBRATION\nG.2 FAST DNN INFERENCE\nG.3 SAFE PLANNING" } ]
2021
PAC CONFIDENCE PREDICTIONS FOR DEEP NEURAL NETWORK CLASSIFIERS
SP:4c82d9d12ec6a9f171c4281739776da18bcc2906
[ "of contribution: The authors propose an interesting approach to address the sample-efficiency issue in Neural Architecture Search (NAS). Compared to other existing predictor based methods, the approach distinguishes itself by progressive shrinking the search space. The paper correctly identifies the sampling is an important aspect in using a predictor based NAS method;" ]
Neural Architecture Search (NAS) finds the best network architecture by exploring the architecture-to-performance manifold. It often trains and evaluates a large amount of architectures, causing tremendous computation cost. Recent predictorbased NAS approaches attempt to solve this problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor. Existing predictors attempt to model the performance distribution over the whole architecture space, which could be too challenging given limited samples. Instead, we envision that this ambitious goal may not be necessary if the final aim is to find the best architecture. We present a novel framework to estimate weak predictors progressively. Rather than expecting a single strong predictor to model the whole space, we seek a progressive line of weak predictors that can connect a path to the best architecture, thus greatly simplifying the learning task of each predictor. It is based on the key property of the predictors that their probabilities of sampling better architectures will keep increasing. We thus only sample a few well-performed architectures guided by the predictive model, to estimate another better weak predictor. By this coarse-to-fine iteration, the ranking of sampling space is refined gradually, which helps find the optimal architectures eventually. Experiments demonstrate that our method costs fewer samples to find the top-performance architectures on NAS-Benchmark-101 and NAS-Benchmark-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
[]
[ { "authors": [ "Thomas Chau", "Łukasz Dudziak", "Mohamed S Abdelfattah", "Royson Lee", "Hyeji Kim", "Nicholas D Lane" ], "title": "Brp-nas: Prediction-based nas using gcns", "venue": "arXiv preprint arXiv:2007.08668,", "year": 2020 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": null, "year": 1904 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan" ], "title": "Searching for mobilenetv3", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yiming Hu", "Yuding Liang", "Zichao Guo", "Ruosi Wan", "Xiangyu Zhang", "Yichen Wei", "Qingyi Gu", "Jian Sun" ], "title": "Angle-based search space shrinking for neural architecture", "venue": null, "year": 2004 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Renqian Luo", "Xu Tan", "Rui Wang", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture search with gbdt", "venue": "arXiv preprint arXiv:2007.04785,", "year": 2020 }, { "authors": [ "Xuefei Ning", "Yin Zheng", "Tianchen Zhao", "Yu Wang", "Huazhong Yang" ], "title": "A generic graph-based neural architecture encoding scheme for predictor-based nas", "venue": "arXiv preprint arXiv:2004.01899,", "year": 2020 }, { "authors": [ "Ilija Radosavovic", "Raj Prateek Kosaraju", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Designing network design spaces", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Julien Siems", "Lucas Zimmer", "Arber Zela", "Jovita Lukasik", "Margret Keuper", "Frank Hutter" ], "title": "Nasbench-301 and 
the case for surrogate benchmarks for neural architecture", "venue": null, "year": 2008 }, { "authors": [ "Linnan Wang", "Saining Xie", "Teng Li", "Rodrigo Fonseca", "Yuandong Tian" ], "title": "Sample-efficient neural architecture search by learning action space, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Linnan Wang", "Yiyang Zhao", "Yuu Jinnai", "Yuandong Tian", "Rodrigo Fonseca" ], "title": "Alphax: exploring neural architectures with deep neural networks and monte carlo tree search", "venue": "arXiv preprint arXiv:1903.11059,", "year": 2019 }, { "authors": [ "Chen Wei", "Chuang Niu", "Yiping Tang", "Jimin Liang" ], "title": "Npenas: Neural predictor guided evolution for neural architecture search", "venue": "arXiv preprint arXiv:2003.12857,", "year": 2020 }, { "authors": [ "Wei Wen", "Hanxiao Liu", "Hai Li", "Yiran Chen", "Gabriel Bender", "Pieter-Jan Kindermans" ], "title": "Neural predictor for neural architecture search", "venue": null, "year": 1912 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "Snas: stochastic neural architecture search", "venue": "arXiv preprint arXiv:1812.09926,", "year": 2018 }, { "authors": [ "Yuhui Xu", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Guo-Jun Qi", "Qi Tian", "Hongkai Xiong" ], "title": "Pcdarts: Partial channel connections for memory-efficient differentiable architecture", "venue": null, "year": 1907 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "Nasbench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural Architecture Search (NAS) has become a central topic in recent years with great progress (Liu et al., 2018b; Luo et al., 2018; Wu et al., 2019; Howard et al., 2019; Ning et al., 2020; Wei et al., 2020; Luo et al., 2018; Wen et al., 2019; Chau et al., 2020; Luo et al., 2020). Methodologically, all existing NAS methods try to find the best network architecture by exploring the architecture-toperformance manifold, such as reinforced-learning-based (Zoph & Le, 2016), evolution-based (Real et al., 2019) or gradient-based Liu et al. (2018b) approaches. In order to cover the whole space, they often train and evaluate a large amount of architectures, thus causing tremendous computation cost.\nRecently, predictor-based NAS methods alleviate this problem with two key steps: one sampling step to sample some architecture-performance pairs, and another performance modeling step to fit the performance distribution by training a proxy accuracy predictor. An in-depth analysis of existing methods (Luo et al., 2018) founds that most of those methods (Ning et al., 2020; Wei et al., 2020; Luo et al., 2018; Wen et al., 2019; Chau et al., 2020; Luo et al., 2020) attempt to model the performance distribution over the whole architecture space. However, since the architecture space is often exponentially large and highly non-convex, modeling the whole space is very challenging especially given limited samples. Meanwhile, different types of predictors in these methods have to demand handcraft design of the architecture representations to improve the performance.\nIn this paper, we envision that the ambitious goal of modeling the whole space may not be necessary if the final goal is to find the best architecture. Intuitively, we assume the whole space could be divided into different sub-spaces, some of which are relatively good while some are relatively bad. We tend to choose the good ones while neglecting the bad ones, which makes sure more samples will be used to model the good subspace precisely and then find the best architecture. From another perspective, instead of optimizing the predictor by sampling the whole space as well as existing methods, we propose to jointly optimize the sampling strategy and the predictor learning, which helps achieve better sample efficiency and prediction accuracy simultaneously.\nBased on the above motivation, we present a novel framework that estimates a series of weak predictors progressively. Rather than expecting a strong predictor to model the whole space, we instead seek a progressive evolving of weak predictors that can connect a path to the best architecture. In this way, it greatly simplifies the learning task of each predictor. To ensure moving the best architecture along the path, we increase the sampling probability of better architectures guided by the weak predictor at each iteration. Then, the consecutive weak predictor with better samples will be trained in the next iteration. We iterate until we arrive at an embedding subspace where the best architectures reside. The weak predictor achieved at the final iteration becomes the dedicated predictor focusing on such a fine subspace and the best performed architecture can be easily predicted.\nCompared to existing predictor-based NAS, our method has several merits. First, since only weak predictors are required to locate the good\nsubspace, it yields better sample efficiency. 
On NAS-Benchmark-101 and NAS-Benchmark-201, it costs significantly fewer samples to find the top-performance architecture than existing predictorbased NAS methods. Second, it is much less sensitive to the architecture representation (e.g., different architecture embeddings) and the predictor formulation design (e.g., MLP, Gradient Boosting Regression Tree, Random Forest). Experiments show our superior robustness in all their combinations. Third, it is generalized to other search spaces. Given a limited sample budget, it achieves the state-of-the-art ImageNet performance on the NASNet search space." }, { "heading": "2 OUR APPROACH", "text": "" }, { "heading": "2.1 REVISIT PREDICTOR-BASED NEURAL ARCHITECTURE SEARCH", "text": "Neural Architecture Search (NAS) finds the best network architecture by exploring the architectureto-performance manifold. It can be formulated as an optimization problem. Given a search space of network architectures X and a discrete architecture-to-performance mapping function f : X → P from architecture set X to performance set P , the objective is to find the best neural architecture x∗ with the highest performance f(x) in the search space X:\nx∗ = argmax x∈X f(x) (1)\nA naı̈ve solution is to estimate the performance mapping f(x) through the full search space, however, it is prohibitively expensive since all architectures have to be exhaustively trained from scratch. To address this problem, predictor-based NAS learns a proxy predictor f̃(x) to approximate f(x) using some architecture-performance pairs , which significantly reduces the training cost. In general, predictor-based NAS can be formulated as:\nx∗ = argmax x∈X f̃(x|S)\ns.t. f̃ = argmin S,f̃∈F̃ ∑ s∈S L(f̃(s), f(s))\n(2)\nwhere L is the loss function for the predictor f̃ , F̃ is a set of all possible approximation to f , S := {S ⊆ X | |S| ≤ C} is the training pairs for predictor f̃ given sample budget C. Here, C is directly correlated to the total training cost. Our objective is to minimize the loss L of the predictor f̃ based on some sampled architectures S.\nPrevious predictor-based NAS methods attempt to solve Equation 2 with two key steps: (1) sampling some architecture-performance pairs and (2) learning a proxy accuracy predictor. First, a common practice in previous work is to sample training pairs S uniformly from the search space X to learn the predictor. Such a sampling is inefficient considering that the goal of NAS is to find a subspace of well-performed architectures in the search space. A biased sampling strategy towards the wellperformed architectures can be more desirable. Second, given such pairs S, previous predictor-based NAS uses a predictor f̃ to model the performance distribution over the whole architecture space. Since the architecture space is often enormously large and highly non-convex, it is too challenging to model the whole space given the limited samples." }, { "heading": "2.2 PROGRESSIVE WEAK PREDICTORS APPROXIMATION", "text": "We envision that the above ambitious goal may not be necessary if the final aim of NAS is to find the best architecture. We argue that sampling S and learning f̃ should be co-evolving instead of a onetime deal as done in existing predictor-based NAS. Demonstrated in Figure 2, rather than expecting a single strong predictor to model the whole space at one time, we progressively evolve our weak predictors to sample towards subspace of best architectures, thus greatly simplifying the learning task of each predictor. 
With these coarse-to-fine iterations, the ranking of sampling space is refined gradually, which helps find the optimal architectures eventually.\nThus, we propose a novel coordinate descent way to jointly optimize the sampling and learning stages in predictor-based NAS progressively, which can be formulated as following:\nSampling Stage: P̃ k = {f̃k(s)|s ∈ X \\ Sk} (3) Sk+1 = argmax\nTk (P̃ k) ∪ Sk (4)\nLearning Stage: x∗ = argmax x∈X f̃(x|Sk+1)\ns.t.f̃k+1 = argmin f̃k∈F̃ ∑ s∈Sk+1 L(f̃(s), f(s)) (5)\nSuppose our iterative methods has K iterations, at k-th iteration where k = 1, 2, . . .K, we initialize our training set S1 by randomly sampling a few samples from X to train an initial predictor f̃1. We then jointly optimize the sampling set Sk and predictor f̃k in a progressive manner for K iterations.\nSampling Stage We first sort all the architectures in the search space X according to its predicted performance P̃ k at every iteration k. Given the sample budget, we then sample new architectures Sk+1 among the top T k ranked architectures.\nLearning Stage We learn a predictor f̃k, where we want to minimize the the lossL of the predictor f̃k based on sampled architectures Sk. We then evaluate all the architectures X in the search space using the learned predictor f̃k to get the predicted performance P̃ k.\nProgressive Approximation Through the above alternative iteration, the predictor f̃k would guide the sampling process to gradually zoom into the promising architecture samples. In addition, the good performing samples Sk+1 sampled from the promising architecture samples would in term improve the performance of the predictor f̃k+1 in the well-performed architectures.\nTo demonstrate the effectiveness of our iterative scheme, Figure 3 (a) shows the progressive procedure of finding the optimal architecture x∗ and learning the predicted best architecture x̃∗k over 5 iterations. As we can see, the optimal architecture and the predicted best one are moving towards each other closer and closer, which indicates that the performance of predictor over the optimal architecture(s) is growing better. In Figure 3 (b), we use the error empirical distribution function (EDF) proposed in (Radosavovic et al., 2020) to visualize the performance distribution of architectures in the subspace. We plot the EDF of the top-200 models based on the predicted performance over 5 iterations. As shown in Figure 3 (b), the subspace of top-performed architectures is consistently evolving towards more promising architecture samples over 5 iterations. In conclusion, the probabilities of sampling better architectures through these progressively improved weak predictors indeed keep increasing, as we desire them to." }, { "heading": "2.3 GENERALIZABILITY ON PREDICTORS AND FEATURES", "text": "Here we analyze the generalizability of our method and demonstrate its robustness on different predictors and features. In predictor-based NAS, the objective of learning the predictor f̃ can be formulated as a regression problem (Wen et al., 2019) or a ranking (Ning et al., 2020) problem. The choice of predictors is diverse, and usually critical to final performance (e.g. MLP (Ning et al., 2020; Wei et al., 2020), LSTM (Luo et al., 2018), GCN (Wen et al., 2019; Chau et al., 2020), Gradient Boosting Tree (Luo et al., 2020)). 
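Before turning to concrete predictor variants, the sketch below spells out the sampling/learning loop of Equations (3)–(5), with the predictor kept as a pluggable component so that any of the variants compared next can be dropped in. The function names, the scikit-learn-style fit/predict interface, and the budget hyperparameters (initial samples, samples per iteration, top-T cutoff) are illustrative assumptions, not values from the released code.

```python
# Sketch of the progressive weak-predictor loop of Eqs. (3)-(5); the predictor
# is a pluggable component with a fit/predict interface (MLP, GBRT, random
# forest, ...), and `evaluate` returns the measured performance of an
# architecture (e.g., a benchmark lookup or supernet accuracy).
import numpy as np


def progressive_search(space, evaluate, make_predictor,
                       n_init=20, n_per_iter=20, n_iters=5, top_t=200, seed=0):
    """space: list of encoded architectures; returns (best architecture, its score)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(space), size=n_init, replace=False).tolist()
    sampled = set(idx)
    X = [space[i] for i in idx]
    y = [evaluate(space[i]) for i in idx]

    for _ in range(n_iters):
        predictor = make_predictor()                     # a fresh weak predictor
        predictor.fit(np.asarray(X), np.asarray(y))
        preds = predictor.predict(np.asarray(space))     # predicted performance P^k
        ranked = [i for i in np.argsort(-preds) if i not in sampled]
        candidates = ranked[:top_t]                      # zoom into the predicted top-T
        if not candidates:
            break
        picks = rng.choice(candidates, size=min(n_per_iter, len(candidates)),
                           replace=False)
        for i in picks:                                  # query only promising samples
            sampled.add(int(i))
            X.append(space[i])
            y.append(evaluate(space[i]))

    best = int(np.argmax(y))
    return X[best], y[best]
```

In this form, `make_predictor` can return, for example, an MLP regressor, a gradient boosting tree, or a random forest, which is exactly the axis along which robustness is examined next.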
To illustrate our framework is generalizable and robust to the specific choice of predictors, we compare the following predictor variants.\n• Multilayer perceptron (MLP): MLP is the baseline commonly used in predictor-based NAS (Ning et al., 2020) due to its simplicity. Here we use a 4-layer MLP with hidden layer dimension of (1000, 1000, 1000, 1000) which is sufficient to model the architecture encoding. • Gradient Boosting Regression Tree (GBRT): Tree-based methods have recently been pre-\nferred in predictor-based NAS (Luo et al., 2020; Siems et al., 2020) since it is more suitable to model discrete representation of the architectures. Here, we use the Gradient Boosting Regression Tree based on XGBoost (Chen & Guestrin, 2016) implementation. • Random Forest: Random Forrest is another variant of tree-based predictor, it differs from\nGradient Boosting Trees in that it combines decisions at the end instead of along each hierarchy, and thus more robust to noise.\nThe selection of features to represent the architecture search space and learn the predictor is also sensitive to the performance. Previous methods tended to hand craft the feature for the best performance (e.g., raw architecture encoding (Wei et al., 2020), supernet statistic (Hu et al., 2020)). To demonstrate our framework is robust across different features, we compare the following features.\n• One-hot Vector: In NAS-Bench-201(Dong & Yang, 2020), its DART style search space fixed the graph connectivity, so one-hot vector is used to encode the choice of operator.\n• Adjacency Matrix: In NAS-Bench-101, we used the encoding scheme as well as (Ying et al., 2019; Wei et al., 2020), where a 7×7 adjacency matrix represents the graph connectivity and a 7-dimensional vector represents the choice of operator, on every node.\nWe compare the robustness across different predictors under our framework shown in Figure 4. We can see that all predictors perform similarly among different target datasets. As shown in Figure 4 with Figure 5, although different architecture encoding methods are used, our method can perform similarly well among different predictors, which demonstrates that our proposed method is robust to different predictors and features selection." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 SETUP", "text": "NAS-Bench-101 (Ying et al., 2019) is one of the first datasets used to benchmark NAS algorithms. The dataset provides a Directed Acyclic Graph (DAG) based cell structure, while (1) The connectivity of DAG can be arbitrary with a maximum number of 7 nodes and 9 edges (2) Each nodes on the DAG can choose from operator of 1×1 convolution, 3×3 convolution or 3×3 max-pooling. After removing duplications, the dataset consists of 423,624 diverse architectures trained on CIFAR10 dataset with each architecture trained for 3 trials.\nNAS-Bench-201 (Dong & Yang, 2020) is another recent NAS benchmark with a reduced DARTSlike search space. The DAG of each cell is fixed similar to DARTS(Liu et al., 2018b), however we can choose from 5 different operations (1×1 convolution, 3×3 convolution, 3×3 avg-pooling, skip, no connection) on each of the 6 edges totaling a search space of 15,625 architectures. The dataset is trained on 3 different datasets (CIFAR10/CIFAR100/ImageNet16-120) with each architecture trained for 3 trials.\nFor experiments on both benchmarks, we followed the same setting as (Wen et al., 2019). 
We use the validation accuracy as the search signal, while test accuracy is only used to report the accuracy of the model selected at the end of a search. Since the best-performing architectures on the validation and test sets do not necessarily match, we also report the performance of our NAS algorithm at finding the oracle architecture on the validation set in the following experiments.\nOpen Domain Search: we follow the same NASNet search space used in (Zoph et al., 2018) to directly search for best-performing architectures on ImageNet. Due to the huge computational cost needed to train and evaluate architecture performance on ImageNet, we leverage a weight-sharing supernet approach (Guo et al., 2019) and use supernet accuracy as a performance proxy." }, { "heading": "3.2 COMPARISON TO STATE-OF-THE-ART (SOTA) METHODS", "text": "We evaluate our method on both NAS-Bench-101 and NAS-Bench-201. We also apply our method to open domain search directly on the ImageNet dataset using the NASNet search space.\nNAS-Bench-101\nWe conduct experiments on the popular NAS-Bench-101 benchmark and compare with multiple popular methods (Real et al., 2019; Wang et al., 2019b;a; Luo et al., 2018; Wen et al., 2019). We first study the performance when limiting the number of queries. In Table 1, we vary the number of queries used in our method by changing the number of iterations. The searched performance consistently improves as more iterations are used. When compared to the results from popular predictor-based NAS methods, such as NAO (Luo et al., 2018) and Neural Predictor (Wen et al., 2019), our method (a) reaches higher search performance given the same query budget for training; and (b) uses fewer samples to reach the same accuracy goal.\nWe then plot the best accuracy against the number of samples in Figure 6 to show the sample efficiency on both the validation and test sets of NAS-Bench-101; our method consistently requires fewer samples to reach higher accuracy, compared to Random Search and Regularized Evolution.\nFinally, Table 2 shows that our method significantly outperforms the baselines in terms of sample efficiency. Specifically, our method requires 44×, 20×, 17×, and 2.66× fewer samples to reach the optimal architecture than Random Search, Regularized Evolution (Real et al., 2019), MCTS (Wang et al., 2019b), and LaNAS (Wang et al., 2019a), respectively.\nNAS-Bench-201\nWe further evaluate our method on NAS-Bench-201. Since it was released relatively recently, we compare with two baseline methods, Regularized Evolution (Real et al., 2019) and random search, using our own implementations. As shown in Table 3, we conduct searches on all three subsets (CIFAR10, CIFAR100, ImageNet16-120) and report the average number of samples needed to reach the global optimum over 250 runs. Our method requires the fewest samples in all settings.\nWe also conduct a controlled experiment by varying the number of samples. As shown in Figure 7, our average performance over different numbers of samples yields a clear gain over Regularized Evolution (Real et al., 2019) on all three subsets. Our confidence interval is also tighter than that of Regularized Evolution, showcasing our method's superior stability and reliability.\nOpen Domain Search\nIn order to demonstrate our method's generalizability, we further apply it to open domain search without ground-truth.
We adopt the popular search space from NASNet and compare with several popular methods (Zoph et al., 2018; Real et al., 2019; Liu et al., 2018a; Luo et al., 2018), reporting the number of samples each method uses. As shown in Table 5, using the fewest samples of all methods, our method achieves state-of-the-art ImageNet top-1 error with a similar number of parameters and FLOPs. Our searched architecture is also competitive with expert-designed networks. Compared with the previous SOTA predictor-based NAS method (Luo et al., 2018), our method reduces top-1 error by 0.9% using the same number of samples, which is significant. This experiment demonstrates that our method is robust, generalizable, and can be effectively applied to real-world open domain search." }, { "heading": "4 CONCLUSION", "text": "In this paper, we present a novel predictor-based NAS framework that progressively shrinks the sampling space by learning a series of weak predictors that connect towards the best architectures. We argue that using a single strong predictor to model the whole search space with limited samples may be too challenging a task and seemingly unnecessary. Instead, by co-evolving the sampling stage and learning stage, our weak predictors progressively evolve to sample towards the subspace of the best architectures, thus greatly simplifying the learning task of each predictor. Extensive experiments on popular NAS benchmarks show that the proposed method is sample-efficient and robust to various combinations of predictors and architecture encodings. We further apply our method to open domain search and demonstrate its generalization. Our future work will investigate how to jointly optimize the predictor and the architecture encoding in our framework." }, { "heading": "A MORE COMPARISONS ON THE NASNET SEARCH SPACE", "text": "In Table 5, we show more comparisons to representative gradient-based methods, including SNAS (Xie et al., 2018), DARTS (Liu et al., 2018b), P-DARTS (Chen et al., 2019), PC-DARTS (Xu et al., 2019), and DS-NAS (Xu et al., 2019).\nB VISUALIZATION OF NASBENCH SEARCH SPACE\nFigure 8 illustrates the histograms of architecture performance in NASBench-101/201, accompanied by zoomed-in views; these histograms clearly show that the NAS search spaces are heavily biased towards well-performing architectures." } ]
2020
WEAK NAS PREDICTOR IS ALL YOU NEED
SP:720f167592297c58d88272599fb66978f3ae8001
[ "This paper studies the problem of gradient attack in deep learning models. In particular, this paper tries to form a system of linear equations to find a training data point when the gradient of the deep learning model with respect to that data point is available. The algorithm for finding the data point is called R-GAP." ]
Federated learning frameworks have been regarded as a promising approach to break the dilemma between demands on privacy and the promise of learning from large collections of distributed data. Many such frameworks only ask collaborators to share their local update of a common model, i.e. gradients, instead of exposing their raw data to other collaborators. However, recent optimization-based gradient attacks show that raw data can often be accurately recovered from gradients. It has been shown that minimizing the Euclidean distance between true gradients and those calculated from estimated data is often effective in fully recovering private data. However, there is a fundamental lack of theoretical understanding of how and when gradients can lead to unique recovery of original data. Our research fills this gap by providing a closed-form recursive procedure to recover data from gradients in deep neural networks. We name it Recursive Gradient Attack on Privacy (R-GAP). Experimental results demonstrate that R-GAP works as well as or even better than optimization-based approaches at a fraction of the computation under certain conditions. Additionally, we propose a Rank Analysis method, which can be used to estimate the risk of gradient attacks inherent in certain network architectures, regardless of whether an optimization-based or closed-form-recursive attack is used. Experimental results demonstrate the utility of the rank analysis towards improving the network’s security. Source code is available for download from https://github.com/JunyiZhu-AI/R-GAP.
[ { "affiliations": [], "name": "Junyi Zhu" } ]
[ { "authors": [ "Peter L Bartlett", "Michael I Jordan", "Jon D McAuliffe" ], "title": "Convexity, classification, and risk bounds", "venue": "Journal of the American Statistical Association,", "year": 2006 }, { "authors": [ "Emiliano De Cristofaro" ], "title": "An overview of privacy in machine learning", "venue": null, "year": 2005 }, { "authors": [ "Jonas Geiping", "Hartmut Bauermeister", "Hannah Dröge", "Michael Moeller" ], "title": "Inverting gradients: How easy is it to break privacy in federated learning", "venue": "Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Gene H. Golub", "Charles F. Van Loan" ], "title": "Matrix Computations", "venue": null, "year": 1996 }, { "authors": [ "Zecheng He", "Tianwei Zhang", "Ruby B. Lee" ], "title": "Model inversion attacks against collaborative inference", "venue": "In Proceedings of the 35th Annual Computer Security Applications Conference,", "year": 2019 }, { "authors": [ "Arthur Jochems", "Timo M. Deist", "Johan van Soest", "Michael Eble", "Paul Bulens", "Philippe Coucke", "Wim Dries", "Philippe Lambin", "Andre Dekker" ], "title": "Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital – a real life proof of concept", "venue": "Radiotherapy and Oncology,", "year": 2016 }, { "authors": [ "Arthur Jochems", "Timo Deist", "Issam El Naqa", "Marc Kessler", "Chuck Mayo", "Jackson Reeves", "Shruti Jolly", "Martha Matuszak", "Randall Ten Haken", "Johan Soest", "Cary Oberije", "Corinne Faivre-Finn", "Gareth Price", "Dirk Ruysscher", "Philippe Lambin", "André Dekker" ], "title": "Developing and validating a survival prediction model for nsclc patients through distributed learning across three countries", "venue": "International Journal of Radiation Oncology*Biology*Physics,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Jakub Konečný", "H. Brendan McMahan", "Felix X. Yu", "Peter Richtarik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "In NIPS Workshop on Private Multi-Party Machine Learning,", "year": 2016 }, { "authors": [ "Dong C. Liu", "Jorge Nocedal" ], "title": "On the limited memory BFGS method for large scale optimization", "venue": "Math. Program.,", "year": 1989 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "L.T. Phong", "Y. Aono", "T. Hayashi", "L. Wang", "S. Moriai" ], "title": "Privacy-preserving deep learning via additively homomorphic encryption", "venue": "IEEE Transactions on Information Forensics and Security,", "year": 2018 }, { "authors": [ "Maria Rigaki", "Sebastian Garcia" ], "title": "A survey of privacy attacks in machine learning", "venue": null, "year": 2007 }, { "authors": [ "Z. Wang", "M. Song", "Z. Zhang", "Y. Song", "Q. Wang", "H. 
Qi" ], "title": "Beyond inferring class representatives: User-level privacy leakage from federated learning", "venue": "In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications,", "year": 2019 }, { "authors": [ "Wenqi Wei", "Ling Liu", "Margaret Loper", "Ka-Ho Chow", "Mehmet Emre Gursoy", "Stacey Truex", "Yanzhao Wu" ], "title": "A framework for evaluating client privacy leakages in federated learning", "venue": "Computer Security – ESORICS,", "year": 2020 }, { "authors": [ "Ziqi Yang", "Jiyi Zhang", "Ee-Chien Chang", "Zhenkai Liang" ], "title": "Neural network inversion in adversarial setting via background knowledge alignment", "venue": "In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2019 }, { "authors": [ "Yuheng Zhang", "Ruoxi Jia", "Hengzhi Pei", "Wenxiao Wang", "Bo Li", "Dawn Song" ], "title": "The secret revealer: Generative model-inversion attacks against deep neural networks", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Bo Zhao", "Konda Reddy Mopuri", "Hakan Bilen" ], "title": "iDLG: Improved deep leakage from gradients", "venue": null, "year": 2001 }, { "authors": [ "Ligeng Zhu", "Zhijian Liu", "Song Han" ], "title": "Deep leakage from gradients", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Geiping" ], "title": "MSE of the reconstructions of the two rank-deficient variants is significantly higher, which indicates that for deep networks, we can also improve the defendability by decreasing local redundancy or even making layers locally rank-deficient", "venue": "F R-GAP IN THE BATCH SETTING RETURNS A LINEAR COMBINATION OF", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Distributed and federated learning have become common strategies for training neural networks without transferring data (Jochems et al., 2016; 2017; Konečný et al., 2016; McMahan et al., 2017). Instead, model updates, often in the form of gradients, are exchanged between participating nodes. These are then used to update at each node a copy of the model. This has been widely applied for privacy purposes (Rigaki & Garcia, 2020; Cristofaro, 2020), including with medical data (Jochems et al., 2016; 2017). Recently, it has been demonstrated that this family of approaches is susceptible to attacks that can in some circumstances recover the training data from the gradient information exchanged in such federated learning approaches, calling into question their suitability for privacy preserving distributed machine learning (Phong et al., 2018; Wang et al., 2019; Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020; Wei et al., 2020). To date these attack strategies have broadly fallen into two groups: (i) an analytical attack based on the use of gradients with respect to a bias term (Phong et al., 2018), and (ii) an optimization-based attack (Zhu et al., 2019) that can in some circumstances recover individual training samples in a batch, but that involves a difficult nonconvex optimization that doesn’t always converge to a correct solution (Geiping et al., 2020), and that provides comparatively little insights into the information that is being exploited in the attack.\nThe development of privacy attacks is most important because they inform strategies for protecting against them. This is achieved by perturbations to the transferred gradients, and the form of the attack can give insights into the type of perturbation that can effectively protect the data (Fan et al., 2020). As such, the development of novel closed-form attacks is essential to the analysis of privacy in federated learning. More broadly, the existence of model inversion attacks (He et al., 2019; Wang et al., 2019; Yang et al., 2019; Zhang et al., 2020) calls into question whether transferring\na fully trained model can be considered privacy preserving. As the weights of a model trained by (stochastic) gradient descent are the summation of individual gradients, understanding gradient attacks can assist in the analysis of and protection against model inversion attacks in and outside of a federated learning setting.\nIn this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information. Different from the analytical attack using the bias term, R-GAP utilizes much more information and is the first closed-form algorithm that works on both convolutional networks and fully connected networks with or without bias term. Compared to optimization-based attacks, it is not susceptible to local optima, and is orders of magnitude faster to run with a deterministic running time. Furthermore, we show that under certain conditions our recursive attack can fully recover training data in cases where optimization attacks fail. Additionally, the insights gained from the closed form of our recursive attack have lead to a refined rank analysis that predicts which network architectures enable full recovery, and which lead to provable noisy recovery due to rankdeficiency. This explains well the performance of both closed-form and optimization-based attacks. 
We also demonstrate that using rank analysis we are able to make small modifications to network architectures to increase the network’s security without sacrificing its accuracy." }, { "heading": "1.1 RELATED WORK", "text": "Bias attacks: The original discovery of the existence of an analytical attack based on gradients with respect to the bias term is due to Phong et al. (2018). Fan et al. (2020) also analyzed the bias attack as a system of linear equations, and proposed a method of perturbing the gradients to protect against it. Their work considers convolutional and fully-connected networks as equivalent, but this ignores the aggregation of gradients in convolutional networks. Similar to our work, they also perform a rank analysis, but it considers fewer constraints than is included in our analysis (Section 4).\nOptimization attacks: The first attack that utilized an optimization approach to minimize the distance between gradients appears to be due to Wang et al. (2019). In this work, optimization is adopted as a submodule in their GAN-style framework. Subsequently, Zhu et al. (2019) proposed a method called deep leakage from gradients (DLG) which relies entirely on minimization of the difference of gradients (Section 2). They propose the use of L-BFGS (Liu & Nocedal, 1989) to perform the optimization. Zhao et al. (2020) further analyzed label inference in this setting, proposing an analytic way to reconstruct the one-hot label of multi-class classification in terms of a single input. Wei et al. (2020) show that DLG is sensitive to initialization and proposed that the same class image is an optimal initialization. They proposed to use SSIM as image similarity metric, which can then be used to guide optimization by DLG. Geiping et al. (2020) point out that as DLG requires second-order derivatives, L-BFGS actually requires third-order derivatives, which leads to challenging optimzation for networks with activation functions such as ReLU and LeakyReLU. They therefore propose to replace L-BFGS with Adam (Kingma & Ba, 2015). Similar to the work of Wei et al. (2020), Geiping et al. (2020) propose to incorporate an image prior, in this case total variation, while using PSNR as a quality measurement.\n2 OPTIMIZATION-BASED GRADIENT ATTACKS ON PRIVACY (O-GAP)\nOptimization-based gradient attacks on privacy (O-GAP) take the real gradients as its ground-truth label and utilizes optimization to decrease the distance between the real gradients ∇W and the dummy gradients ∇W′ generated by a pair of randomly initialized dummy data and dummy label. The objective function of O-GAP can be generally expressed as:\narg min x′,y′ ‖∇W−∇W′‖2 = arg min x′,y′ d∑ i=1 ‖∇Wi −∇W′i‖2, (1)\nwhere the summation is taken over the layers of a network of depth d, and (x′, y′) is the dummy training data and label used to generate ∇W ′. The idea of O-GAP was proposed by Wang et al. (2019). However, they have adopted it as a part of their GAN-style framework and did not realize that O-GAP is able to preform a more accurate attack by itself. Later in the work of Zhu et al. (2019), O-GAP has been proposed as a stand alone approach, the framework has been named as Deep Leakage from Gradients (DLG).\nThe approach is intuitively simple, and in practice has been shown to give surprisingly good results (Zhu et al., 2019). However, it is sensitive to initialization and prone to fail (Zhao et al., 2020). The choice of optimizer is therefore important, and convergence can be very slow (Geiping et al., 2020). 
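As a reference point for the analysis that follows, a minimal PyTorch sketch of the DLG-style optimization of Equation 1 might look as follows. It assumes `model` and `true_grads` are given, uses L-BFGS as in Zhu et al. (2019), and adopts a soft dummy label with a cross-entropy surrogate purely for illustration; it is not the implementation used in this paper.

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, true_grads, x_shape, num_classes, steps=300):
    """Minimise ||grad(W') - grad(W)||^2 over a dummy input/label pair, as in Eq. (1)."""
    params = tuple(model.parameters())
    x_dummy = torch.randn(x_shape, requires_grad=True)
    y_dummy = torch.randn(1, num_classes, requires_grad=True)   # soft dummy label
    opt = torch.optim.LBFGS([x_dummy, y_dummy])

    def closure():
        opt.zero_grad()
        pred = model(x_dummy)
        loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1))
        # dummy gradients, kept in the graph so we can differentiate the gradient distance
        dummy_grads = torch.autograd.grad(loss, params, create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return x_dummy.detach(), y_dummy.detach()
```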
Perhaps most importantly, Equation 1 gives little insight into what information in the gradients is being exploited to recover the data. Analysis in Zhu et al. (2019) is limited to empirical insights, and fundamental open questions remain: What are sufficient conditions for arg minx′,y′ ∑d i=1 ‖∇Wi − ∇W ′ i‖2 to have a unique minimizer? We address this question in Section 4, and subsequently validate our findings empirically." }, { "heading": "3 CLOSED-FORM GRADIENT ATTACKS ON PRIVACY", "text": "The first attempt of closed-form GAP was proposed in a research of privacy-preserving deep learning by Phong et al. (2018). Theorem 1 (Phong et al. (2018)). Assume a layer of a fully connected network with a bias term, expressed as:\nWx + b = z, (2)\nwhere W, b denote the weight matrix and bias vector, and x, z denote the input vector and output vector of this layer. If the loss function ` of the network can be expressed as:\n` = `(f(x), y)\nwhere f indicates a nested function of x including activation function and all subsequent layers, y is the ground-truth label. Then x can be derived from gradients w.r.t. W and gradients w.r.t. b, i.e.:\n∂` ∂W = ∂` ∂z x>, ∂` ∂b = ∂` ∂z\nx> = ∂` ∂Wj / ∂` ∂bj (3)\nwhere j denotes the j-th row, note that in fact from each row we can compute a copy of x>.\nWhen this layer is the first layer of a network, it is possible to reconstruct the data, i.e. x, using this approach. In the case of noisy gradients, we can make use of the redundancy in estimating x by averaging over noisy estimates: x̂> = ∑ j ∂` ∂Wj / ∂` ∂bj . However, simply removing the bias term can disable this attack. Besides, this approach does not work on convolutional neural networks due to a dimension mismatch in Equation 3. Both of these two problems have been resolved in our approach.\n3.1 RECURSIVE GRADIENT ATTACK ON PRIVACY (R-GAP)\nFor simplicity we derive the R-GAP in terms of binary classification with a single image as input. In this setting we can generally describe the network and loss function as:\nµ = ywd =:fd−1(x)︷ ︸︸ ︷ σd−1 Wd−1 σd−2 (Wd−2φ (x))︸ ︷︷ ︸ =:fd−2(x) (4) ` = log(1 + e−µ) (5)\nwhere y ∈ {−1, 1}, d denotes the d-th layer, φ represents all layers previous to d− 2, and σ denotes the activation function. Note that, although our notation omits the bias term in our approach, with an augmented matrix and augmented vector it is able to represent both of the linear map and the translation, e.g. Equation 2, using matrix multiplication as shown in Equation 4. So our formulation also includes the approach proposed by Phong et al. (2018). Moreover, if the i-th layer is a convolutional layer, then Wi is an extended circulant matrix representing the convolutional kernel (Golub & Van Loan, 1996), and data x as well as input of each layer are represented by a flattened vector in Equation 4." }, { "heading": "3.1.1 RECOVERING DATA FROM GRADIENTS", "text": "From Equation 4 and Equation 5 we can derive following gradients: ∂`\n∂wd = y\n∂` ∂µ f>d−1 (6)\n∂`\n∂Wd−1 =\n(( w>d ( y ∂`\n∂µ\n)) σ′d−1 ) f>d−2 (7)\n∂`\n∂Wd−2 =\n(( W>d−1 (( w>d ( y ∂`\n∂µ\n)) σ′d−1 )) σ′d−2 ) φ> (8)\nwhere σ′ denotes the derivative of σ, for more details of deriving the gradients refer to Appendix H. The first observation of these gradients is that:\n∂`\n∂wd · wd =\n∂` ∂µ µ (9)\nAdditionally, if σ1, ... , σd−1 are ReLU or LeakyRelu, the dot product of the gradients and weights of each layer will be the same, i.e.:\n∂`\n∂wd · wd =\n∂`\n∂Wd−1 ·Wd−1 = ... 
=\n∂`\n∂W1 ·W1 =\n∂` ∂µ µ (10)\nSince gradients and weights of each layer are known, we can obtain ∂`∂µµ. If loss function ` is logistic loss (Equation 5), we obtain:\n∂` ∂µ µ = −µ 1 + eµ . (11)\nIn order to perform R-GAP, we need to derive µ from ∂`∂µµ. As we can see, ∂` ∂µµ is non-monotonic, which means knowing ∂`∂µµ does not always allow us to uniquely recover µ. However, even in the case that we cannot uniquely recover µ, there are only two possible values to consider. Figure 1 illustrates ∂`∂µµ of logistic, exponential, and hinge losses, showing when we can uniquely recover µ from ∂`∂µµ. The non-uniqueness of µ inspires us to find a sort of data that can trigger exactly the same gradients as the real data, which we name twin data, denoted by x̃. The existence of twin data demonstrates that the objective function of DLG could have more than one global minimum, which explains at least in part why DLG is sensitive to initialization, for more information and experiments about the twin data refer to Appendix B.\nThe second observation on Equations 6-8 is that the gradients of each layer have a repeated format: ∂`\n∂wd = kdf>d−1; kd := y\n∂` ∂µ (12)\n∂`\n∂Wd−1 = kd−1f>d−2; kd−1 :=\n( w>d kd ) σ′d−1 (13)\n∂`\n∂Wd−2 = kd−2φ>; kd−2 :=\n( W>d−1kd−1 ) σ′d−2 (14)\nIn Equation 12, the value of y can be derived from the sign of the gradients at this layer if the activation function of previous layer is ReLU or Sigmoid, i.e. fd−1 > 0. For multi-class classification, y can always be analytically derived as proved by Zhao et al. (2020). From Equations 12-14 we can see that gradients are actually linear constraints on the output of the previous layer, also the input of the current layer. We name these gradient constraints, which can be generally described as:\nKixi = flatten( ∂`\n∂Wi ), (15)\nwhere i denotes i-th layer, xi denotes the input and Ki is a coefficient matrix containing all gradient constraints at the i-th layer." }, { "heading": "3.1.2 IMPLEMENTATION OF R-GAP", "text": "To reconstruct the input xi from the gradients ∂`∂Wi at the i-th layer, we need to determine Ki or ki. The coefficient vector ki solely relies on the reconstruction of the subsequent layer. For example in Equation 13, kd−1 consists of wd,kd, σ′d−1, where wd is known, and kd and σ′d−1 are products of the reconstruction at the d-th layer. More specifically, kd can be calculated by deriving y and µ as described in Section 3.1.1, σ′d−1 can be derived from the reconstructed fd−1. The condition for recovering xi under gradient constraints ki is that the rank of the coefficient matrix equals the number of entries of the input, rank(Ki) = |xi|. Furthermore, if this rank condition holds for i = 1, ..., d, we are able to reconstruct the input at each layer and do this recursively back to the input of the first layer.\nThe number of gradient constraints is the same as the number of weights, i.e. rows(Ki) = |Wi|; i = 1, ..., d. Specifically, in the case of a fully connected layer we always have rank(Ki) = |xi|, which implies the reconstruction over FCNs is always feasible. However in the case of a convolutional layer the matrix could possibly be rank-deficient to derive x. Fortunately, from the view of recursive reconstruction and assuming we know the input of the subsequent layer, i.e. 
the output of the current layer, there is a new group of linear constraints which we name weight constraints:\nWixi = zi; zi ← fi (16)\nFor a convolution layer, the Wi we use in this paper is the corresponding circulant matrix representing the convolutional kernel (Golub & Van Loan, 1996), so we can express the convolution in the form of Equation 16. In order to derive zi from fi, the activation function σi should be monotonic. Commonly used activation functions satisfy this requirement. Note that for the ReLU activation function, a 0 value in fi will remove a constraint in Wi. Otherwise, the number of weights constraints is equal to the number of entries in output, i.e. rows(Wi) = |zi|; i = 1, ..., d. In CNNs the number of weight constraints |zi| is much larger than the number of gradient constraints |Wi| in bottom layers, and well compensate for the lack of gradient constraints in those layers. It is worth noting that, due to the transformation from a CNN to a FCN using the circulant matrix, a CNN has been regarded equivalent to a FCN in the parallel work of Fan et al. (2020). However, we would like to point out that in consideration of the gradients w.r.t. the circulant matrix, what we obtain from a CNN are the aggregated gradients. Therefore, the number of valid gradient constraints in a CNN are much smaller than its corresponding FCN. Therefore, the conclusion of a rank analysis derived from a FCN cannot be directly applied to a CNN.\nMoreover, padding in the i-th convolutional layer increases |xi|, but also involves the same number of constraints, so we omit this detail in the subsequent discussion. However, we have incorporated the corresponding constraints in our approach. Based on gradient constraints and weight constraints, we break the gradient attacks down to a recursive process of solving systems of linear equations, which we name R-GAP . The approach is detailed in Algorithm 1." }, { "heading": "4 RANK ANALYSIS", "text": "For optimization-based gradient attacks such as DLG, it is hard to estimate whether it will converge to a unique solution given a network’s architecture other than performing an empirical test. An intuitive assumption would be that the more parameters in the model, the greater the chance of unique recovery, since there will be more terms in the objective function constraining the solution. We provide here an analytic approach, with which it is easy to estimate the feasibility of performing\nAlgorithm 1: R-GAP (Notation is consistent with Equation 6 to Equation 15) Data: i: i-th layer; Wi: weights;∇Wi: gradients; Result: x1 for i← d to 1 do\nif i = d then ∂` ∂µµ = ∇Wi ·Wi; µ← ∂`∂µµ; ki := y ∂` ∂µ ; zi := µ y ; else /* Derive σ ′ i and zi from fi. Note that xi+1 = fi. */\nσ′i ← xi+1; zi ← xi+1; ki := (W>i+1 ki+1) σ′i;\nend Ki ← ki;∇wi := flatten(∇Wi); Ai := [\nWi Ki\n] ; bi := [ zi ∇w i ] ;\nxi := A†ibi // A † i:Moore-Penrose pseudoinverse\nend\nthe recursive gradient attack, which in turn is a good proxy to estimate when DLG converges to a good solution (see Figure 2).\nSince R-GAP solves a sequence of linear equations, it is infeasible when the number of unknown parameters is more than the number of constraints at any i-th layer, i.e. |xi|− |Wi|− |zi| > 0. More precisely, R-GAP requires that the rank of Ai, which consists of Wi and Ki as shown in Algorithm 1, is equal to the number of input entries |xi|. However, Aixi = zi does not include all effective constraints over xi. 
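For a single fully connected layer, the per-layer solve of Algorithm 1 reduces to stacking the weight and gradient constraints and taking a least-squares solution. The sketch below is a simplified illustration under two assumptions: k_i and z_i have already been propagated from the subsequent layer as in Algorithm 1, and the layer is fully connected (a convolutional layer would use the circulant form of W instead).

```python
import numpy as np

def rgap_solve_layer(W, grad_W, k, z):
    """One least-squares solve of Algorithm 1 for a fully connected layer: stack the
    weight constraints W x = z with the gradient constraints K x = flatten(grad_W),
    where grad_W = k x^T implies K = kron(k, I)."""
    n = W.shape[1]                                   # number of input entries |x_i|
    K = np.kron(k.reshape(-1, 1), np.eye(n))
    A = np.vstack([W, K])
    b = np.concatenate([z, grad_W.reshape(-1)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)        # Moore-Penrose style solve
    return x

# toy check: a 4 -> 3 layer with known x is recovered from (W, grad_W, k, z)
rng = np.random.default_rng(0)
W, x, k = rng.normal(size=(3, 4)), rng.normal(size=4), rng.normal(size=3)
print(np.allclose(rgap_solve_layer(W, np.outer(k, x), k, W @ x), x))   # True
```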
Because xi is unique to zi−1 or partly unique in terms of the ReLU activation function, any constraint over zi−1 will limit the possible value of xi. On that note, suppose |xi−1| = m, |zi−1| = n and the weight constraints at the i − 1 layer is overdetermined, i.e. Wi−1xi−1 = zi−1; m < n, rank(Wi−1) = m. Without the loss of generality, let the first m entries of zi−1 be linearly independent, them+1, . . . , n entries of zi−1 can be expressed as linear combination of the firstm entries, i.e. Mz1, ...,mi−1 = z m+1, ..., n i−1 . In other words, if the previous layers are overdetermined by weight constraints, the subsequent layer will have additional constraints, not merely its local weight constraints and gradient constraints. Since this type of additional constraint is not derived from the parameters of the layer that under reconstruction, we name them virtual constraints denoted by V . When the activation function is the identity function, the virtual constraints are linear and can be readily derived. For the derivative of the activation function not being a constant, the virtual constraints will become non-linear. For more details about deriving the virtual constraints, refer to Appendix C. Optimization based attacks such as DLG are iterative algorithms based on gradient descent, and are able to implicitly utilize the non-linear virtual constraints. Therefore to provide a comprehensive estimate of the data vulnerability under gradient attacks, we also have to count the number of virtual constraints. It is worth noticing that virtual constraints can be passed along through the linear equation systems chain, but only in one direction that is to the subsequent layers. Next, we will informally use |Vi| to denote the number of virtual constraints at the i-th layer, which can be approximated by ∑i−1 n=1max(|zn|− |xn|, 0)−max(|xn|− |zn|− |Wn|, 0). For more details refer to Appendix C. In practice, the real number of such constraints is dependent on the data, current weights, and choice of activation function.\nThese three types of constraints, gradient, weight and virtual constraints, are effective for predicting the risk of gradient attack. To conclude, we propose that |xi| − |Wi| − |zi| − |Vi| is a good index to estimate the feasibility of fully recovering the input using gradient attacks at the i-th layer. We denote this value rank analysis index (RA-i). Particularly, |xi| − |Wi| − |zi| − |Vi| > 0 indicates it is not possible to perform a complete reconstruction of the input, and the larger this index is, the poorer the quality of reconstruction will be. If the constraints in a particular problem are linearly independent, |xi| − |Wi| − |zi| − |Vi| < 0 implies the ability to fully recover the input. The quality of reconstruction of data is well estimated by the maximal RA-i of all layers, as shown in Figure 2. In practice, the layers close to the data usually have smaller RA-i due to fewer virtual constraints.\nOn top of that we analyse the residual block in ResNet, which shows some interesting traits of the skip connection in terms of the rank-deficiency, for more details refer to Appendix D.\nA valuable observation we obtain through the rank analysis is that the architecture rather than the number of parameters is critical to gradient attacks, as shown in Figure 2. This observation is not obvious from simply specifying the DLG optimization problem(see Equation 1). 
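As a toy illustration of the rank analysis index, the following sketch computes RA-i for a small stack of convolutional layers using the approximation of |V_i| given above. The layer shapes are arbitrary, and the count ignores padding bookkeeping and the data-dependent effects discussed in the text.

```python
# RA-i = |x_i| - |W_i| - |z_i| - |V_i| for a toy conv stack (shapes are illustrative).
def conv_out(h, k, s, p):
    return (h + 2 * p - k) // s + 1

def rank_analysis(layers, in_ch=3, in_hw=32):
    ra, virtual = [], 0
    for (out_ch, k, s, p) in layers:
        x = in_ch * in_hw * in_hw                    # |x_i|
        out_hw = conv_out(in_hw, k, s, p)
        z = out_ch * out_hw * out_hw                 # |z_i| (weight constraints)
        w = out_ch * in_ch * k * k                   # |W_i| (gradient constraints)
        ra.append(x - w - z - virtual)
        # propagate the approximate virtual constraints to subsequent layers
        virtual += max(z - x, 0) - max(x - z - w, 0)
        in_ch, in_hw = out_ch, out_hw
    return ra

# negative RA-i at every layer suggests full recovery is feasible for this toy stack
print(rank_analysis([(32, 3, 2, 1), (64, 3, 2, 1), (128, 3, 1, 1)]))
```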
Furthermore, since the data vulnerability of a network depends on the layer with maximal RA-i, we can design rank-deficiency into the architecture to improve the security of a network (see Figure 4)." }, { "heading": "5 RESULTS", "text": "Our novel approach R-GAP successfully extends the analytic gradient attack (Phong et al., 2018) from attacking a FCN with bias terms to attacking FCNs and CNNs1 with or without bias terms. To test its performance, we use a CNN6 network as shown in Figure 3, which is full-rank considering gradient constraints and weight constraints. Additionally, we report results using a CNN6-d network, which is rank-deficient without consideration of virtual constraints, in order to to fairly compare the performance of DLG and R-GAP. CNN6-d has a CNN6 backbone and just decreases the output channel of the second convolutional layer to 20. The activation function is a LeakyReLU except the last layer, which is a Sigmoid. We have randomly initialized the network, as DLG is prone to fail if the network is at a late stage of training (Geiping et al., 2020). Furthermore, as the label can be analytically recovered by R-GAP, we always provide DLG the ground-truth label and let it recover the image only. Therefore the experiment actually compares R-GAP with iDLG (Zhao et al., 2020). The experimental results show that, due to an analytic one-shot process, run-time of R-GAP is orders of magnitude shorter than DLG. Moreover, R-GAP can recover the data more accurately,\n1via equivalence between convolution and multiplication with a (block) circulant matrix.\nwhile optimization-based methods like DLG recover the data with artifacts, as shown in Figure 3. The statistical results in Table 1 also show that the reconstruction of R-GAP has a much lower MSE than DLG on the CNN6 network. However, as R-GAP only considers gradient constraints and weight constraints in the current implementation, it does not work well on the CNN6-d network. Nonetheless, we find that it is easy to assess the quality of reconstruction of gradient attack without knowing the original image. As the better reconstruction has less salt-and-pepper type noise. We measure this by the difference of the image and its smoothed version (achieved by a simple 3x3 averaging) and select the output with the smaller norm. This hybrid approach which we name HGAP combines the strengths of R-GAP and DLG, and obtains the best results.\nMoreover, we compare R-GAP with DLG on LeNet which has been benchmarked in DLG(Zhu et al., 2019), the statistical results are shown in Table 2. Both DLG and R-GAP perform well on LeNet. Empirically, if the MSE is around or below 1×10−4, the difference of the reconstruction will be visually undetectable. However, we surprisingly find that by replacing the Sigmoid function with the Leaky ReLU, the reconstruction of DLG becomes much poorer. The condition number of matrix A (from Algorithm 1) changes significantly in this case. Since the Sigmoid function leads to a higher condition number at each convolutional layer, reconstruction error in the subsequent layer could be amplified in the previous layer, therefore DLG is forced to converge to a better result. In contrast, R-GAP has an accumulated error and naturally performs much better on LeNet*. Additionally, we find R-GAP could be a good initialization tool for DLG. As shown in the last column of Table 2, by initializing DLG with the reconstruction of R-GAP, and running 8% of the previous iterations, we achieve a visually indistinguishable result. 
However, for LeNet*, we find that DLG reduces the reconstruction quality obtained by R-GAP, which further shows the instability of DLG.\nOur rank analysis is a useful offline tool to understand the risk inherent in certain network architectures. More precisely, we can use the rank analysis to find out the critical layer for the success of\ngradient attacks and take precision measurements to improve the network’s defendability. We report results on the ResNet-18, where the third residual block is critical since by cutting its skip connection the RA-i increases substantially. To perform the experiments, we use the approach proposed by Geiping et al. (2020), which extends DLG to incorporate image priors and performs better on deep networks. As shown in Figure 4, by cutting the skip connection of the third residual block, reconstructions become significantly poorer and more unstable. As a control test, cutting the skip connection of a non-critical residual block does not increase defendability noticeably. Note that two variants have the same or even slightly better performance on the classification task compared with the backbone. In previous works (Zhu et al., 2019; Wei et al., 2020), trade-off between accuracy and defendability of adding noise to gradients has been discussed. We show that using the rank analysis we are able to increase the defendability of a network with no cost in accuracy." }, { "heading": "6 DISCUSSION AND CONCLUSIONS", "text": "R-GAP makes the first step towards a general analytic gradient attack and provides a framework to answer questions about the functioning of optimization-based attacks. It also opens new questions, such as how to analytically reconstruct a minibatch of images, especially considering nonuniqueness due to permutation of the image indices. Nonetheless, we believe that by studying these questions, we can gain deeper insights into gradient attacks and privacy secure federated learning.\nIn this paper, we propose a novel approach R-GAP, which has achieved an analytic gradient attack for CNNs for the first time. Through analysing the recursive reconstruction process, we propose a novel rank analysis to estimate the feasibility of performing gradient based privacy attacks given a network architecture. Our rank analysis can be applied to the analysis of both closed-form and optimization-based attacks such as DLG. Using our rank analysis, we are able to determine network modifications that maximally improve the network’s security, empirically without sacrificing its accuracy. Furthermore, we have analyzed the existence of twin data using R-GAP, which can explain at least in part why DLG is sensitive to initialization and what type of initialization is optimal. In summary, our work proposes a novel type of gradient attack, a risk estimation tool and advances the understanding of optimization-based gradient attacks." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This research received funding from the Flemish Government (AI Research Program)." }, { "heading": "A QUANTITATIVE RESULTS OF RANK ANALYSIS", "text": "A quantitative analysis of the predictive performance of the rank analysis index for the mean squared error of reconstruction is shown in Table 3." }, { "heading": "B TWIN DATA", "text": "As we know ∂`∂µµ is non-monotonic as shown in Figure 1, which means knowing ∂` ∂µµ does not always allow us to uniquely recover µ. 
It is relatively straightforward to show that for monotonic convex losses (Bartlett et al., 2006), ∂`∂µµ is invertible for µ < 0, ∂` ∂µµ ≤ 0 for µ ≥ 0, and limµ→∞ ∂`∂µµ = 0. Due to the non-uniqueness of µ w.r.t to ∂` ∂µµ, we have:\n∃ x, x̃ s.t. µ 6= µ̃; ∂` ∂µ µ = ∂` ∂µ̃ µ̃ (17)\nwhere x is the real data.\nTaking the common setting that activation functions are ReLU or LeakyReLU, we can derive from Eq. 10 that:\n∂`\n∂Wi ·Wi =\n∂`\n∂W̃ · W̃i; i = 1, . . . , d (18)\nif there is a W̃i is equal to Wi, whereas the corresponding x̃ is not same as x since µ 6= µ̃, we can find a data point that differs from the true data but leads to the same gradients. We name such data twin data, denoted by x̃. As we know the gradients and µ of the twin data x̃, by just giving them to R-GAP, we are able to easily find out the twin data. As shown in in Figure 5, twin data is actually proportional to the real data and smaller than it, which can also be straightforwardly derived from Equation 6 to Equation 8. Since the twin data and the real data trigger the same gradients, by decreasing the distance of gradients as Equation 1, DLG is suppose to converge to either of these data. As shown in Figure 5, we initialize DLG with a data close to the twin data x̃, DLG converges to the twin data. In the work of Wei et al. (2020), the authors argue that using an image from the same\nclass as the real data would be the optimal initialization and empirically prove that. We want to point out that twin data is one important factor why DLG is so sensitive to the initialization and prone to fail with random initialization of dummy data particularly after some training steps of the network. Since DLG converges either to the twin data or the real data depends on the distance between these two data and the initialization, an image of the same class is usually close to the real data, therefore, DLG works better with that. While, with respect to µ or the prediction of the network, a random initialization is close to the twin data, so DLG converges to the twin data. However, the twin data has extremely small value, so any noise that comes up with optimization process stands out in the last result as shown in Figure 5.\nIt is worth noting that the twin data can be fully reconstructed only if RA-i < 0. In other words, if complete reconstruction is feasible and the twin data exits, R-GAP and DLG can recover either the twin data or real data depend on the initialization. But both of them lead to privacy leakage.\nC VIRTUAL CONSTRAINTS\nIn this section we investigate the virtual constraints as proposed in the rank analysis. To the beginning, let us derive the explicit virtual constraints from the i − 1 layer at the reconstruction of the i layer by assuming the activation function is an identity function. The weight constraints of the i− 1 layer can be expressed as:\nWxi−1 = z;\nSplit W, z into two parts coherently, i.e.:[ W+ W− ] xi−1 = [ z+ z− ] (19)\nAssume the upper part of the weights W+ is already full rank, therefore: z+ = I+z (20)\nxi−1 = W−1+ I+z (21) z− = I−z (22)\nW−xi−1 = I−z (23) Substituting Equation 21 into Equation 23, we can derive the following constraints over z after rearranging:\n(W−W−1+ I+ − I−)z = 0 (24) Since the activation function is the identity function, i.e. 
z = xi, the virtual constraints V that the i-th layer has inherited from the weight constraints of i− 1 layer are:\nVxi = 0; V = W−W−1+ I+ − I− (25) Virtual constraints as external constraints are able to compensate the local rank-deficiency of an intermediate layer. For other strictly monotonic activation function like Leaky ReLU, Sigmoid, Tanh, the virtual constraints over xi can be expressed as: Vσ−1i−1(xi) = 0 (26) This is not a linear equation system w.r.t. xi, therefore it is hard to be incorporated in R-GAP. In terms of ReLU the virtual constraints could become further more complicated which will reduce its efficacy. Nevertheless, the reconstruction of the i-th layer must take the virtual constraints into account. Otherwise, it will trigger a non-negligible reconstruction error later on. From this perspective, we can see that iterative algorithms like optimization-based attacks can inherently utilize such virtual constraints, which is a strength of O-GAP.\nWe would like to point out that theoretically the gradient constraints also have the same effect as the weight constraints in the virtual constraints but in a more sophisticated way. Empirical results show that the gradient constraints of previous layers do not have an evident impact on the subsequent layer in the O-GAP, so we have not taken it into account. The number of virtual constraints at i-th layer can therefore be approximated by ∑i−1 n=1max(|zn| − |xn|, 0)−max(|xn| − |zn| − |Wn|, 0)." }, { "heading": "D RANK ANALYSIS OF THE SKIP CONNECTION", "text": "If the skip connection skips one layer, for simplicity assuming the activation function is the identity function, then the layer can be expressed as:\nf = W∗x; W∗ = W + I (27) where f is the output of this layer, the weight matrix W∗ is clear and the number of weight constraints is equal to |f |. While the expression of gradients are the same as without skip connection, since:\n∇W∗ = ∇W (28) Therefore the number of gradient constraints is equal to |W|. In other words, without consideration of the virtual constraints, if |f | + |W| < |x| this layer is locally rank-deficient, otherwise it is full rank. This is the same as removing the skip connection.\nIf the skip connection skips over two layers, for simplicity assuming the activation function is identity function, then the residual block can be expressed as:\nx2 = W1x1; f = W2x2 + x1 (29) Whereas, the residual block has its equivalent fully connected format, i.e.:\nW∗1 = [ W1 I ] ; W∗2 = [W2 I] (30)\nx∗2 = W ∗ 1x1 =\n[ W1x1\nx1\n] (31)\nf = W∗2W ∗ 1x1 (32)\nFrom the perspective of a recursive reconstruction, f is clear, so after the reconstruction of x2, the input of this block x1 can be directly calculated by subtracting W2x2 from f as shown in Equation 29. Back to the Equation 31 that means only x∗2 needs to be recovered. Similar to the analysis for one layer, in terms of the reconstruction of x∗2, the number of weight constraints is |f | and the number of gradient constraints is |W2|. On top of that the upper part and lower part of x∗2 are related, which actually represents the virtual constraints from the first layer. Taking these into account, there are |W2|+ |f |+ |x2| constraints for the reconstruction of x∗2. However, x∗2 is also augmented compared with x2 and the number of entries is |x1|+ |x2|. To conclude, if |f |+ |W2| < |x1| the residual block is locally rank-deficient, otherwise it is full rank. 
Seemingly, the constraints of the last layer have been used to reconstruct the input of the residual block due to the skip connection2. This is an interesting trait, because the skip connection is able to make the rank-deficient layers like bottlenecks again full rank, as shown in Figure 6. It is worth noticing that the bottlenecks have been commonly used for residual blocks. Further, downsampling residual blocks also have this characteristic of rank condition, as the gradient constraints in the last layer are much more than the first layer due to the number of channels.\nE IMPROVING DEFENDABILITY OF RESNET101\nWe also apply the rank analysis to ResNet101 and try to improve its defendability. However, we find that this network is too redundant. It is not possible to decrease the RA-i by cutting a single skip connection as was done in Figure 4. Nevertheless, we devise two variants, the first of which cuts the skip connection of the third residual block and generates a layer that is locally rank-deficient\n2Through formulating the residual block with its equivalent sequential structure, this conclusion readily generalizes to residual blocks with three layers.\nand requires a large number of virtual constraints. Additionally, we devise a second variant, which cuts the skip connection of the first residual block and reduces the redundancy of two layers. The accuracy and reconstruction error of these networks can be found in Table 4." }, { "heading": "F R-GAP IN THE BATCH SETTING RETURNS A LINEAR COMBINATION OF", "text": "TRAINING IMAGES\nIt can be verified straightforwardly that R-GAP in the batch setting will return a linear combination of the training data. This is due to the fact that in the batch setting the gradients are simply accumulated. The weighting coefficients of the data in this linear mixture are dependent on the various values of µ for the different training data (see Figure 1). Figures 7 and 8 illustrate the results vs. batch DLG (Zhu et al., 2019) on examples from MNIST.\nOrigin R-GAP DLG" }, { "heading": "G ADDING NOISE TO THE GRADIENTS", "text": "The effect on reconstruction of adding noise to the gradients is illustrated in Figure 9.\nH DERIVING GRADIENTS\nµ = ywd =:fd−1(x)︷ ︸︸ ︷ σd−1 Wd−1 σd−2 (Wd−2φ (x))︸ ︷︷ ︸ =:fd−2(x) (33) ` = log(1 + e−µ) (34)\nd` = −µ\n1 + eµ dµ;\n∂` ∂µ = −µ 1 + eµ (35)\nd` = ( ∂`\n∂µ y) · d(wdfd−1(x)) (36)\nd` = ( ∂`\n∂µ y) · (d(wd)fd−1(x) + wdd(fd−1(x))) (37)\nd` = ∂`\n∂µ yf>d−1(x) · dwd + (w>d (\n∂` ∂µ y)) · dfd−1(x) (38)\n∂` ∂wd = ∂` ∂µ yf>d−1 (39)\nd` = ∂`\n∂wd · dwd + (w>d (\n∂` ∂µ y)) · (σ′d−1 dfd−1(x)) (40)\nd` = ∂`\n∂wd · dwd + ((w>d (\n∂` ∂µ y)) σ′d−1) · dfd−1(x) (41)\nd` = ∂`\n∂wd · dwd + ((w>d (\n∂` ∂µ y)) σ′d−1) · (d(Wd−1)fd−2(x) + Wd−1d(fd−2(x))) (42)\n∂`\n∂Wd−1 =\n(( w>d ( ∂`\n∂µ y\n)) σ′d−1 ) f>d−2 (43)\nd` = ∂`\n∂wd · dwd +\n∂`\n∂Wd−1 · dWd−1 + W>d−1((w>d (\n∂` ∂µ y)) σ′d−1) · dfd−2(x) (44)\n. . ." } ]
2021
R-GAP: RECURSIVE GRADIENT ATTACK ON PRIVACY
SP:6cf84af3e1ae0c84dc251ba41a5acb3dc7f61645
[ "Considering a continuous time RNN with Lipschitz-continuous nonlinearity, the authors formulate sufficient conditions on the parameter matrices for the network to be globally stable, in the sense of a globally attracting fixed point. They provide a specific parameterization for the hidden-to-hidden weight matrices to control global stability and error gradients, consisting of a weighted combination of a symmetric and a skew-symmetric matrix (and some diagonal offset). The authors discuss numerical integration by forward-Euler and RK2, and thoroughly benchmark their approach against a large set of other state-of-the-art RNNs on various tasks including versions of MNIST and TIMIT. Finally, they highlight improved stability of their RNN against parameter and input perturbations." ]
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state’s evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity. This particular functional form facilitates stability analysis of the long-term behavior of the recurrent unit using tools from nonlinear systems theory. In turn, this enables architectural design decisions before experimentation. Sufficient conditions for global stability of the recurrent unit are obtained, motivating a novel scheme for constructing hidden-to-hidden matrices. Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks, including computer vision, language modeling and speech prediction tasks. Finally, through Hessian-based analysis we demonstrate that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
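Based only on the description in the abstract and the review summary above, one plausible sketch of such a recurrent unit is the following: a linear hidden-to-hidden term plus a tanh (Lipschitz) nonlinearity, with the hidden-to-hidden matrices built as a weighted combination of symmetric and skew-symmetric parts minus a small diagonal offset, discretized by forward Euler. The weighting scheme, the beta/gamma/eps defaults, and the class name are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LipschitzRNNCell(nn.Module):
    """Sketch of a continuous-time recurrent unit h' = A h + tanh(W h + U x + b),
    discretized with a forward-Euler step of size eps (all defaults are illustrative)."""
    def __init__(self, input_dim, hidden_dim, beta=0.75, gamma=0.001, eps=0.03):
        super().__init__()
        self.M_A = nn.Parameter(torch.randn(hidden_dim, hidden_dim) / hidden_dim**0.5)
        self.M_W = nn.Parameter(torch.randn(hidden_dim, hidden_dim) / hidden_dim**0.5)
        self.U = nn.Linear(input_dim, hidden_dim)
        self.beta, self.gamma, self.eps = beta, gamma, eps

    def _mix(self, M):
        # weighted symmetric/skew-symmetric combination with a diagonal offset
        sym = 0.5 * (M + M.T)
        skew = 0.5 * (M - M.T)
        eye = torch.eye(M.shape[0], device=M.device)
        return (1 - self.beta) * sym + self.beta * skew - self.gamma * eye

    def forward(self, x, h):
        A, W = self._mix(self.M_A), self._mix(self.M_W)
        dh = h @ A.T + torch.tanh(h @ W.T + self.U(x))   # continuous-time right-hand side
        return h + self.eps * dh                          # forward-Euler update
```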
[ { "affiliations": [], "name": "Omri Azencot" }, { "affiliations": [], "name": "Alejandro Queiruga" }, { "affiliations": [], "name": "Michael W. Mahoney" } ]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yoshua Bengio", "Patrice Simard", "Paolo Frasconi" ], "title": "Learning long-term dependencies with gradient descent is difficult", "venue": "IEEE Transactions on Neural Networks,", "year": 1994 }, { "authors": [ "Rajendra Bhatia" ], "title": "Matrix analysis, volume 169", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Léon Bottou", "Olivier Bousquet" ], "title": "The tradeoffs of large scale learning", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2008 }, { "authors": [ "Bo Chang", "Minmin Chen", "Eldad Haber", "Ed Chi" ], "title": "AntisymmetricRNN: A dynamical system view on recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Tian Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tommy W.S. Chow", "Xiao-Dong Li" ], "title": "Modeling of continuous time dynamical systems with input by recurrent neural networks", "venue": "IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications,", "year": 2000 }, { "authors": [ "Marco Ciccone", "Marco Gallieri", "Jonathan Masci", "Christian Osendorfer", "Faustino Gomez" ], "title": "Nais-net: Stable deep networks from non-autonomous differential equations", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ken-ichi Funahashi", "Yuichi Nakamura" ], "title": "Approximation of dynamical systems by continuous time recurrent neural networks", "venue": "Neural Networks,", "year": 1993 }, { "authors": [ "John S. Garofolo" ], "title": "TIMIT acoustic phonetic continuous speech corpus", "venue": "Linguistic Data Consortium,", "year": 1993 }, { "authors": [ "Behrooz Ghorbani", "Shankar Krishnan", "Ying Xiao" ], "title": "An investigation into neural net optimization via Hessian eigenvalue density", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ian J. 
Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Wolfgang Hahn" ], "title": "Stability of motion, volume 138", "venue": null, "year": 1967 }, { "authors": [ "Kyle Helfrich", "Devin Willmott", "Qiang Ye" ], "title": "Orthogonal recurrent neural networks with scaled Cayley transform", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Mikael Henaff", "Arthur Szlam", "Yann LeCun" ], "title": "Recurrent orthogonal networks and long-memory tasks. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2016 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "Learning distributed representations of concepts", "venue": "In Conference of the Cognitive Science Society,", "year": 1986 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Roger A. Horn", "Charles R. Johnson" ], "title": "Matrix analysis", "venue": null, "year": 2012 }, { "authors": [ "Sanqing Hu", "Jun Wang" ], "title": "Global stability of a class of continuous-time recurrent neural networks", "venue": "IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications,", "year": 2002 }, { "authors": [ "Li Jing", "Yichen Shen", "Tena Dubcek", "John Peurifoy", "Scott Skirlo", "Yann LeCun", "Max Tegmark", "Marin Soljačić" ], "title": "Tunable efficient unitary neural networks (EUNN) and their application to RNNs", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Cijo Jose", "Moustapha Cisse", "Francois Fleuret" ], "title": "Kronecker recurrent units", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Anil Kag", "Ziming Zhang", "Venkatesh Saligrama" ], "title": "RNNs incrementally evolving on an equilibrium manifold: A panacea for vanishing and exploding gradients", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Giancarlo Kerg", "Kyle Goyette", "Maximilian Puelma Touzel", "Gauthier Gidel", "Eugene Vorontsov", "Yoshua Bengio", "Guillaume Lajoie" ], "title": "Non-normal recurrent neural network (nnRNN): Learning long time dependencies while improving expressivity with transient dynamics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hassan K. Khalil" ], "title": "Nonlinear Systems. Pearson Education", "venue": null, "year": 2002 }, { "authors": [ "Young H. Kim", "Frank L. Lewis", "Chaouki T. Abdallah" ], "title": "Nonlinear observer design using dynamic recurrent neural networks", "venue": "In Proceedings of 35th IEEE Conference on Decision and Control,", "year": 1996 }, { "authors": [ "Aditya Kusupati", "Manish Singh", "Kush Bhatia", "Ashish Kumar", "Prateek Jain", "Manik Varma" ], "title": "Fastgrnn: A fast, accurate, stable and tiny kilobyte sized gated recurrent neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Quoc V. Le", "Navdeep Jaitly", "Geoffrey E. 
Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Mathias Lechner", "Ramin Hasani" ], "title": "Learning long-term dependencies in irregularly-sampled time series", "venue": "arXiv preprint arXiv:2006.04418,", "year": 2020 }, { "authors": [ "Randall J. LeVeque" ], "title": "Finite Difference Methods for Ordinary and Partial Differential Equations", "venue": "Society for Industrial and Applied Mathematics,", "year": 2007 }, { "authors": [ "Mario Lezcano-Casado", "David Martinez-Rubio" ], "title": "Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Xiao-Dong Li", "John K.L. Ho", "Tommy W.S. Chow" ], "title": "Approximation of dynamical time-variant systems by continuous-time recurrent neural networks", "venue": "IEEE Transactions on Circuits and Systems II: Express Briefs,", "year": 2005 }, { "authors": [ "Mitchell Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of English: the Penn Treebank", "venue": null, "year": 1993 }, { "authors": [ "Hongyuan Mei", "Jason M Eisner" ], "title": "The neural hawkes process: A neurally self-modulating multivariate point process", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zakaria Mhammedi", "Andrew Hellicar", "Ashfaqur Rahman", "James Bailey" ], "title": "Efficient orthogonal parametrisation of recurrent neural networks using Householder reflections", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "John Miller", "Moritz Hardt" ], "title": "Stable recurrent models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: A simple and accurate method to fool deep neural networks", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Tan M. Nguyen", "Richard G. Baraniuk", "Andrea L. Bertozzi", "Stanley J. Osher", "Bao Wang" ], "title": "Momentumrnn: Integrating momentum into recurrent neural networks", "venue": "arXiv preprint arXiv:2006.06919,", "year": 2020 }, { "authors": [ "Murphy Yuezhen Niu", "Lior Horesh", "Isaac Chuang" ], "title": "Recurrent neural networks in the eye of differential equations", "venue": "arXiv preprint arXiv:1904.12933,", "year": 2019 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Barak A. Pearlmutter" ], "title": "Gradient calculations for dynamic recurrent neural networks: A survey", "venue": "IEEE Transactions on Neural networks,", "year": 1995 }, { "authors": [ "Fernando J. Pineda" ], "title": "Dynamics and architecture for neural computation", "venue": "Journal of Complexity,", "year": 1988 }, { "authors": [ "Yulia Rubanova", "Tian Chen", "David Duvenaud" ], "title": "Latent ordinary differential equations for irregularly-sampled time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Levent Sagun", "Utku Evci", "V. 
Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the Hessian of over-parametrized neural networks", "venue": "arXiv preprint arXiv:1706.04454,", "year": 2017 }, { "authors": [ "Shankar Sastry" ], "title": "Nonlinear systems: Analysis, stability, and control, volume 10", "venue": null, "year": 2013 }, { "authors": [ "Adam P. Trischler", "Gabriele M. T" ], "title": "D’Eleuterio. Synthesis of recurrent neural networks for dynamical system simulation", "venue": "Neural Networks,", "year": 2016 }, { "authors": [ "Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal" ], "title": "On orthogonality and learning recurrent networks with long term dependencies", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Scott Wisdom", "Thomas Powers", "John Hershey", "Jonathan Le Roux", "Les Atlas" ], "title": "Full-capacity unitary recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Zhewei Yao", "Amir Gholami", "Kurt Keutzer", "Michael W. Mahoney" ], "title": "PyHessian: Neural networks through the lens of the Hessian", "venue": null, "year": 1912 }, { "authors": [ "Huaguang Zhang", "Zhanshan Wang", "Derong Liu" ], "title": "A comprehensive review of stability analysis of continuous-time recurrent neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2014 }, { "authors": [ "Zhang" ], "title": "A PROOFS A.1 PROOFS OF THEOREM 1 AND LEMMA 1 There are numerous ways that one can analyze the global stability of (4) through the related model", "venue": null, "year": 2021 }, { "authors": [ "Kerg" ], "title": "2019), which computes the performance in terms of mean bits per character (BPC). Table 6 shows the results for back-propagation through time (BPTT) over 150 and 300 time steps, respectively. The Lipschitz RNN performs slightly better then the exponential RNN and the nonnormal", "venue": "RNN on this task. (Kerg et al.,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many interesting problems exhibit temporal structures that can be modeled with recurrent neural networks (RNNs), including problems in robotics, system identification, natural language processing, and machine learning control. In contrast to feed-forward neural networks, RNNs consist of one or more recurrent units that are designed to have dynamical (recurrent) properties, thereby enabling them to acquire some form of internal memory. This equips RNNs with the ability to discover and exploit spatiotemporal patterns, such as symmetries and periodic structures (Hinton, 1986). However, RNNs are known to have stability issues and are notoriously difficult to train, most notably due to the vanishing and exploding gradients problem (Bengio et al., 1994; Pascanu et al., 2013).\nSeveral recurrent models deal with the vanishing and exploding gradients issue by restricting the hidden-to-hidden weight matrix to be an element of the orthogonal group (Arjovsky et al., 2016; Wisdom et al., 2016; Mhammedi et al., 2017; Vorontsov et al., 2017; Lezcano-Casado & MartinezRubio, 2019). While such an approach is advantageous in maintaining long-range memory, it limits the expressivity of the model. To address this issue, recent work suggested to construct hidden-tohidden weights which have unit norm eigenvalues and can be nonnormal (Kerg et al., 2019). Another approach for resolving the exploding/vanishing gradient problem has recently been proposed by Kag et al. (2020), who formulate the recurrent units as a differential equation and update the hidden states based on the difference between predicted and previous states.\nIn this work, we address these challenges by viewing RNNs as dynamical systems whose temporal evolution is governed by an abstract system of differential equations with an external input. The data are formulated in continuous-time where the external input is defined by the function x = x(t) ∈ Rp, and the target signal is defined as y = y(t) ∈ Rd. Based on insights from dynamical systems theory, we propose a continuous-time Lipschitz recurrent neural network with the functional form{\nḣ = AβA,γAh+ tanh(WβW ,γW h+ Ux+ b) ,\ny = Dh ,\n(1a) (1b)\nwhere the hidden-to-hidden matrices Aβ,γ ∈ RN×N and Wβ,γ ∈ RN×N are of the form{ AβA,γA = (1− βA)(MA +MTA ) + βA(MA −MTA )− γAI WβW ,γW = (1− βW )(MW +MTW ) + βW (MW −MTW )− γW I,\n(2a)\n(2b)\nwhere βA, βW ∈ [0, 1], γA, γW > 0 are tunable parameters and MA,MW ∈ RN×N are trainable matrices. Here, h = h(t) ∈ RN is a function of time t that represents an internal (hidden) state, and ḣ = ∂h(t)∂t is its time derivative. The hidden state represents the memory that the system has of its past. The function in Eq. (1) is parameterized by the hidden-to-hidden weight matrices A ∈ RN×N and W ∈ RN×N , the input-to-hidden encoder matrix U ∈ RN×p, and an offset b. The function in Eq. (1b) is parameterized by the hidden-to-output decoder matrix D ∈ Rd×N . Nonlinearity is introduced via the 1-Lipschitz tanh activation function. While RNNs that are governed by differential equations with an additive structure have been studied before (Zhang et al., 2014), the specific formulation that we propose in (1) and our theoretical analysis are distinct.\nTreating RNNs as dynamical systems enables studying the long-term behavior of the hidden state with tools from stability analysis. 
From this point of view, an unstable unit presents an exploding gradient problem, while a stable unit has well-behaved gradients over time (Miller & Hardt, 2019). However, a stable recurrent unit can suffer from vanishing gradients, leading to catastrophic forgetting (Hochreiter & Schmidhuber, 1997b). Thus, we opt for a stable model whose dynamics do not (or only slowly do) decay over time. Importantly, stability is also a statement about the robustness of neural units with respect to input perturbations, i.e., stable models are less sensitive to small perturbations compared to unstable models. Recently, Chang et al. (2019) explored the stability of linearized RNNs and provided a local stability guarantee based on the Jacobian. In contrast, the particular structure of our unit (1) allows us to obtain guarantees of global exponential stability using control theoretical arguments. In turn, the sufficient conditions for global stability motivate a novel symmetric-skew decomposition based scheme for constructing hidden-to-hidden matrices. This scheme alleviates exploding and vanishing gradients, while remaining highly expressive.\nIn summary, the main contributions of this work are as follows:\n• First, in Section 3, using control theoretical arguments in a direct Lyapunov approach, we provide sufficient conditions for global exponential stability of the Lipschitz RNN unit (Theorem 1). Global stability is advantageous over local stability results since it guarantees non-exploding gradients regardless of the state. In the special case where A is symmetric, we find that these conditions agree with those in classical theoretical analyses (Lemma 1).\n• Next, in Section 4, drawing from our stability analysis, we propose a novel scheme based on the symmetric-skew decomposition for constructing hidden-to-hidden matrices. This scheme mitigates the vanishing and exploding gradients problem, while obtaining highly expressive hidden-to-hidden matrices.\n• In Section 6, we show that our Lipschitz RNN has the ability to outperform state-of-theart recurrent units on computer vision, language modeling and speech prediction tasks. Further, our results show that the higher-order explicit midpoint time integrator improves the predictive accuracy as compared to using the simpler one-step forward Euler scheme.\n• Finally, in Section 7), we study our Lipschitz RNN via the lens of the Hessian and show that it is robust with respect to parameter perturbations; we also show that our model is more robust with respect to input perturbations, compared to other continuous-time RNNs." }, { "heading": "2 RELATED WORK", "text": "The problem of vanishing and exploding gradients (and stability) have a storied history in the study of RNNs. Below, we summarize two particular approaches to the problem (constructing unitary/orthogonal RNNs and the dynamical systems viewpoint) that have gained significant attention.\nUnitary and orthogonal RNNs. Unitary recurrent units have received attention recently, largely due to Arjovsky et al. (2016) showing that unitary hidden-to-hidden matrices alleviate the vanishing and exploding gradients problem. Several other unitary and orthogonal models have also been proposed (Wisdom et al., 2016; Mhammedi et al., 2017; Jing et al., 2017; Vorontsov et al., 2017; Jose et al., 2018). While these approaches stabilize the training process of RNNs considerably, they also\nlimit their expressivity and their prediction accuracy. 
Further, unitary RNNs are expensive to train, as they typically involve the computation of a matrix inverse at each step of training. Recent work by Lezcano-Casado & Martinez-Rubio (2019) overcame some of these limitations. By leveraging concepts from Riemannian geometry and Lie group theory, their recurrent unit exhibits improved expressivity and predictive accuracy on a range of benchmark tasks while also being efficient to train. Another competitive recurrent design was recently proposed by Kerg et al. (2019). Their approach is based on the Schur decomposition, and it enables the construction of general nonnormal hidden-to-hidden matrices with unit-norm eigenvalues.\nDynamical systems inspired RNNs. The continuous time view of RNNs has a long history in the neurodynamics community as it provides higher flexibility and increased interpretability (Pineda, 1988; Pearlmutter, 1995; Zhang et al., 2014). In particular, RNNs that are governed by differential equations with an additive structure have been extensively studied from a theoretical point of view (Funahashi & Nakamura, 1993; Kim et al., 1996; Chow & Li, 2000; Hu & Wang, 2002; Li et al., 2005; Trischler & D’Eleuterio, 2016). See Zhang et al. (2014) for a comprehensive survey of continuous-time RNNs and their stability properties.\nRecently, several works have adopted the dynamical systems perspective to alleviate the challenges of training RNNs which are related to the vanishing and exploding gradients problem. For nonsequential data, Ciccone et al. (2018) proposed a negative-definite parameterization for enforcing stability in the RNN during training. Chang et al. (2019) introduced an antisymmetric hidden-tohidden weight matrix and provided guarantees for local stability. Kag et al. (2020) have proposed a differential equation based formulation for resolving the exploding/vanishing gradients problem by updating the hidden states based on the difference between predicted and previous states. Niu et al. (2019) employed numerical methods for differential equations to study the stability of RNNs.\nAnother line of recent work has focused on continuous-time models that deal with irregular sampled time-series, missing values and multidimensional time series. Rubanova et al. (2019) and De Brouwer et al. (2019) formulated novel recurrent models based on the theory of differential equations and their discrete integration. Lechner & Hasani (2020) extended these ordinary differential equation (ODE) based models and addresses the issue of vanishing and exploding gradients by designing an ODE-model that is based on the idea of long short-term memory (LSTM). This ODE-LSTM outperforms the continuous-time LSTM (Mei & Eisner, 2017) as well as the GRU-D model (Che et al., 2018) that is based on a gated recurrent unit (GRU).\nThe link between dynamical systems and models for forecasting sequential data also provides the opportunity to incorporate physical knowledge into the learning process which improves the generalization performance, robustness, and ability to learn with limited data (Chen et al., 2019)." }, { "heading": "3 STABILITY ANALYSIS OF LIPSCHITZ RECURRENT UNITS", "text": "One of the key contributions in this work is that we prove that model (1) is globally exponentially stable under some mild conditions on A and W . Namely, for any initial hidden state we can guarantee that our Lipschitz unit converges to an equilibrium if it exists, and therefore, gradients can never explode. 
We improve upon recent work on stability in recurrent models, which provide only a local analysis, see e.g., (Chang et al., 2019). In fact, global exponential stability is among the strongest notions of stability in nonlinear systems theory, implying all other forms of Lyapunov stability about the equilibrium h∗ (Khalil, 2002, Definitions 4.4 and 4.5). Definition 1. A point h∗ is an equilibrium point of ḣ = f(h, t) if f(h∗, t) = 0 for all t. Such a point is globally exponentially stable if there exists some C > 0 and λ > 0 such that for any choice of initial values h(0) ∈ RN ,\n‖h(t)− h∗‖ ≤ Ce−λt‖h(0)− h∗‖, for any t ≥ 0. (3)\nThe presence of a Lipschitz nonlinearity in (1) plays an important role in our analysis. While we focus on tanh in our experiments, our proof is more general and is applicable to models whose nonlinearity σ(·) is an M -Lipschitz function. Specifically, we consider the general model\nḣ = Ah+ σ(Wh+ Ux+ b) , (4) for which we have the following stability result. In the following, we let σmin and σmax denote the smallest and largest singular values of the hidden-to-hidden matrices, respectively.\nTheorem 1. Let h∗ be an equilibrium point of a differential equation of the form (4) for some x ∈ Rp. The point h∗ is globally exponentially stable if the eigenvalues of Asym := 12 (A + A\nT ) are strictly negative, W is non-singular, and either (a) σmin(Asym) > Mσmax(W ); or (b) σ is monotone non-decreasing, W +WT is negative definite, and ATW +WTA is positive definite.\nThe two cases show that global exponential stability is guaranteed if either (a) the matrix A has eigenvalues with real parts sufficiently negative to counteract expanding trajectories in the nonlinearity; or (b) the nonlinearity is monotone, both A and W yield stable linear systems u̇ = Au, v̇ = Wv, and A,W have sufficiently similar eigenvectors. In practice, case (b) occasionally holds, but is challenging to ensure without assuming specific structure onA,W . Because such assumptions could limit the expressiveness of the model, the next section will develop a tunable formulation for A and W with the capacity to ensure that case (a) holds.\nIn Appendix A.1, we provide a proof of Theorem 1 using a direct Lyapunov approach. One advantage of this approach is that the driving input x is permitted to evolve in time arbitrarily in the analysis. The proof relies on the classical Kalman-Yakubovich-Popov lemma and circle criterion from control theory — to our knowledge, these tools have not been applied in the modern RNN literature, and we hope our proof can illustrate their value to the community.\nIn the special case whereA is symmetric and x(t) constant, we show that we can also inherit criteria for both local and global stability from a class of well-studied Cohen–Grossberg–Hopfield models. Lemma 1. Suppose that A is symmetric and W is nonsingular. There exists a diagonal matrix D ∈ RN×N , and nonsingular matrices L, V ∈ RN×N such that an equilibrium of (4) is (globally exponentially) stable if and only if there is a corresponding (globally exponentially) stable equilibrium for the system\nż = Dz + Lσ(V z + Ux+ b). (5)\nFor a thorough review of analyses of (5), see (Zhang et al., 2014). In this special case, the criteria in Theorem 1 coincide with those obtained for the corresponding model (5). However, in practice, we will not choose A to be symmetric." }, { "heading": "4 SYMMETRIC-SKEW HIDDEN-TO-HIDDEN MATRICES", "text": "In this section we propose a novel scheme for constructing hidden-to-hidden matrices. 
Specifically, based on the successful application of skew-symmetric hidden-to-hidden weights in several recent recurrent architectures, and our stability criteria in Theorem 1, we propose an effective symmetricskew decomposition for hidden matrices. Our decomposition allows for a simple control of the matrix spectrum while retaining its wide expressive range, enabling us to satisfy the spectral constraints derived in the previous section on both A and W . The proposed scheme also accounts for the issue of vanishing gradients by reducing the magnitude of large negative eigenvalues.\nRecently, several methods used skew-symmetric matrices, i.e., S + ST = 0 to parameterize the recurrent weights W ∈ RN×N , see e.g., (Wisdom et al., 2016; Chang et al., 2019). From a stability analysis viewpoint, there are two main advantages for using skew-symmetric weights: these matrices generate the orthogonal group whose elements are isometric maps and thus preserve norms (Lezcano-Casado & Martinez-Rubio, 2019); and the spectrum of skew-symmetric matrices is purely imaginary which simplifies stability analysis (Chang et al., 2019). The main shortcoming of this parametrization is its reduced expressivity, as these matrices have fewer than half of the parameters of a full matrix (Kerg et al., 2019). The latter limiting aspect can be explained from a dynamical systems perspective: skew-symmetric matrices can only describe oscillatory behavior, whereas a matrix whose eigenvalues have nonzero real parts can also encode viable growth and decay information.\nTo address the expressivity issue, we aim for hidden matrices which on the one hand, allow to control the expansion and shrinkage of their associated trajectories, and on the other hand, will be sampled from a superset of the skew-symmetric matrices. Our analysis in Theorem 1 guarantees that Lipschitz recurrent units maintain non-expanding trajectories under mild conditions on A and W . Unfortunately, this proposition does not provide any information with respect to the shrinkage of paths. Here, we opt for a system whose expansion and shrinkage can be easily controlled. Formally, the latter requirement is equivalent to designing hidden weights S with smallRλi(S), i = 1, 2, . . . , N , where R(z) denotes the real part of z. A system of the form (4) whose matrices A and W exhibit\nsmall spectra and satisfy the conditions of Theorem 1, will exhibit dynamics with moderate decay and growth behavior and alleviate the problem of exploding and vanishing gradients. To this end, we propose the following symmetric-skew decomposition for constructing hidden matrices:\nSβ,γ := (1− β) · (M +MT ) + β · (M −MT )− γI, (6) where M is a weight matrix, and β ∈ [0.5, 1], γ > 0 are tuning parameters. In the case (β, γ) = (1, 0), we recover a skew-symmetric matrix, i.e., S1,0 + ST1,0 = 0. The construction Sβ,γ is useful as we can easily bound its spectrum via the parameters β and γ, as we show in the next proposition. Proposition 1. Let Sβ,γ satisfy (6), and let M sym = 12 (M +M\nT ). The real parts <λi(Sβ,γ) of the eigenvalues of Sβ,γ , as well as the eigenvalues of S sym β,γ = Sβ,γ + S T β,γ , lie in the interval\n[(1− β)λmin(M sym)− γ, (1− β)λmax(M sym)− γ].\nA proof is provided in Appendix A.2. We infer that β controls the width of the spectrum, while increasing γ shifts the spectrum to the left along the real axis, thus enforcing eigenvalues with nonpositive real parts. 
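The role of β and γ is easy to see numerically. The short sketch below (random matrix, with size and seed chosen only for illustration) confirms that the real parts of the eigenvalues of Sβ,γ are bounded by the eigenvalues of its symmetric part, that this interval narrows as β approaches one, and that γ shifts it to the left.

```python
import numpy as np

def sym_skew(M, beta, gamma):
    # Eq. (6): (1 - beta)(M + M^T) + beta(M - M^T) - gamma*I
    return (1 - beta) * (M + M.T) + beta * (M - M.T) - gamma * np.eye(len(M))

rng = np.random.default_rng(0)
M = rng.standard_normal((64, 64)) / 8.0

for beta in [0.5, 0.75, 1.0]:
    S = sym_skew(M, beta, 0.001)
    re = np.linalg.eigvals(S).real
    bounds = np.linalg.eigvalsh(0.5 * (S + S.T))   # symmetric part bounds Re(lambda), cf. Lemma 4
    print(f"beta={beta:4.2f}: Re(lambda) in [{re.min():+.4f}, {re.max():+.4f}], "
          f"bounded by [{bounds[0]:+.4f}, {bounds[-1]:+.4f}]")
# For beta = 1 the interval collapses to {-gamma}: a slightly contractive,
# near skew-symmetric matrix; smaller beta admits a wider range of real parts.
```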
Choosing our hidden-to-hidden matrices to be AβA,γA and WβW ,γW of the form (6) for different values of βA, βW and γA, γW , we can ensure small spectra and satisfy the conditions of Theorem 1 as desired. Note, that different tuning parameters β and γ affect the stability behavior of the Lipschitz recurrent unit. This is illustrated in Figure 1, where different values for β and γ are used to construct both Aβ,γ and Wβ,γ and applied to learning simple pendulum dynamics.\nOne cannot guarantee that model parameters will remain in the stability region during training. However, we can show that when β is taken to be close to one, the eigenvalues of Asymβ,γ and W sym β,γ (which dictate the stability of the RNN) change slowly during training. Let ∆δF denote the change in a function F depending on the parameters of the RNN (1) after one step of gradient descent with step size δ with respect to some loss L(y). For a matrix A, we let λk(A) denote the k-th singular value of A. We have the following lemma. Lemma 2. As β → 1−, maxk |∆δλk(Asymβ,γ )|+ maxk |∆δλk(W sym β,γ )| = O(δ(1− β)2).\nTherefore, provided both the initial and optimal parameters lie within the stability region, the model parameters will remain in the stability region for longer periods of time with high probability as β → 1. Further empirical evidence of parameters often remaining in the stability region during training are provided alongside the proof of Lemma 2 in the Appendix (see Figure 5)." }, { "heading": "5 TRAINING CONTINUOUS-TIME LIPSCHITZ RECURRENT UNITS", "text": "ODEs such as Eq. (1) can be approximately solved by employing numerical integrators. In scientific computing, numerical integration is a well studied field that provides well understood techniques (LeVeque, 2007). Recent literature has also introduced new approaches which are designed with neural network frameworks in mind (Chen et al., 2018).\nTo learn the weightsA,W,U and b, we discretize the continuous model using one step of a numerical integrator between sequence entries. In what follows, a subscript t denotes discrete time indices,\n∆t represents the time difference between a pair of consecutive data points. Letting f(h, t) = Ah+ tanh(Wh+ Ux(s) + b) so that ḣ(t) = f(h, t), the exact and approximate solutions for ht+1 given ht are given by\nht+1 = ht + ∫ t+∆t t f(h(s), s)ds := ht + ∫ t+∆t t Ah(s) + tanh(Wh(s) + Ux(s) + b) ds (7)\n≈ ht + ∆t · scheme [f, ht, ∆t] , (8)\nwhere scheme represents one step of a numerical integration scheme whose application yields an approximate solution for 1∆t ∫ t+∆t t f(h(s), s)ds given ht using one or more evaluations of f .\nWe consider both the explicit (forward) Euler scheme,\nht+1 = ht + ∆t ·Aht + ∆t · tanh(zt), (9)\nas well as the midpoint method which is a two-stage explicit Runge-Kutta scheme (RK2),\nht+1 = ht + ∆t ·Ah̃+ ∆t · tanh(Wh̃+ Uxt + b), (10)\nwhere h̃ = ht + ∆t/2 · Aht + ∆t/2 · tanh(zt) is an intermediate hidden state. The RK2 scheme can potentially improve the performance since the scheme is more accurate, however, this scheme also requires twice as many function evaluations as compared to the forward Euler scheme. Given a β and γ that yields a globally exponentially stable continuous model, ∆t can always be chosen so that the model remains in the stability region of forward Euler and RK2 (LeVeque, 2007)." }, { "heading": "6 EMPIRICAL EVALUATION", "text": "In this section, we evaluate the performance of the Lipschitz RNN and compare it to other state-ofthe-art methods. 
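Before turning to the benchmarks, the two one-step updates of Eqs. (9)-(10) can be written compactly as in the sketch below, where `f` denotes the vector field ḣ = Ah + tanh(Wh + Ux + b), `decoder` the read-out y = Dh, and the step size `dt` and function names are assumptions rather than the authors' interface.

```python
def euler_step(f, h, x, dt):
    # forward Euler, Eq. (9)
    return h + dt * f(h, x)

def rk2_step(f, h, x, dt):
    # explicit midpoint (RK2), Eq. (10): evaluate f again at an intermediate state
    h_mid = h + 0.5 * dt * f(h, x)
    return h + dt * f(h_mid, x)

def unroll(f, decoder, xs, h0, dt, step=rk2_step):
    h = h0
    for x in xs:          # xs: input sequence x_1, ..., x_T
        h = step(f, h, x, dt)
    return decoder(h)     # classify or predict from the final hidden state
```

As noted above, the RK2 update calls f twice per step, so it costs roughly twice as many function evaluations as forward Euler while giving a more accurate approximation of the continuous dynamics.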
The model is applied to ordered and permuted pixel-by-pixel MNIST classification, as well as to audio data using the TIMIT dataset. We show the sensitivity with respect to to random initialization in Appendix B. Appendix B also contains additional results for: pixel-by-pixel CIFAR10 and a noise-padded version of CIFAR-10; as well as for character level and word level prediction using the Penn Tree Bank (PTB) dataset. All of these tasks require that the recurrent unit learns long-term dependencies: that is, the hidden-to-hidden matrices need to have sufficient memory to remember information from far in the past." }, { "heading": "6.1 ORDERED AND PERMUTED PIXEL-BY-PIXEL MNIST", "text": "The pixel-by-pixel MNIST task tests long range dependency by sequentially presenting 784 pixels to the recurrent unit, i.e., the RNN processes one pixel at a time (Le et al., 2015). At the end of the\nsequence, the learned hidden state is used to predict the class membership probability of the input image. This task requires that the RNN has a sufficient long-term memory in order to discriminate between different classes. A more challenging variation to this task is to operate on a fixed random permutation of the input sequence.\nTable 1 provides a summary of our results. The Lipschitz RNN, with hidden dimension of N = 128 and trained with the forward Euler and RK2 scheme, achieves 99.4% and 99.3% accuracy on the ordered pixel-by-pixel MNIST task. For the permuted task, the model trained with forward Euler achieves 96.3% accuracy, whereas the model trained with RK2 achieves 96.2% accuracy. Hence, our Lipschitz recurrent unit outperforms state-of-the-art RNNs on both tasks and is competitive even when a hidden dimension of N = 64 is used, however, it can be seen that a larger unit with more capacity is advantageous for the permuted task. Our results show that we significantly outperform the Antisymmetric RNN (Chang et al., 2019) on the ordered tasks, while using fewer weights. That shows that the antisymmetric weight paramterization is limiting the expressivity of the recurrent unit. The exponential RNN is the next most competitive model, yet this model requires a larger hidden-to-hidden unit to perform well on the two considered tasks." }, { "heading": "6.2 TIMIT", "text": "Next, we consider the TIMIT dataset (Garofolo, 1993) to study the capabilities of the Lipschitz RNN for speech prediction using audio data. For our experiments, we used the publicly available implementation of this task by Lezcano-Casado & Martinez-Rubio (2019). This implementation applies the preprocessing steps suggested by Wisdom et al. (2016): (i) downsample each audio sequence to 8kHz; (ii) process the downsampled sequences with a short-time Fourier transform using a Hann window of 256 samples and a window hop of 128 samples; and (iii) normalize the logmagnitude of the Fourier amplitudes. We obtain a set of frames that each have 129 complex-valued Fourier amplitudes and the task is to predict the log-magnitude of future frames. To compare our results with those of other models, we used the common train / validation / test split: 3690 utterances from 462 speakers for training, 192 utterances for validation, and 400 utterances for testing.\nTable 2 lists the results for the Lipschitz recurrent unit as well as for several benchmark models. It can be seen that the Lipschitz RNN outperforms other state-of-the-art models for a fixed number of parameters (≈ 200K). 
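For reference, the TIMIT preprocessing steps (i)-(iii) described above can be sketched as follows. This is an assumed scipy-based implementation (the published pipeline follows Wisdom et al. (2016)); the 16 kHz source rate, the small epsilon, and the per-utterance normalization are assumptions.

```python
import numpy as np
from scipy.signal import resample_poly, stft

def preprocess_utterance(wave, source_rate=16000):
    audio = resample_poly(wave, up=8000, down=source_rate)        # (i) downsample to 8 kHz
    _, _, Z = stft(audio, fs=8000, window='hann',
                   nperseg=256, noverlap=256 - 128)               # (ii) 256-sample Hann window, hop 128
    log_mag = np.log(np.abs(Z) + 1e-8)                            # 129 frequency bins per frame
    return (log_mag - log_mag.mean()) / (log_mag.std() + 1e-8)    # (iii) normalize the log-magnitudes
```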
In particular, LSTMs do not perform well on this task, however, the recently proposed momentum based LSTMs (Nguyen et al., 2020) have improvemed performance. Interestingly, the RK2 scheme leads to a better performance since this scheme provides more accurate approximations for the intermediate states." }, { "heading": "7 ROBUSTNESS WITH RESPECT TO PERTURBATIONS", "text": "An important consideration beyond accuracy is robustness with respect to input and parameter perturbations. We consider a Hessian-based analysis and noise-response analysis of different continuous-time recurrent units and train the models on MNIST. Here, we reshape each MNIST thumbnail into sequences of length 98 so that each input has dimension x ∈ R8. We consider this\nsimpler problem so that all models obtain roughly the same training loss. Here we use stochastic gradient decent (SGD) with momentum to train the models.\nEigenanalysis of the Hessian provides a tool for studying various aspects of neural networks (Hochreiter & Schmidhuber, 1997a; Sagun et al., 2017; Ghorbani et al., 2019). Here, we study the Hessian H spectrum with respect to the model parameters of the recurrent unit using PyHessian (Yao et al., 2019). The Hessian provides us with insights about the curvature of the loss function L. This is because the Hessian is defined as the derivatives of the gradients, and thus the Hessian eigenvalues describe the change in the gradient of L as we take an infinitesimal step into a given direction. The eigenvectors span the (local) surface of the loss function at a given point, and the corresponding eigenvalue determines the curvature in the direction of the eigenvectors. This means that larger eigenvalues indicate a larger curvature, i.e., greater sensitivity, and the sign of the eigenvalues determines whether the curvature will be positive or negative.\nTo demonstrate the advantage of the additional linear term and our weight parameterization, we compare the Lipschitz RNN to two other continuous-time recurrent units. First, we consider a simple neural ODE RNN (Rubanova et al., 2019) that takes the form\nḣ = tanh(Wh+ Ux+ b), y = Dh, (11)\nwhere W is a simple hidden-to-hidden matrix. As a second model we consider the antisymmetric RNN (Chang et al., 2019), that takes the same form as (11), but uses a skew-symmetric scheme to parameterize the hidden-to-hidden matrix as W := (M −MT )−γI , where M is a trainable weight matrix and γ is a tunable parameter.\nTable 3 reports the largest eigenvalue λmax(H) and the trace of the Hessian tr(H).The largest eigenvalue being smaller indicates that our Lipschitz RNN found a flatter minimum, as compared to the simple neural ODE and Antisymmetric RNN. It is known that such flat minima can be perturbed without significantly changing the loss value (Hochreiter & Schmidhuber, 1997a). Table 3 also reports the condition number κ(H) := λmax(H)λmin(H) of the Hessian. The condition number κ(H) provides a measure for the spread of the eigenvalues of the Hessian. It is known that first-order methods can slow down in situations where κ is large (Bottou & Bousquet, 2008). The condition number and trace of our Lipshitz RNN being smaller also indicates improved robustness properties.\nNext, we study the sensitivity of the response yT at time T in terms of the test accuracy with respect to a sequence of perturbed inputs {x̃1, . . . , x̃T } ∈ R8. We consider three different perturbations. 
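As a concrete illustration of the simpler corruptions used in the noise-response analysis (Figure 2), the sketch below perturbs an input sequence xs (values assumed to be scaled to [0, 1]) with white noise and with salt-and-pepper noise. The noise parameters are assumptions; the adversarial perturbations discussed next rely on the standard PGD and DeepFool procedures instead.

```python
import numpy as np

def white_noise(xs, sigma, rng):
    # add i.i.d. Gaussian noise to every entry of the sequence
    return np.clip(xs + sigma * rng.standard_normal(xs.shape), 0.0, 1.0)

def salt_and_pepper(xs, p, rng):
    # set a random fraction p of the entries to 0 (pepper) or 1 (salt)
    out = xs.copy()
    mask = rng.random(xs.shape) < p
    out[mask] = rng.integers(0, 2, size=mask.sum()).astype(xs.dtype)
    return out
```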
The results for the artificially constructed perturbations are presented in Table 3, showing that the Lipschitz RNN is more resilient to adversarial perturbation. Here, we have considered the projected gradient decent (PGD) (Goodfellow et al., 2014) method with l∞, and the DeepFool\nmethod (Moosavi-Dezfooli et al., 2016) with l2 and l∞ norm ball perturbations. We construct the adversarial examples with full access to the models, using 7 iterations. The step size for PGD is set to 0.01.\nFurther, Figure 2 shows the results for white noise and salt and pepper noise. It can be seen that the Lipschitz unit is less sensitive to input perturbations, as compared to the simple neural ODE RNN, and the antisymmetric RNN. In addition, we also show the results for an unitary RNN here." }, { "heading": "7.1 ABLATION STUDY", "text": "The performance of the Lipschitz recurrent unit is due to two main innovations: (i) the additional linear term; and (ii) the scheme for constructing the hidden-to-hidden matrices A and W in Eq. (6). Thus, we investigate the effect of both innovations, while keeping all other conditions fixed. More concretely, we consider the following ablation recurrent unit\nht+1 = ht + α · ·Aht + · tanh(zt), with zt = Wht + Uxt + b, (12)\nwhere α controls the effect of the linear hidden unit. Both A and W depend on the parameters β, γ.\nFigure 3a studies the effect of the linear hidden unit, with β = 0.65 for the ordered task and β = 0.8 for the permuted task. In both cases we use γ = 0.001. It can be seen that the test accuracies of both the ordered and permuted pixel-by-pixel MNIST tasks clearly depend on the linear hidden unit. For α = 0, our models reduces to simple neural ODE recurrent units (Eq. (11)). The recurrent unit degenerates for α > 1.6, since the external input is superimposed by the hidden state. Figure 3b studies the effect of the hidden-to-hidden matrices with respect to β. It can be seen that β = {0.65, 0.70} achieves peak performance for the ordered task, and β = {0.8, 0.85} does so for the permuted task. Note that β = 1.0 recovers an skew-symmetric hidden-to-hidden matrix." }, { "heading": "8 CONCLUSION", "text": "Viewing RNNs as continuous-time dynamical systems with input, we have proposed a new Lipschitz recurrent unit that excels on a range of benchmark tasks. The special structure of the recurrent unit allows us to obtain guarantees of global exponential stability using control theoretical arguments. In turn, the insights from this analysis motivated the symmetric-skew decomposition scheme for constructing hidden-to-hidden matrices, which mitigates the vanishing and exploding gradients problem. Due to the nice stability properties of the Lipschitz recurrent unit, we also obtain a model that is more robust with respect to input and parameter perturbations as compared to other continuoustime units. This behavior is also reflected by the Hessian analysis of the model. We expect that the improved robustness will make Lipschitz RNNs more reliable for sensitive applications. The theoretical results for our symmetric-skew decomposition of parameterizing hidden-to-hidden matrices also directly extend to the convolutional setting. Future work will explore this extension and study the potential advantages of these more parsimonious hidden-to-hidden matrices in combination with our parameterization in practice. Research code is shared via github.com/erichson/LipschitzRNN." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Ed H. 
Chi for fruitful discussions about physics-informed machine learning and the Antisymmetric RNN. We are grateful to the generous support from Amazon AWS and Google Cloud. NBE and MWM would like to acknowledge IARPA (contract W911NF20C0035), NSF, ONR and CLTC for providing partial support of this work. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred." }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOFS OF THEOREM 1 AND LEMMA 1", "text": "There are numerous ways that one can analyze the global stability of (4) through the related model (5), many of which are discussed in Zhang et al. (2014). Instead, here we shall conduct a direct approach and avoid appealing to diagonalization in order to obtain cleaner conditions, and a more straightforward proof that readily applies in the time-inhomogeneous setting.\nOur method of choice relies on Lyapunov arguments summarized in the following theorem, which can be found as (Khalil, 2002, Theorem 4.10). For more details on related Lyapunov theory, see also Hahn (1967); Sastry (2013).\nTheorem 2. An equilibrium h∗ for ḣ = f(t, h) is globally exponentially stable if there exists a continuously differentiable function V : [0,∞)×RN → [0,∞) such that for all h ∈ RN and t ≥ 0,\nk1‖h− h∗‖α ≤ V (t, h) ≤ k2‖h− h∗‖α, and ∂V ∂t + ∂V ∂h ≤ −k3‖h− h∗‖α,\nfor some constants k1, k2, k3, α > 0. and V̇ (h) < 0 for h 6= h∗.\nTo simplify matters, we shall choose a Lyapunov function V : RN → [0,∞) that is independent of time. The most common type of Lyapunov function satisfying the conditions of Theorem 2 is of the form V (h) = (h− h∗)TP (h− h∗), where P is a positive definite matrix. One need only show that V̇ (h) ≤ −(h − h∗)TQ(h − h∗) for some other positive definite matrix Q to guarantee global exponential stability.\nThe construction of the Lyapunov function V that satisfies the conditions of Theorem 2 is accomplished using the Kalman-Yakubovich-Popov lemma, which is a statement regarding strictly positive real transfer functions. We use the following definition, equivalent to other standard definitions by (Khalil, 2002, Lemma 6.1). Definition 2. A function G : C→ CN×N is strictly positive real if it satisfies the following:\n(i) The poles of G(s) have negative real parts.\n(ii) G(iω) +G(−iω)T is positive definite for all ω ∈ R, where i = √ −1.\n(iii) Either G(∞) + G(∞)T is positive definite or it is positive semidefinite and limω→∞ ω\n2MT [G(iω) +G(−iω)T ]M is positive definite for any N × (N − q) full-rank matrix M such that MT [G(∞) +G(∞)T ]M = 0, where q = rank[G(∞) +G(∞)T ].\nThe following is presented in (Khalil, 2002, Lemma 6.3). Lemma 3 (Kalman-Yakubovich-Popov). Let A,W : RN → RN be full-rank square matrices. There exists a symmetric positive-definite matrix P and matrices L,U and a constant > 0 such that\nPA+ATP = −LTL− P P = LTU −WT\nUTU = 0,\nif and only if the transfer function G(s) = W (sI − A)−1 is strictly positive real. In this case, we may take = 2µ, where µ > 0 is chosen so that G(s− µ) remains strictly positive real.\nA shorter proof for case (a) is available to us through the (multivariable) circle criterion — the following theorem is a corollary of (Khalil, 2002, Theorem 7.1) suitable for our purposes. Theorem 3 (Circle Criterion). 
The system of differential equations\nḣ = Ah+ ψ(t,Wh)\nis globally exponentially stable towards an equilibrium at the origin if ‖ψ(t, y)‖ ≤ M‖y‖ for some M > 0 and Z(s) = [I + MG(s)][I −MG(s)]−1 is strictly positive real, where G(s) = W (sI −A)−1.\nBoth the Kalman-Yakubovich-Popov lemma and the circle criterion are classical results in control theory, and are typically discussed in the setting of feedback systems (Khalil, 2002, Chapter 6, 7). Our presentation here is less general than the complete formulation, but makes clearer the connection to RNNs. With these tools, we state our proof of Theorem 1.\nProof of Theorem 1. To begin, we shall center the differential equation about the equilibrium. By assumption, there exists h∗ such that Ah∗ = −σ(Wh∗ + Ux(t) + b). Letting h̄ = h− h∗, we find that\n˙̄h = Ah+ σ(Wh+ Ux(t) + b)\n= Ah̄+Ah∗ + σ(Wh̄+Wh∗ + Ux(t) + b)\n= Ah̄+ σ(Wh̄+Wh∗ + Ux(t) + b)− σ(Wh∗ + Ux(t) + b). (13) It will suffice to show that (13) is globally exponentially stable at the origin.\nLet us begin with case (a). The proof follows arguments analogous to (Khalil, 2002, Example 7.1). Let G(s) = W (A− sI)−1 denote the transfer function for the system (13). Letting\nψ(t, x) = σ(x+Wh∗ + Ux(t) + b)− σ(Wh∗ + Ux(t) + b), since σ is M -Lipschitz, we know that ‖ψ(t, x)‖ ≤ M‖x‖ for any x ∈ RN . Therefore, let Z(s) = [I + MG(s)][I −MG(s)]−1 denote the transfer function in the circle criterion. Our objective is to show that Z(s) is strictly positive real — by Theorem 3, this will guarantee the desired global exponential stability of (4). First, we need to show that the poles of Z(s) have negative real parts. This can only occur when G(s) itself has poles or I −MG(s) is singular. The former case occurs precisely where A − sI is singular, which occurs when s is an eigenvalue of A. Since A + AT is assumed to be negative definite, A must have eigenvalues with negative real part by Lemma 4, and so the poles of G(s) also have negative real parts. The latter case is more difficult to treat. First, since σmax(AB) ≤ σmax(A)σmax(B) and σmax(B−1) = σmin(B)−1,\nσmax(G(s)) ≤ σmax(W )\nσmin(A− sI) . (14)\nTherefore, we observe that\nσmin(I −MG(s)) ≥ 1− σmax(MG(s)) ≥ 1−Mσmax(G(s))\n≥ 1− Mσmax(W ) σmin(A− sI) .\nFrom the Fan-Hoffman inequality (Bhatia, 2013, Proposition III.5.1), we have that σmin(A− sI) = σmin(sI −A) ≥ λmin ( <(s)I − A+A T\n2\n) = <(s) + λmin ( −A+A T\n2\n) ,\nand since A+AT is negative definite, for any s with <(s) ≥ 0, σmin(A− sI) ≥ <(s) + σmin ( A+AT\n2\n) ≥ σmin(Asym). (15)\nSince σmin(Asym) > Mσmax(W ), it follows that σmin(I − MG(s)) > 0 whenever s has nonnegative real part, and so the poles of Z(s) must have negative real parts.\nNext, we need to show that Z(iω) + Z(−iω)T is positive definite for all ω ∈ R. Observe that\nZ(iω) + Z(−iω)T = [I +MG(iω)][I −MG(iω)]−1 + [I −MG(−iω)T ]−1[I +MG(−iω)T ] = 2[I −MG(−iω)T ]−1[I −M2G(−iω)TG(iω)][I −MG(iω)]−1.\nFrom Sylvester’s law of inertia, we may infer that Z(iω) +Z(−iω)T is positive definite if and only if I + Yω is positive definite, where Yω = M2G(−iω)TG(iω). If we can show that the eigenvalues of Yω lie strictly within the unit circle, that is, σmax(Yω) < 1 for all ω ∈ R, then I + Yω will necessarily be positive definite. From (14) and (15), we may verify that\nsup ω∈R σmax(G(iω)) ≤ sup ω∈R\nσmax(W ) σmin(A− iωI) ≤ σmax(W ) σmin(Asym) .\nTherefore, σmax(Yω) ≤M2σmax(G(−iω)T )σmax(G(iω)) ≤ ( Mσmax(W )\nσmin(Asym)\n)2 < 1,\nby assumption. 
Finally, since Z(∞) + Z(∞)T = 2I is positive definite, Z(s) is strictly positive real and Theorem 3 applies.\nNow, consider case (b). The proof proceeds in two steps. First, we verify that the transfer function G(s) = W (A − sI)−1 satisfies the conditions of the Kalman-Yakubovich-Popov lemma. Then, using the matrices P , L, U , and the constant inferred from the lemma, a Lyapunov function is constructed which satisfies the conditions of Theorem 2, guaranteeing global exponential stability. Once again, condition (i) of Lemma 3 is straightforward to verify: G(s) exhibits poles when s is an eigenvalue of A, and so the poles of G(s) also have negative real parts. Furthermore, condition (iii) is easily satisfied with M = I since G(∞) + G(∞)T = 0. To show that condition (ii) holds, observe that for any ω ∈ R, letting A−T = (A−1)T for brevity,\nG(iω) +G(−iω)T = W (A− iωI)−1 + (A+ iωI)−TWT\n= (A+ iωI)−T [(A+ iωI)TW +WT (A− iωI)](A− iωI)−1.\nSince the inner matrix factor is Hermitian, Sylvester’s law of inertia implies that G(iω) +G(−iω)T is positive definite if and only if\nBω := (A+ iωI) TW +WT (A− iωI).\nis positive definite. SinceBω is a Hermitian matrix, it has real eigenvalues, with minimal eigenvalue given by the infimum of the Rayleigh quotient:\nλmin(Bω) = inf ‖v‖=1\nvTBωv\n= inf ‖v‖=1\nvT (ATW +WTA)v + iωvT (W −WT )v\n= inf ‖v‖=1\nvT (ATW +WTA)v\n= λmin(A TW +WTA).\nBy assumption, ATW + WTA has strictly positive eigenvalues, and hence Bω and G(iω) + G(−iω)T are positive definite. Therefore, Lemma 3 applies, and we obtain matrices P,L, U and a constant > 0 with the corresponding properties.\nNow we may construct our Lyapunov function V . Let v = Wh̄ and\nu(t) = σ(v(t) +Wh∗ + Ux(t) + b)− σ(Wh∗ + Ux(t) + b),\nso that ˙̄h = Ah̄ + u. Since σ is monotone non-decreasing, σ(x) − σ(y) ≥ 0 for any x ≥ y. This implies that for each i = 1, . . . , N , vi and ui have the same sign. In particular, vTu ≥ 0. Now, let V (h) = hTPh be our Lyapunov function, noting that V is independent of t. Taking the derivative of the Lyapunov function over (13) and using the properties of P,L, U, ,\nV̇ (h̄) = h̄TP ˙̄h+ ˙̄hTPh̄\n= h̄T (PA+ATP )h̄+ 2h̄TPu\n= h̄T (−LTL− P )h̄+ 2h̄T (LTU −WT )u = −(Lh̄)T (Lh̄) + (Lh̄)TUu+ (Uu)T (Lh̄)− uTUTUu− 2vTu = −(Lh̄+ Uu)T (Lh̄+ Uu)− h̄TPh̄− 2vTu.\nSince vTu ≥ 0 and (Lh̄ + Uu)T (Lh̄ + Uu) ≥ 0, it follows that V̇ (h̄) ≤ − λmin(P )‖h‖2, and hence global exponential stability follows from Theorem 2 and positive-definiteness of P .\nTo finish off discussion regarding the results from Sec. 3, we provide a quick proof of Lemma 1 using a simple diagonalization argument.\nProof of Lemma 1. Since A is symmetric and real-valued, by (Horn & Johnson, 2012, Theorem 4.1.5), there exists an orthogonal matrix P and a real diagonal matrix D such that A = PDPT . Letting z = PTh where h satisfies (4), since h = Pz, we see that\nż = PTPDPTh+ PTσ(Wh+ Ux+ b)\n= Dz + PTσ(WPz + Ux+ b).\nTherefore, z satisfies (5) with L = PT and V = WP , both of which are nonsingular by orthogonality of P . By the same argument, for any equilibrium h∗, taking z∗ = PTh∗,\nDz∗ + PTσ(WPz∗ + Ux+ b) = PT (PDPTh∗ + σ(Wh∗ + Ux+ b))\n= PT (Ah∗ + σ(Wh∗ + Ux+ b)) = 0,\nimplying that z∗ is an equilibrium of (5). Furthermore, since\n‖z − z∗‖2 = (PTh− PTh∗)T (PTh− PTh∗) = (h− h∗)TPPT (h− h∗) = ‖h− h∗‖2,\nfrom orthogonality of P . 
Because every form of Lyapunov stability, both local and global, including global exponential stability, depend only on the norm ‖h − h∗‖ (Khalil, 2002, Definitions 4.4 and 4.5), h∗ is stable under any of these forms if and only if z∗ is also stable.\nWe remark that the proof of Lemma 1 can extend to matrices A which have real eigenvalues and are diagonalizable. These attributes are implied for symmetric matrices. However, they can be difficult to ensure in practice for nonsymmetric matrices without imposing difficult structural constraints." }, { "heading": "A.2 PROOF OF PROPOSITION 1", "text": "The proof of Proposition 1 relies on the following lemma, which we also have made use of several times throughout this work. Lemma 4. For any matrix A ∈ RN×N , the real parts of the eigenvalues <λi(A) are contained in the interval [λmin(Asym), λmax(Asym)], where Asym = 12 (A+A T ).\nProof. Recall by the min-max theorem, for 〈u, v〉 = u∗v, where u∗ is the conjugate transpose of u, the upper and lower eigenvalues of A+AT satisfy\nλmin(A+A T ) = inf v∈CN , ‖v‖=1 〈v, (A+AT )v〉 = inf v∈CN , ‖v‖=1 〈v,Av〉+ 〈Av, v〉,\nλmax(A+A T ) = sup v∈CN , ‖v‖=1 〈v, (A+AT )v〉 = sup v∈CN , ‖v‖=1 〈v,Av〉+ 〈Av, v〉.\nLet λi(A) = u + iω be an eigenvalue of A with corresponding eigenvector v satisfying ‖v‖ = 1. Since Av = (u+ iω)v,\n〈v,Av〉+ 〈Av, v〉 = 〈v,Av〉+ 〈v,Av〉 = 2<〈v,Av〉 = 2u‖v‖2 = 2u." }, { "heading": "Hence, λmin(A+AT ) ≤ u ≤ λmax(A+AT ).", "text": "Proof of Proposition 1. By construction, Ssymβ,γ = Sβ,γ + S T β,γ = (1− β)M sym − γI, and so from Lemma 4, both the real parts <λi(Sβ,γ) of the eigenvalues of Sβ,γ as well as the eigenvalues of Ssymβ,γ lie in the interval\n[λmin(S sym β,γ ), λmax(S sym β,γ )] = [λmin((1− β)M sym − γI), λmax((1− β)M sym − γI)]. If β < 1, for any eigenvalue λ of Ssymβ,γ with corresponding eigenvector v,\n(1− β)M symv − γv = λv, and so M symv = λ+ γ 1− β v\nimplying that λ+γ1−β is an eigenvalue of M sym, and therefore contained in [λmin(M sym), λmax(M\nsym)]. In particular, we find that [λmin(S sym β,γ ), λmax(S sym β,γ )] ⊆ [(1− β)λmin(M\nsym)− γ, (1− β)λmax(M sym)], (16) as required. Finally, if β = 1, then (16) still holds, since both intervals collapse to the single point {−γ}.\nFigure 4 illustrates the effect of β onto the eigenvalues of Aβ,γ with the largest and smallest real parts. It can be seen, both empirically and theoretically, that the real part of the eigenvalues converges towards zero as β tends towards one, i.e., we yield a skew-symmetric matrix with purely imaginary eigenvalues in the limit. Thus, for a sufficiently large parameter β we yield a system that approximately preserves an “energy” for a limited time-horizon\nRλi(Aβ,γ) ≈ 0, for i = 1, 2, . . . , N. (17)" }, { "heading": "A.3 PROOF OF LEMMA 2", "text": "First, it follows from Gronwall’s inequality that the norm of the final hidden state ‖h(T )‖ is bounded uniformly in β. 
From Weyl’s inequalities and the definition of Aβ,γ ,\nmax k |∆δλk(Asymβ,γ )| ≤ ‖∆δA sym β,γ ‖ = (1− β)‖∆δM sym A ‖.\nBy the chain rule, for each element M ijA of the matrix MA,\n∂L\n∂M ijA =\n∂L\n∂y(T )\n∂y(T )\n∂h(T )\n∂h(T ) ∂M ijA = ∂L ∂y(T ) D ∂h(T ) ∂M ijA .\nNow, for any collection of parameters θi,\nd\ndt ∑ i ∂h ∂θi = A ∑ i ∂h ∂θi + ∑ i ∂A ∂θi h+ sech2 (Wh+ Ux+ b) ( W ∑ i ∂h ∂θi + ∑ i ∂W ∂θi h ) ,\nand from Gronwall’s inequality,∥∥∥∥∥∑ i ∂h(T ) ∂θi ∥∥∥∥∥ ≤ (∥∥∥∥∥∑ i ∂Aβ,γ ∂θi ∥∥∥∥∥+ ∥∥∥∥∥∑ i ∂Wβ,γ ∂θi ∥∥∥∥∥ ) ‖h‖ e(‖Aβ,γ‖+‖Wβ,γ‖)T .\nSince ∆δM sym A = δ ∂L ∂MA\n+ δ (\n∂L ∂MA\n)T ,\n‖∆δM symA ‖ ≤ ‖∆δM sym A ‖F\n≤ δ √√√√∑ i,j ( ∂L ∂M ijA + ∂L ∂M ijA )2\n≤ δ ∥∥∥∥∂L∂y ∥∥∥∥ ‖D‖ ‖h‖ e(‖Aβ,γ‖+‖Wβ,γ‖)T √√√√∑\ni,j ∥∥∥∥∥∂Aβ,γ∂M ijA + ∂Aβ,γ∂M jiA ∥∥∥∥∥ 2 .\nSince ∂(MAh) ∂MijA\n= ∂(MTAh)\n∂MjiA , it follows that\n∂Aβ,γ ∂M ijA + ∂Aβ,γ ∂M jiA = 2(1− β)\n( ∂(MAh)\n∂M ijA + ∂(MTAh) ∂M jiA\n) ,\nand so ‖∆δM symA ‖ = O(δ(1−β)), and therefore maxk |∆δσk(A sym β,γ )| = O(δ(1−β)2). Similarly, for the matrix MW ,\nmax k ∣∣∣∆δλk(W symβ,γ )∣∣∣ ≤ (1− β) ‖∆δM symW ‖ ≤ δ(1− β) ∥∥∥∥∂L∂y ∥∥∥∥ ‖D‖ ‖h‖ e(‖Aβ,γ‖+‖Wβ,γ‖)T √√√√∑ i,j ∥∥∥∥∥∂Wβ,γ∂M ijW + ∂Wβ,γ∂M jiW ∥∥∥∥∥ 2\n= 2δ(1− β)2 ∥∥∥∥∂L∂y ∥∥∥∥ ‖D‖ ‖h‖ e(‖Aβ,γ‖+‖Wβ,γ‖)T √√√√∑\ni,j\n( ∂(MWh)\n∂M ijW + ∂(MTWh) ∂M jiW\n)2 ,\nand hence maxk |∆δλk(W symβ,γ )| = O(δ(1− β)2).\nIn Figure 5, we plot the most positive real part of the eigenvalues of Aβ,γ and Wβ,γ during training for the ordered MNIST task. As β increases, the eigenvalues change less during training, remaining in the stability region provided by case (b) of Theorem 1 for more of the training time." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "B.1 SENSITIVITY TO RANDOM INITIALIZATION FOR MNIST AND TIMIT", "text": "The hidden matrices are initialized by sampling weights from the normal distributionN (0, σ), where σ is the variance, which can be treated as a tuning parameter. In our experiments we typically chose a small σ; see the Table 8 for details. To show that the Lipschitz RNN is insensitive to random initialization, we have trained each model with 10 different seeds. Table 4 shows the maximum, average and minimum values obtained for each task. Note that higher values indicate better performance on the ordered and permuted MNIST tasks, while lower values indicate better performance on the TIMIT task." }, { "heading": "B.2 ORDERED PIXEL-BY-PIXEL AND NOISE-PADDED CIFAR-10", "text": "The pixel-by-pixel CIFAR-10 benchmark problem that has recently been proposed by (Chang et al., 2019). This task is similar to the pixel-by-pixel MNIST task, yet more challenging due to the increased sequence length and the more difficult classification problem. Similar to MNIST, we flatten the CIFAR-10 images to construct a sequence of length 1024 in scanline order, where each element of the sequence consists of three pixels (one from each channel).\nA variation of this problem is the noise-padded CIFAR-10 problem (Chang et al., 2019), where we consider each row of an image as input at time step t. The rows from each channel are stacked so that we obtain an input of dimension x ∈ R96. Then, after the 32 time step which process the 32\nrow, we start to feed the recurrent unit with independent standard Gaussian noise for 968 time steps. At the final point in T = 1000, we use the learned hidden state for classification. This problem is challenging because only the first 32 time steps contain signals. 
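A minimal sketch of how one such noise-padded sequence can be built from a single CIFAR-10 image follows; the channels-last shape (32, 32, 3) and the exact channel-stacking order are assumptions.

```python
import numpy as np

def noise_padded_sequence(img, rng, T=1000):
    # stack the three channel rows so that each time step carries one 96-dim image row
    rows = np.transpose(img, (0, 2, 1)).reshape(32, 96)   # 32 informative steps
    noise = rng.standard_normal((T - 32, 96))             # 968 steps of standard Gaussian noise
    return np.concatenate([rows, noise], axis=0)          # shape (1000, 96)
```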
Thus, the recurrent unit needs to recall information from the beginning of the process.\nTable 5 provides a summary of our results. Our Lipschitz recurrent unit outperforms both the incremental RNN (Kag et al., 2020) and the antisymmetric RNN (Chang et al., 2019) by a significant margin. This impressively demonstrates that the Lipschitz unit enables the stable propagation of signals over long time horizons." }, { "heading": "B.3 PENN TREE BANK (PTB)", "text": "" }, { "heading": "B.3.1 CHARACTER LEVEL PREDICTION", "text": "Next, we consider a character level language modeling task using the Penn Treebank Corpus (PTB) (Marcus et al., 1993). Specifically, this task studies how well a model can predict the next character in a sequence of text. The dataset is composed of a train / validation / test set, where 5017K characters are used for training, 393K characters are used for validation and 442K characters are used for testing. For our experiments, we used the publicly available implementation of this task by Kerg et al. (2019), which computes the performance in terms of mean bits per character (BPC).\nTable 6 shows the results for back-propagation through time (BPTT) over 150 and 300 time steps, respectively. The Lipschitz RNN performs slightly better then the exponential RNN and the nonnormal RNN on this task. (Kerg et al., 2019) notes that orthogonal hidden-to-hidden matrices are not particular well-suited for this task. Thus, it is not surprising that the Lipschitz unit has a small advantage here.\nFor comparison, we have also tested the Antisymmetric RNN (Chang et al., 2019) on this task. The performance of this unit is considerably weaker as compared to our Lipschitz unit. This suggests that the Lipschitz RNN is more expressive and improves the propagation of meaningful signals over longer time scales." }, { "heading": "B.3.2 WORD-LEVEL PREDICTION", "text": "In addition to character-level prediction, we also consider word-level prediction using the PTB corpus. For comparison with other state-of-the-art units, we consider the setup by Kusupati et al. (2018), who use a sequence length of 300. Table 7 shows results for back-propagation through time (BPTT) over 300 time steps. The Lipschitz RNN performs slightly better than the other RNNs on this task and the baseline LSTM for the test perplexity metric reported by Kusupati et al. (2018)." }, { "heading": "C TUNING PARAMETERS", "text": "For tuning we utilized a standard training procedure using a non-exhaustive random search within the following plausible ranges for the our weight parameterization β = 0.65, 0.7, 0.75, 0.8, γ = [0.001, 1.0]. For Adam we explored learning rates between 0.001 and 0.005, and for SGD we considered 0.1. For the step size we explored values in the range 0.001 to 1.0. We did not perform an automated grid search and thus expect that the models can be further fine-tuned.\nThe tuning parameters for the different tasks that we have considered are summarized in Table 8.\nFor pixel-by-pixel MNIST and CIFAR-10, we use Adam for minimizing the objective. We train all our models for 100 epochs, with scheduled learning rate decays at epochs {90}. We do not use gradient clipping during training. Figure 6 shows the test accuracy curves for our Lipschitz RNN for the ordered and permuted MNIST classification tasks.\nFor TIMIT we use Adam with default parameters for minimizing the objective. We also tried Adam using betas (0.0, 0.9) as well as RMSprop with α = 0.9, however, Adam with default values worked best in our experiments. 
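For reference, the optimizer settings mentioned in this appendix map onto the standard PyTorch API as sketched below; the exact learning rates and the decay factor are illustrative values within the reported ranges, and `model` is a stand-in for a Lipschitz RNN.

```python
import torch

model = torch.nn.Linear(10, 10)   # stand-in for the Lipschitz RNN
params = model.parameters()

adam_default = torch.optim.Adam(params, lr=1e-3)                            # MNIST/CIFAR, TIMIT, PTB character level
adam_alt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.0, 0.9))  # also tried for TIMIT
rmsprop = torch.optim.RMSprop(model.parameters(), lr=1e-3, alpha=0.9)       # PTB word level
scheduler = torch.optim.lr_scheduler.MultiStepLR(adam_default,
                                                 milestones=[90], gamma=0.1)  # MNIST/CIFAR decay at epoch 90 (factor assumed)
```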
We train the model for 1200 epochs without learning-rate decay. Similar to Kerg et al. (2019) we train our model with gradient clipping, however, we observed that the performance of our model is relatively insensitive to the clipping value.\nFor the character level prediction task, we use Adam with default parameters for minimizing the objective, while we use RMSprop with α = 0.9 for the word level prediction task. We train the model for 200 epochs for the character-level task, and for 500 epochs for the word-level task." } ]
2021
null
SP:2ad12575818f72f453eb0c04c953a48be56e80e3
[ "In continual learning settings, one of the important technique for avoiding catastrophe forgetting is to replay data points from the past. For memory efficiency purposes, representative samples can be generated from a generative model, such as GANs, rather than replaying the original samples which can be large in number. It is argued that GANs generate new samples which may not belong exactly to one of the classes, so a new generative model is proposed. Experimental results are appealing." ]
The two main impediments to continual learning are catastrophic forgetting and memory limitations on the storage of data. To cope with these challenges, we propose a novel, cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images. During training on a new task, reconstructed images from encoded episodes are replayed in order to avoid catastrophic forgetting. The loss function for the reconstructed images is weighted to reduce its effect during classifier training to cope with image degradation. When the system runs out of memory the encoded episodes are converted into centroids and covariance matrices, which are used to generate pseudo-images during classifier training, keeping classifier performance stable while using less memory. Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.1
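The abstract above mentions that the loss on reconstructed images is down-weighted during classifier training; a minimal sketch of that weighting (detailed as Eqs. 5-6 in the sections below) is given here. The function names are ours, and the accuracy values in the usage line are made up for illustration.

```python
import torch

def sample_decay_weight(acc_replay, acc_original, n_retrainings):
    # Gamma = exp(-gamma * alpha), with gamma = 1 - acc_replay / acc_original measuring
    # how much the replayed (reconstructed or pseudo) images have degraded, and alpha
    # counting how many times they have already been reused for training.
    gamma = 1.0 - acc_replay / acc_original
    return torch.exp(torch.tensor(-gamma * n_retrainings))

def total_classifier_loss(loss_new, losses_rec, losses_pseudo, weights_rec, weights_pseudo):
    # L_D = L_t + sum_i (Gamma_r_i * L_r_i + Gamma_p_i * L_p_i)
    total = loss_new
    for l_r, l_p, g_r, g_p in zip(losses_rec, losses_pseudo, weights_rec, weights_pseudo):
        total = total + g_r * l_r + g_p * l_p
    return total

# Example: replayed images classified at 60% vs. 75% on the originals, reused twice.
w = sample_decay_weight(acc_replay=0.60, acc_original=0.75, n_retrainings=2)
```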
[ { "affiliations": [], "name": "CONTINUAL LEARNING" }, { "affiliations": [], "name": "Ali Ayub" }, { "affiliations": [], "name": "Alan R. Wagner" } ]
[ { "authors": [ "Ali Ayub", "Alan R. Wagner" ], "title": "Centroid based concept learning for rgb-d indoor scene classification", "venue": "In British Machine Vision Conference (BMVC),", "year": 2020 }, { "authors": [ "Ali Ayub", "Alan R. Wagner" ], "title": "Cognitively-inspired model for incremental learning using a few examples. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2020b", "venue": null, "year": 2020 }, { "authors": [ "Ali Ayub", "Alan R. Wagner" ], "title": "Storing encoded episodes as concepts for continual learning", "venue": null, "year": 2020 }, { "authors": [ "Ali Ayub", "Alan R. Wagner" ], "title": "Tell me what this is: Few-shot incremental object learning by a robot", "venue": null, "year": 2020 }, { "authors": [ "Francisco M. Castro", "Manuel J. Marin-Jimenez", "Nicolas Guil", "Cordelia Schmid", "Karteek Alahari" ], "title": "End-to-end incremental learning", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Arslan Chaudhry", "Puneet K. Dokania", "Thalaiyasingam Ajanthan", "Philip H.S. Torr" ], "title": "Riemannian walk for incremental learning: Understanding forgetting and intransigence", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Leon A. Gatys", "Alexander S. Ecker", "Matthias Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Ross Girshick" ], "title": "Fast R-CNN", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Motonobu Hattori" ], "title": "A biologically inspired dual-network memory model for reduction of catastrophic forgetting", "venue": "Neurocomput.,", "year": 2014 }, { "authors": [ "Tyler L. Hayes", "Christopher Kanan" ], "title": "Lifelong machine learning with deep streaming linear discriminant analysis", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NIPS Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "Saihui Hou", "Xinyu Pan", "Chen Change Loy", "Zilei Wang", "Dahua Lin" ], "title": "Learning a unified classifier incrementally via rebalancing", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Ronald Kemker", "Christopher Kanan" ], "title": "Fearnet: Brain-inspired model for incremental learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil C. 
Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A. Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "Claudia Clopath", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 2017 }, { "authors": [ "Y. LeChun" ], "title": "The mnist database of handwritten digits, 1998. URL http://yann.lecun.com/ exdb/mnist", "venue": null, "year": 1998 }, { "authors": [ "Michael L. Mack", "Bradley C. Love", "Alison R. Preston" ], "title": "Building concepts one episode at a time: The hippocampus and concept formation", "venue": "Neuroscience Letters,", "year": 2018 }, { "authors": [ "Oleksiy Ostapenko", "Mihai Puscas", "Tassilo Klein", "Patrick Jahnichen", "Moin Nabi" ], "title": "Learning to remember: A synaptic plasticity driven framework for continual learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "CoRR, abs/1511.06434,", "year": 2015 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H. Lampert" ], "title": "iCaRL: Incremental classifier and representation learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Louis Renoult", "Patrick S.R. Davidson", "Erika Schmitz", "Lillian Park", "Kenneth Campbell", "Morris Moscovitch", "Brian Levine" ], "title": "Autobiographically significant concepts: More episodic than semantic in nature? an electrophysiological investigation of overlapping types of memory", "venue": "Journal of Cognitive Neuroscience,", "year": 2015 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li FeiFei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "Int. J. Comput. Vision,", "year": 2015 }, { "authors": [ "Ari Seff", "Alex Beatson", "Daniel Suo", "Han Liu" ], "title": "Continual learning in generative adversarial", "venue": "nets. 
ArXiv,", "year": 2017 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Chenshen Wu", "Luis Herranz", "Xialei Liu", "yaxing wang", "Joost van de Weijer", "Bogdan Raducanu" ], "title": "Memory replay gans: Learning to generate new categories without forgetting", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Yue Wu", "Yinpeng Chen", "Lijuan Wang", "Yuancheng Ye", "Zicheng Liu", "Yandong Guo", "Zhengyou Zhang", "Yun Fu" ], "title": "Incremental classifier learning with generative adversarial networks", "venue": "CoRR, abs/1802.00853,", "year": 2018 }, { "authors": [ "Yue Wu", "Yinpeng Chen", "Lijuan Wang", "Yuancheng Ye", "Zicheng Liu", "Yandong Guo", "Yun Fu" ], "title": "Large scale incremental learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Y. Xiang", "Y. Fu", "P. Ji", "H. Huang" ], "title": "Incremental learning using conditional adversarial networks", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans continue to learn new concepts over their lifetime without the need to relearn most previous concepts. Modern machine learning systems, however, require the complete training data to be available at one time (batch learning) (Girshick, 2015). In this paper, we consider the problem of continual learning from the class-incremental perspective. Class-incremental systems are required to learn from a stream of data belonging to different classes and are evaluated in a single-headed evaluation (Chaudhry et al., 2018). In single-headed evaluation, the model is evaluated on all classes observed so far without any information indicating which class is being observed.\nCreating highly accurate class-incremental learning systems is a challenging problem. One simple way to create a class-incremental learner is by training the model on the data of the new classes, without revisiting the old classes. However, this causes the model to forget the previously learned classes and the overall classification accuracy decreases, a phenomenon known as catastrophic forgetting (Kirkpatrick et al., 2017). Most existing class-incremental learning methods avoid this problem by storing a portion of the training samples from the earlier learned classes and retraining the model (often a neural network) on a mixture of the stored data and new data containing new classes (Rebuffi et al., 2017; Hou et al., 2019). Storing real samples of the previous classes, however, leads to several issues. First, as pointed out by Wu et al. (2018b), storing real samples exhausts memory capacity and limits performance for real-world applications. Second, storing real samples introduces privacy and security issues (Wu et al., 2018b). Third, storing real samples is not biologically inspired, i.e. humans do not need to relearn previously known classes.\nThis paper explores the ”strict” class-incremental learning problem in which the model is not allowed to store any real samples of the previously learned classes. The strict class-incremental learning problem is more akin to realistic learning scenarios such as a home service robot that must learn continually with limited on-board memory. This problem has been previously addressed using generative models such as autoencoders (Kemker & Kanan, 2018) or Generative Adversarial Networks (GANs) (Ostapenko et al., 2019). Most approaches for strict class-incremental learning\n1A preliminary version of this work was presented at ICML 2020 Workshop on Lifelong Machine Learning (Ayub & Wagner, 2020c).\nuse GANs to generate samples reflecting old class data, because GANs generate sharp, fine-grained images (Ostapenko et al., 2019). The downside of GANs, however, is that they tend to generate images which do not belong to any of the learned classes, hurting classification performance. Autoencoders, on the other hand, always generate images that relate to the learned classes, but tend to produce blurry images that are also not good for classification.\nTo cope with these issues, we propose a novel, cognitively-inspired approach termed Encoding Episodes as Concepts (EEC) for continual learning, which utilizes convolutional autoencoders to generate previously learned class data. Inspired by models of the hippocampus (Renoult et al., 2015), we use autoencoders to create compressed embeddings (encoded episodes) of real images and store them in memory. 
To avoid the generation of blurry images, we borrow ideas from the Neural Style Transfer (NST) algorithm proposed by Gatys et al. (2016) to train the autoencoders. For efficient memory management, we use the notion of memory integration, from hippocampal and neocortical concept learning (Mack et al., 2018), to combine similar episodes into centroids and covariance matrices eliminating the need to store real data.\nThis paper contributes: 1) an autoencoder based approach to strict class-incremental learning which uses Neural Style Transfer to produce quality samples reflecting old class data (Sec. 3.1); 2) a cognitively-inspired memory management technique that combines similar samples into a centroid/covariance representation, drastically reducing the memory required (Sec. 3.2); 3) a data filtering and a loss weighting technique to manage image degradation of old classes during classifier training (Sec. 3.3). We further show that EEC outperforms state-of-the-art (SOTA) approaches on benchmark datasets by significant margins while also using far less memory." }, { "heading": "2 RELATED WORK", "text": "Most recent approaches to class-incremental learning store a portion of the real images belonging to the old classes to avoid catastrophic forgetting. Rebuffi et al. (2017) (iCaRL) store old class images and utilize knowledge distillation (Hinton et al., 2015) for representation learning and the nearest class mean (NCM) classifier for classification of the old and new classes. Knowledge distillation uses a loss term to force the labels of the images of previous classes to remain the same when learning new classes. Castro et al. (2018) (EEIL) improves iCaRL with an end-to-end learning approach. Wu et al. (2019) also stores real images and uses a bias correction layer to avoid any bias toward the new classes.\nTo avoid storing old class images, some approaches store features from the last fully-connected layer of the neural networks (Xiang et al., 2019; Hayes & Kanan, 2020; Ayub & Wagner, 2020b;d). These approaches, however, use a network pretrained on ImageNet to extract features, which gives them an unfair advantage over other approaches. Because of their reliance on a pretrained network, these approaches cannot be applied in situations when new data differs drastically from ImageNet (Russakovsky et al., 2015).\nThese difficulties have forced researchers to consider using generative networks. Methods employing generative networks tend to model previous class statistics and regenerate images belonging to the old classes while attempting to learn new classes. Both Shin et al. (2017) and Wu et al. (2018a) use generative replay where the generator is trained on a mixture of generated old class images and real images from the new classes. This approach, however, causes images belonging to classes learned in earlier increments to start to semantically drift, i.e. the quality of images degrades because of the repeated training on synthesized images. Ostapenko et al. (2019) avoids semantic drift by training the GAN only once on the data of each class. Catastrophic forgetting is avoided by applying elastic weight consolidation (Kirkpatrick et al., 2017), in which changes in important weights needed for old classes are avoided when learning new classes. They also grow their network when it runs out of memory while learning new classes, which can be difficult to apply in situations with restricted memory. 
One major issue with GAN based approaches is that GANs tend to generate images that do not belong to any of the learned classes which decreases classification accuracy. For these reasons, most approaches only perform well on simpler datasets such as MNIST (LeChun, 1998) but perform poorly on complex datasets such as ImageNet. Conditional GAN can be used to mitigate the problem of images belonging to none of the classes as done by Ostapenko et al. (2019), however the performance is still poor on complex datasets such as ImageNet-50 (see Table 1 and\nTable 2). We avoid the problem of generating images that do not belong to any learned class by training autoencoders instead of GANs.\nComparatively little work has focused on using autoencoders to generate samples because the images generated by autoencoders are blurry, limiting their usefulness for classifier training. Hattori (2014) uses autoencoders on binary pixel images and Kemker & Kanan (2018) (FearNet) uses a network pre-trained on ImageNet to extract feature embeddings for images, applying the autoencoder to the feature embeddings. Neither of these approaches are scalable to RGB images. Moreover, the use of a pre-trained network to extract features gives FearNet an unfair advantage over other approaches." }, { "heading": "3 ENCODING EPISODES AS CONCEPTS (EEC)", "text": "Following the notation of Chaudhry et al. (2018), we consider St = {(xti, yti)}n t\ni=1 to be the set of samples xi ∈ X and their ground truth labels yti belonging to task t. In a class-incremental setup, St can contain one or multiple classes and data for different tasks is available to the model in different increments. In each increment, the model is evaluated on all the classes seen so far.\nOur formal presentation of continual learning follows Ostapenko et al. (2019), where a task solver model (classifier for class-incremental learning) D has to update its parameters θD on the data of task t in an increment such that it performs equally well on all the t − 1 previous tasks seen so far. Data for the t − 1 tasks is not available when the model is learning task t. The subsections below present our approach." }, { "heading": "3.1 AUTOENCODER TRAINING WITH NEURAL STYLE TRANSFER", "text": "An autoencoder is a neural network that is trained to compress and then reconstruct the input (Goodfellow et al., 2016), formally fr : X → X . The network consists of an encoder that compresses the input into a lower dimensional feature space (termed as the encoded episode in this paper), genc : X → F and a decoder that reconstructs the input from the feature embedding, gdec : F → X . Formally, for a given input x ∈ X , the reconstruction pipeline fr is defined as: fr(x) = (gdec ◦ genc)(x). The parameters θr of the network are usually optimized using an l2 loss (Lr) between the inputs and the reconstructions:\nLr = ||x− fr(x)||2 (1)\nAlthough autoencoders are suitable for dimensionality reduction for complex, high-dimensional data like RGB images, the reconstructed images lose the high frequency components necessary for correct classification. To tackle this problem, we train autoencoders using some of the ideas that underline Neural Style Transfer (NST). NST uses a pre-trained CNN to transfer the style of one image to another. The process takes three images, an input image, a content image and a style image and alters the input image such that it has the content image’s content and the artistic style of the style image. 
The three images are passed through the pre-trained CNN generating convolutional feature maps (usually from the last convolutional layer) and l2 distances between the feature maps of the input image and content image (content loss) and style image (style loss) are calculated. These losses are then used to update the input image.\nIntuitively, our intent here is to create reconstructed images that are similar to the real images (in the pixel and convolutional space), thereby improving classification accuracy. Hence, we only utilize the idea of content transfer from the NST algorithm, where the input image is the image reconstructed by the autoencoder and content image is the real image corresponding to the reconstructed image. The classifier model, D, is used to generate convolutional feature maps for the NST, since it is already trained on real data for the classes in the increment t. In contrast to the traditional NST algorithm, we use the content loss (Lcont) to train the autoencoder, rather than updating the input image directly. Formally, let fc : X → Fc be the classifier pipeline that converts input images into convolutional features. For an input image, xti of task t, the content loss is:\nLcont = ||fc(xti)− fc(fr(xti))||2 (2)\nThe autoencoder parameters are optimized using a combination of reconstruction and content losses:\nL = (1− λ)Lr + λLcont (3)\nwhere, λ is a hyperparamter that controls the contribution of each loss term towards the complete loss. During autoencoder training, classifier D acts as a fixed feature extractor and its parameters are not updated. This portion of the complete procedure is depicted in Figure 1 (a).\nTo provide an illustration of our approach, we perform an experiment with ImageNet-50 dataset. We trained one autoencoder on 10 classes from ImageNet-50 with NST and one without NST. Figure 2 depicts the reconstructed images by the two autoencoders. Note that the images generated by the autoencoder trained without using NST are blurry. In contrast, the autoencoder trained using NST creates images with fine-grained details which improves the classification accuracy." }, { "heading": "3.2 MEMORY INTEGRATION", "text": "As mentioned in the introduction, continual learning also presents issues associated with the storage of data in memory. For EEC, for each new task t, the data is encoded and stored in memory. Even though the encoded episodes require less memory than the real images, the system can still run out of memory when managing a continuous stream of incoming tasks. To cope with this issue, we propose a process inspired by memory integration in the hippocampus and the neocortex (Mack et al., 2018). Memory integration combines a new episode with a previously learned episode summarizing the information in both episodes in a single representation. The original episodes themselves are forgotten.\nConsider a system that can store a total of K encoded episodes based on its available memory. Assume that at increment t − 1, the system has a total of Kt−1 encoded episodes stored. It is now required to store Kt more episodes in increment t. The system runs out of memory because Kt + Kt−1 > K. Therefore, it must reduce the number of episodes to Kr = Kt−1 + Kt −K. Because each task is composed of a set of classes at each increment, we reduce the total encoded episodes belonging to different classes based on their previous number of encoded episodes. 
Formally, the reduction in the number of encoded episodes Ny for a class y is calculated as (whole number):\nNy(new) = Ny(1− Kr Kt−1 ) (4)\nTo reduce the encoded episodes toNy(new) for class y, inspired by the memory integration process, we use an incremental clustering process that combines the closest encoded episodes to produce cen-\ntroids and covariance matrices. This clustering technique is similar to the Agg-Var clustering proposed in our earlier work (Ayub & Wagner, 2020b;a). The distance between encoded episodes is calculated using the Euclidean distance, and the centroids are calculated using the weighted mean of the encoded episodes. The process is repeated until the sum of the total number of centroids/covariance matrices and the encoded episodes for class y equal Ny(new). The original encoded episodes are removed and only the centroid and the covariances are kept (see Appendix A for more details)." }, { "heading": "3.3 REHEARSAL, PSEUDOREHEARSAL AND CLASSIFIER TRAINING", "text": "Figure 1 (b) depicts the procedure for training the classifier D when new data belonging to task t becomes available. The classifier is trained on data from three sources: 1) the real data for task t, 2) reconstructed images generated from the autoencoder’s decoder, and 3) pseudo-images generated from centroids and covariance matrices for previous tasks. Source (2) uses the encodings from the previous tasks to generate a set of reconstructed images by passing them through the autoencoder’s decoder. This process is referred to as rehearsal (Robins, 1995).\nPseudorehearsal: If the system also has old class data stored as centroids/covariance matrix pairs, pseudorehearsal is employed. For each centroid/covariance matrix pair of a class we sample a multivariate Gaussian distribution with mean as the centroid and the covariance matrix to generate a large set of pseudo-encoded episodes. The episodes are then passed through the autoencoder’s decoder to generate pseudo-images for the previous classes. Many of the pseudo-images are noisy. To filter the noisy pseudo-images, we pass them through classifier D, which has already been trained on the prior classes, to get predicted labels for each pseudo-image. We only keep those pseudo-images that have the same predicted label as the label of the centroid they originated from. Among the filtered pseudo-images, we select the same number of pseudo-images as the total number of encoded episodes represented by the centroid and discard the rest (see Appendix A for the algorithm).\nTo illustrate the effect of memory integration and pseudorehearsal, we performed an experiment on MNIST dataset. We trained an autoencoder on 2 classes (11379 images) from MNIST dataset with the embedding dimension of size 2. After training, we passed all the training images for the two classes through the encoder to get 11379 encoded episodes (see Figure 3 (a)). Next, we applied our memory integration technique on the encoded episodes to reduce the encoded episodes to 4000, 2000, 1000 and 200 centroids (and covariances). No original encoded episodes were kept. Pseudorehearsal was then applied on the centroids to generate pseudo-encoded episodes. The pseudo-encoded episodes for different numbers of centroids are shown in Figure 3 (b-e). Note that the feature space for the pseudo-encoded episodes generated by different number of centroids is very similar to the original encoded episodes. 
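As a rough illustration of the reduction and sampling steps just described, the sketch below greedily merges the closest latent codes into weighted centroids and then draws pseudo-encodings from a Gaussian around a centroid, filtering them with the already-trained classifier. It is a simplified stand-in for the Agg-Var-style clustering (covariance bookkeeping is omitted), and all function names are ours.

```python
import torch

def reduce_to_centroids(codes, n_keep):
    """Greedily merge the two closest codes into their weighted mean until only
    n_keep representatives remain; a simplified stand-in for the Agg-Var-style
    clustering used for memory integration (covariances are not tracked here)."""
    reps = [c.clone().float() for c in codes]
    weights = [1.0] * len(reps)
    while len(reps) > n_keep:
        stack = torch.stack(reps)
        dist = torch.cdist(stack, stack) + torch.eye(len(reps)) * 1e9  # mask self-distances
        i, j = divmod(int(dist.argmin()), len(reps))
        merged = (weights[i] * reps[i] + weights[j] * reps[j]) / (weights[i] + weights[j])
        w = weights[i] + weights[j]
        for k in sorted((i, j), reverse=True):
            reps.pop(k)
            weights.pop(k)
        reps.append(merged)
        weights.append(w)
    return torch.stack(reps), weights

def pseudorehearsal(centroid, cov, decoder, classifier, label, n_samples, n_keep):
    """Sample pseudo-encodings from N(centroid, cov), decode them to pseudo-images,
    and keep only those the (already trained) classifier assigns to the centroid's class."""
    dist = torch.distributions.MultivariateNormal(centroid, covariance_matrix=cov)
    codes = dist.sample((n_samples,))
    images = decoder(codes)
    keep = classifier(images).argmax(dim=1) == label
    return images[keep][:n_keep]

# Toy usage with random 2-D codes, mirroring the MNIST illustration above.
codes = [torch.randn(2) for _ in range(200)]
centroids, counts = reduce_to_centroids(codes, n_keep=10)
```

In the paper, merging continues only until the per-class budget Ny(new) is met, and the covariance of each merged group is stored alongside its centroid.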
For a larger number of centroids, the feature space looks almost exactly the same as the original feature space. For the smallest number of centroids (Figure 3 (e)), the feature space becomes less uniform and more divided into small dense regions. This is because a smaller number of centroids are being used to represent the overall concept of the feature space across different regions resulting in large gaps between the centroids. Hence, the pseudo-encoded episodes generated using the centroids are more dense around the centroids reducing uniformity. Still, the overall shape of the feature space is preserved after using the centroids and pseudorehearsal. Hence, our approach conserves information about previous classes, even with less memory, contributing to classifier performance while also avoiding catastrophic forgetting. The results presented in Section 4.3 on ImageNet-50 confirm the effectiveness of memory integration and pseudorehearsal.\nSample Decay Weight: The reconstructed images and pseudo-images can still be quite different from the original images, hurting classifier performance. We therefore weigh the loss term for reconstructed and pseudo-images while training D. For this, we estimate the degradation in the reconstructed and pseudo-images. To estimate degradation in the reconstructed images, we find the ratio of the classification accuracy of the reconstructed images (cr) to the accuracy of the original images (co) on network D trained on previous tasks. This ratio is used to control the weight of the loss term for the reconstructed images. For a previous task t− 1, the weight Γrt−1 (the sample decay weight) for the loss term Lrt−1 of the reconstructed images is defined as:\nΓrt−1 = e −γrt−1αt−1 (5)\nwhere γrt−1 = 1− crt−1 cot−1\n(sample decay coefficient) denotes the degradation in reconstructed images of task t−1 and αt−1 represents the number of times an autoencoder has been trained on the pseudoimages or reconstructed images of task t − 1. The value of sample decay coefficient ranges from 0 to 1, depending on the classification accuracy of the reconstructed images crt−1. If c r t−1 = c o t−1, γrt−1 = 0 (no degradation) and Γ r t−1 = 1. Similarly, the sample decay weight Γ p t−1 for loss term Lpt−1 for the pseudo-images is based on the classification accuracy c p t−1 of pseudo-images for task t− 1. Thus, for a new increment, the total loss LD for training D on the reconstructed and pseudoimages of the old tasks and real images of the new task t is defined as:\nLD = Lt + t−1∑ i=1 (ΓriLri + Γ p iL p i ) (6)" }, { "heading": "4 EXPERIMENTS", "text": "We tested and compared EEC to several SOTA approaches on four benchmark datasets: MNIST, SVHN, CIFAR-10 and ImageNet-50. We also report the memory used by our approach and its performance in restricted memory conditions. Finally, we present an ablation study to evaluate the contribution of different components of EEC. Experiments on CIFAR-100 and comparison with additional methods are presented in Appendix D. Other generative memory approaches by previous authors did not test on the CIFAR-100 dataset." }, { "heading": "4.1 DATASETS", "text": "The MNIST dataset consists of grey-scale images of handwritten digits between 0 to 9, with 50,000 training images, 10,000 validation images and 10,000 test images. SVHN consists of colored cropped images of street house numbers with different illuminations and viewpoints. It contains 73,257 training and 26,032 test images belonging to 10 classes. 
CIFAR-10 consists of 50,000 RGB training images and 10,000 test images belonging to 10 object classes. Each class contains 5000 training and 1000 test images. ImageNet-50 is a smaller subset of the iLSVRC-2012 dataset containing 50 classes with 1300 training images and 50 validation images per class. All of the dataset images were resized to 32×32, in concordance to (Ostapenko et al., 2019)." }, { "heading": "4.2 IMPLEMENTAION DETAILS", "text": "We used Pytorch (Paszke et al., 2019) and an Nvidia Titan RTX GPU for implementation and training of all neural network models. A 3-layer shallow convolutional autoencoder was used for all datasets (see Appendix B), which requires approximately 0.2MB of disk space for storage. For classification, on the MNIST and SVHN datasets the same classifier as the DCGAN discriminator (Radford et al., 2015) was used, for CIFAR-10 the ResNet architecture proposed by Gulrajani et al. (2017) was used and for ImageNet-50, ResNet-18 (He et al., 2016) was used.\nIn concordance with Ostapenko et al. (2019) (DGMw), we report the average incremental accuracy on 5 and 10 classes (A5 and A10) for the MNIST, SVHN and CIFAR-10 datasets trained continually with one class per increment. For ImageNet-50, the average incremental accuracy on 3 and 5 increments (A30 and A50) is reported with 10 classes in each increment. For a fair comparison,\nwe typically compare against approaches with a generative memory replay component that do not use a pre-trained feature extractor and are evaluated in a single-headed fashion. Among such approaches, to the best of our knowledge, DGMw represents the state-of-the-art benchmark on these datasets which is followed by MeRGAN (Wu et al., 2018a), DGR (Shin et al., 2017) and EWC-M (Seff et al., 2017). Joint training (JT) is used to produce an upperbound for all four datasets. We compare these methods against two variants of our approach: EEC and EECS. EEC uses a separate autoencoder for classes in a new increment, while EECS uses a single autoencoder that is retrained on the reconstructed images of the old classes when learning new classes. For both EEC and EECS, results are reported when all of the encoded episodes for the previous classes are stored. The results for EEC under restricted memory conditions are also presented.\nHyperparameter values and training details are reported in Appendix C. We performed each experiment 10 times with different random seeds and report average accuracy over all runs." }, { "heading": "4.3 COMPARISON WITH SOTA METHODS", "text": "Table 1 compares EEC and EECS against SOTA approaches on the MNIST, SVHN, CIFAR-10 and ImageNet-50 datasets. We compare against two different types of approaches, those that use real images (episodic memory) of the old classes and those that generate previous class images when learning new classes. Both EEC and EECS outperform EWC-M and DGR by significant margins on the MNIST on A5 and A10. MeRGAN and DGMw perform similarly to our methods on the A5 and A10 experiments. Note that MeRGAN and EEC approach the JT upperbound on A5. Further, accuracy for MeRGAN, DGMw and EEC changes only slightly between A5 and A10, suggesting that MNIST is too simple of a dataset for testing continual learning using generative replay.\nConsidering SVHN, a more complex dataset consisting of colored images of street house numbers, the performance of our method remains reasonably close to the JT upperbound, even though the performance of other approaches decreases significantly. 
For A5, EEC achieves only 0.27% lower accuracy than JT and 11.36% higher than DGMw (current best method). For A10, EEC is 4.55% lower than JT but still 15.21% higher than DGMw and achieves 5.66% higher accuracy on A10 than DGMw did on A5. EECS performs slightly lower than EEC on A5 but the gap widens on A10. However, EECS also beats the SOTA approaches on both A5 and A10.\nConsidering the more complex CIFAR-10 and ImageNet-50 datasets, only DGMw reported results for these datasets. On CIFAR-10, EEC beats DGMw on A5 by a margin of 20.18% and on A10 by a margin of 10.7%. Similar to the SVHN results, the accuracy achieved by EEC on A10 is even higher than DGMw’s accuracy on A5. In comparison with the JT, EEC performs similarly on A5 (0.38% lower), however on A10 it performs significantly lower. For ImageNet-50, again we see similar results as both EEC and EECS outperform DGMw on A30 and A50 by significant margins (13.29% and 17.42%, respectively). Similar to SVHN and CIFAR-10 results, accuracy for EEC on A50 is even higher than DGMw’s accuracy on A30. Further, EEC also beats iCaRL (episodic\nmemory SOTA method) with margins of 16.01% and 6.26% on A30 and A50, respectively, even though iCaRL has an unfair advantage of using stored real images.\nDiscussion: Our method performed consistently across all four datasets, especially on A5 for MNIST, SVHN and CIFAR-10. In contrast, DGMw (the best current method) shows significantly different results across the four datasets. The results suggest that the current generative memory-based SOTA approaches are unable to mitigate catastrophic forgetting on more complex RGB datasets. This could be because GANs tend to generate images that do not belong to any of learned classes, which can drastically reduce classifier performance. Our approach copes with these issues by training autoencoders borrowing ideas from the NST algorithm and retraining of the classifier with sample decay weights. Images reconstructed by EEC for CIFAR-10 after 10 tasks and for ImageNet-50 after 5 tasks are shown in Figure 4 (more examples in supplementary file).\nMemory Usage Analysis: Similar to DGMw, we analyze the disc space required by our model for the ImageNet-50 dataset. For EEC, the autoencoders use a total disc space of 1 MB, ResNet-18 uses about 44 MB, while the encoded episodes use a total of about 66.56 MB. Hence, the total disc space required by EEC is about 111.56 MB. DGMw’s generator (with corresponding weight masks), however, uses 228MB of disc space and storing pre-processed real images of ImageNet-50 requires disc space of 315MB. Hence, our model requires 51.07% ((228-111.56)/228 = 0.5107) less space than DGMw and 64.58% less space than the real images for ImageNet-50 dataset.\nEEC was tested with different memory budgets on the ImageNet-50 dataset to evaluate the impact of our memory management technique. The memory budgets (K) are defined as the sum of the total number of encoded episodes, centroids and covariance matrices (diagonal entries) stored by the system. Figure 5 shows the results in terms ofA30 andA50 for a wide range of memory budgets. The accuracy of EEC changes only slightly over different memory budgets. Even with a low budget of K=5000, the A30 and A50 accuracies are only 3.1% and 3.73% lower than the accuracy of EEC with unlimited memory. Further, even for K=5000, EEC beats DGMw (current SOTA on ImageNet50) by margins of 10.17% and 13.73% on A30 and A50, respectively. 
The total disc space required for K=5000 is only 5.12 MB and the total disc space for the complete system is 50.12 MB (44 MB for ResNet-18 and 1 MB for autoencoders), which is 78.01% less than DGMw’s required disc space (228 MB). These results clearly depict that our approach produces the best results even with extremely limited memory, a trait that is not shared by other SOTA approaches. Moreover, the results also show that our approach is capable of dealing with the two main challenges of continual learning mentioned earlier: catastrophic forgetting and memory management." }, { "heading": "4.4 ABLATION STUDY", "text": "We performed an ablation study to examine the contribution of using: 1) the content loss while training the autoencoders, 2) pseudo-rehearsal and 3) the sample weight decay. These experiments were performed on the ImageNet-50 dataset. Complete encoded episodes of previous tasks were used for ablations 1 and 3, and a memory budget of K=5000 was used for ablation 2. We created hybrid versions of EEC to test the contribution of each component. Hybrid 1: termed as EEC-noNST does not use the content loss while training the autoencoders. Hybrid 2: termed as EEC-CGAN uses a conditional GAN instead of NST based autoencoder. Hybrid 3: termed as EEC-VAE uses a variational autoencoder and Hybrid 4: termed as EEC-DNA uses a denoising autoencoder instead of our proposed NST based autonecoder. Hybrid 5: termed as EEC-noPseudo simply removes the extra encoded episodes when the system runs out of memory and does not use pseudo-rehearsal. Hybrid 6: termed as EEC-noDecay does not use sample weight decay during classifier training on new and\nold tasks. Except for the changed component, all the other components in the hybrid approaches were the same as used in EEC.\nAll of the hybrids show inferior performance as compared to the complete EEC approach (Table 2). Evaluated on A30 and A50, EEC-noNST, EEC-CGAN, EEC-VAE and EEC-DNA all result in ∼5.8% and ∼6.5% lower accuracy, respectively than EEC. EEC-noDecay results in 3.77% and 4.77% lower accuracy onA30 andA50 respectively, than EEC. As a fair comparison with EEC using K=5000, EEC-noPseudo results in 4.98% and 6.27% lower accuracy for A30 and A50, respectively. The results show that all of these components contribute significantly to the overall performance of EEC, however training the autoencoders with content loss and pseudo-rehearsal seem to be more important. Also, note that the conditional GAN with the other components of our approach (such as sample weight decay) produces 7.80% and 11.3% higher accuracy on A30 and A50, respectively, than DGMw. Further note that all the hybrids that do not use the NST based autoencoder (Hybrids 1-4) produce similar accuracy, since they use the sample weight decay to help mange the degradation of reconstructed images. These results show the effectiveness of sample weight decay to cope with image degradation." }, { "heading": "4.5 EXPERIMENTS ON HIGHER RESOLUTION IMAGES", "text": "As mentioned in Subsection 4.2, all the experiments on the four datasets were done using resized images of size 32×32. To show further insight into our approach, we performed experiments on ImageNet-50 dataset with higher resolution images of size 256×256 and randomly cropped to 224×224. Evaluated on A30 and A50, EEC achieved 70.75% and 56.89% accuracy, respectively using images of size 224×224. 
These results show that using higher resolution images improve the performance of EEC by significant margins (25.36% and 21.65% increase on A30 and A50). Note that storing real high resolution images requires a much larger disk space than storing images of size 32×32. For ImageNet-50, storing original images of size 224×224 requires 39.13GB, while images of size 32×32 require only 315MB. In contrast, EEC requires only 13.04 GB (66% less than real images of size 224×224) when using the same autoencoder architecture that was used for images of size 32×32. Using a different architecture, we can bring the size of encoded episodes to be the same as for images of size 32×32 but the accuracy achieved is lower. These results further show the effectiveness of our approach to mitigate catastrophic forgetting while using significantly less memory compared to real images." }, { "heading": "5 CONCLUSION", "text": "This paper has presented a novel and potentially powerful approach (EEC) to strict class-incremental learning. Our paper demonstrates that the generation of high quality reconstructed data can serve as the basis for improved classification during continual learning. We further demonstrate techniques for dealing with image degradation during classifier training on new tasks as well as a cognitivelyinspired clustering approach that can be used to manage the memory. Our experimental results demonstrate that these techniques mitigate the effects of catastrophic forgetting, especially on complex RGB datasets, while also using less memory than SOTA approaches. Future continual learning approaches can incorporate different components of our approach such as the NST-based autoencoder, pseudo-rehearsal and sample decay weights for improved performance." }, { "heading": "A EEC ALGORITHMS", "text": "The algorithms below describe portions of the complete EEC algorithm. Algorithm 1 is for autoencoder training (Section 3.1 in paper), Algorithm 2 is for memory integration (Section 3.2 in paper), Algorithm 3 is for rehearsal, pseudo-rehearsal and classifier training (Section 3.3 in paper) and Algorithm 4 is for filtering pseudo-images (Section 3.3 in paper).\nAlgorithm 1: EEC: Train Autoencoder\nInput: St = {(xti, yti)}n t\ni=1 . data points with ground truths for task t require: λ require: fr = (gdec ◦ genc) . autoencoder pipeline require: fc : X → Fc . convolutional feature extractor from classifier D require: Nepoch . Number of epochs Output: Ft = {(f ti , yti)}n t i=1 . encoded episodes with ground truths for task t\n1: for j = 1;j < Nepoch do 2: X∗t = fr(X\nt) . get reconstructed images for task t 3: Lr = ||Xt −X∗t ||2 . Reconstruction loss 4: Lcont = ||fc(Xt)− fc(X∗t )||2 5: . Content loss 6: L = (1− λ)Lr + λLcont . complete autoencoder loss 7: θr ← minimize(L) . update autoencoder parameters such that it minimizes L 8: Ft = genc(Xt) . get encoded episodes for images of task t\nAlgorithm 2: EEC: Memory Integration\nInput: F = {Fi}t−1i=1 , where Fi = {(f ij , yij)}n i j=1 . encoded episodes set for previous t− 1 tasks require: K . maximum number of episodes require: Kt . number of encoded episodes for task t Output: F (new) . reduced encoded episodes for previous tasks Output: C = {Ci}t−1i=1 . centroids for previous tasks Output: Σ = {Σi}t−1i=1 . covariances for previous tasks Initialize: Kt−1 = ∑t−1 i=1 n\ni . number of encoded episodes for t− 1 previous tasks 1: Kr = Kt +Kt−1 −K 2: for i = 1;i ≤ t− 1 do 3: for j = 1;j ≤ yt do 4: repeat . 
for each class in task i find centroids and covariances 5: cj , σj ← combine(xil, xiq)∀yil = yiq = j . Combine closest points for class j in task i\n6: Nj(new) = 2Nj(cent) + ∑ni l=1,yil=j 1 . Nj(cent): number of centroids for class j 7: until Nj(new) = Nj(1− KrKt−1 )\nAlgorithm 3: EEC: Train Classifier\nInput: St = {(xti, yti)}n t\ni=1 . data points with ground truths for task t Input: F = {Fi}t−1i=1 , where Fi = {(f ij , yij)}n i j=1 . encoded episodes for previous tasks Input: C = {Ci}t−1i=1 , where Ci = {(cij , yij)} Ni(cent) j=1 . centroids for previous tasks Input: Σ = {Σi}t−1i=1 , where Σi = {(σij , yij)} Ni(cent) j=1 . covariances for previous tasks Input: M = {Mi}t−1i=1 , where Mi = {mij} Ni(cent) j=1 . no. of episodes clustered in each centroid require: cot−1 . accuracy of original training images of previous t− 1 tasks require: D . Classifier model require: {gjdec} t−1 j=1 . decoder part of autoencoders for each of the previous tasks require: Nepoch . Number of epochs 1: for i = 1;i ≤ t− 1 do 2: for j = 1;j ≤ N i(cent) do 3: F ∗i (large).append(multivariate normal(c i j , σ i j)) . generate pseudo-samples for\neach concept 4: X ′\ni(large) = g i dec(F ∗ i (large)). generate large no. of pseudo-images from pseudo-samples\n5: X ′ i = FILTER(X ′\ni(large),Mi) . Algorithm 4 6: X∗i = g i dec(Fi) . get reconstructed images for encoded episodes 7: for i = 1;i ≤ t− 1 do 8: Y ∗i = D(X ∗ i ) . predictions for reconstructed images for task i 9: Y ′\ni = D(X ′ i) . predictions for pseudo-images for task i\n10: cri = ∑ni∗ j=1 1[Y ∗ i (j)==Yi(j)] ni∗ . accuracy for reconstructed images of task i\n11: cpi = ∑ni′ j=1 1[Y ′ i (j)==Yi(j)] ni′ . accuracy for pseudo images of task i 12: γri = 1− cri coi . sample decay coefficient for reconstructed images of task i 13: γpi = 1− cpi coi . sample decay coefficient for pseudo images of task i 14: Γri = e −γri αi 15: Γpi = e −γpi αi 16: for j = 1;j < Nepoch do 17: for i = 1;i ≤ t− 1 do 18: Y ∗i = D(X ∗ i ) . predictions for reconstructed images for task i 19: Y ′\ni = D(X ′\ni) . predictions for pseudo-images for task i 20: Lri = CrossEntropy(Y ∗i − Yi) . loss for reconstructed images of task i 21: Lpi = CrossEntropy(Y ′\ni − Yi) . loss for psuedo-images of task i 22: Y ∗t = D(Xt) . predictions for data points for task t 23: Lt = CrossEntropy(Y ∗t − Yt) . loss for task t 24: LD = Lt + ∑t−1 i=1 Γ r iLri + Γ p iL p i 25: θD ← minimize(LD) . update classifier parameters such that it minimizes LD\nAlgorithm 4: EEC: Filter Pseudo-images\nInput: St(large) = {(xti, yti)} nt(large) i=1 , where Xt(large) = {xti} nt(large) i=1 , Yt(large) = {yti} nt(large) i=1 Input: Mt = {mti} Nt(cent) i=1 require: D Output: St = {(xti, yti)}n t i=1 . filtered pseudo-images for task t\n1: Y ∗t (large) = D(Xt(large)) 2: Xt = {xti ∈ Xt(large); yti == yt ∗ i } nt(large) i=1 . pseudo-images with correct predicted label 3: Xt = {xti ∈ Xt} Mt i=1 . keep Mt pseudo-images" }, { "heading": "C HYPERPARAMETERS", "text": "" }, { "heading": "B AUTOENCODER ARCHITECTURES", "text": "" }, { "heading": "D EXPERIMENTS ON CIFAR-100", "text": "For a fair comparison, the main set of approaches that we compared to are generative memory approaches. These approaches were only tested on the four datasets (Section 4). However, many episodic memory approaches and approaches that use pre-trained CNN features did not present results on the four datasets in Section 4. 
Hence, we test our approach (EEC) on CIFAR-100 to compare against more episodic memory approaches and approaches that use a pre-trained CNN as a feature extractor.\nCIFAR-100 consists of 60,000 32 × 32 images belonging to 100 object classes. There are 500 training images and 100 test images for each class. We divided the dataset into 10 batches with 10 classes per batch. During training, in each increment we provided the algorithm a new batch of 10 classes. We report the average incremental accuracy over the 10 increments as the evaluation metric. For evaluation against approaches that use a pre-trained network, we first extracted features from a pre-trained ResNet-34 on ImageNet and then applied EEC on the feature vectors instead of raw RGB images. This version of EEC is denoted as EEC-P. The hyperparmeter values and training details for this experiment are presented in Appendix C. We performed this experiment 10 times with different random seeds and report the average accuracy over all the runs.\nAmong the episodic memory approaches, we compare against iCaRL (Rebuffi et al., 2017), EEIL (Castro et al., 2018), BiC (Wu et al., 2019) and LUCIR (Hou et al., 2019). Among the approaches that use a pre-trained CNN, we compare against FearNet (Kemker & Kanan, 2018) and the method proposed by Xiang et al. (2019) (CAN). These approaches have been introduced in Section 2.\nTable 6 shows the comparison of EEC against all the above mentioned approaches. EEC beats all the state-of-the-art (SOTA) episodic memory approaches by a significant margin, even though these approaches have an unfair advantage of using stored real images of the old classes. Although the difference between EEC and other approaches (1.3%) is lower than the difference on the other four datasets. EEC-P also beats the SOTA approaches that use a pre-trained CNN by a much greater margin (9.6%). These results further show the effectiveness of our approach for mitigating catastrophic forgetting." }, { "heading": "E COMPARISON WITH FEARNET", "text": "FearNet (Kemker & Kanan, 2018) is one of the few approaches that uses some ideas similar to EEC. In particular, FearNet uses autoencoders to reconstruct old data and stores a single centroid and covariance matrix for each of the old classes. FearNet also uses pseudorehearsal on the centroids and covariances of old classes to regenerate pseudo-encoded episodes of the old classes, which is the same as our approach. However, unlike EEC, FearNet uses a pre-trained feature extractor which is a limitation. Further, for pseudorehearsal, FearNet only stores a single centroid and covariance matrix for each of the old classes, which is not enough to capture the entire feature space of the classes. In contrast, EEC uses memory integration based clustering technique to store multiple centroids and covariance matrices per class to better capture the overall feature space of the old classes.\nTo illustrate the difference between the two approaches, we performed the same experiment on MNIST dataset as in Section 3.3. For FearNet, we allowed the model to store a single centroid (and\ncovariance matrix) for each of the two classes and then used pseudorehearsal to generate pseudoencoded episodes. Figure 6 shows the comparison between the original feature space (Figure 6 (a)), pseudo-encoded episodes generated with 200 centroids using memory integration (Figure 6 (b)) and pseudo-encoded episodes generated using a single centroid per class (2 centroids) as in FearNet (Figure 6 (c)). 
The feature space generated by FearNet is almost circular and does not resemble the original feature space. Also, there is an overlap between the feature spaces of the two classes, which leads to similar images for the two classes and eventually hurts classifier performance. In contrast, the pseudo-encoded episodes generated by memory integration capture the overall concept of the original feature space without any overlap between the feature spaces of the two classes. Results in Table 6 on the CIFAR-100 dataset also confirm that EEC is significantly superior to FearNet in terms of the average incremental accuracy (10.5% improvement in accuracy)." }, { "heading": "F EXPERIMENT ON IMAGENET-50 WITH ORIGINAL AND RECONSTRUCTED IMAGES", "text": "In this experiment, we allow EEC to partially store original images and the rest as encoded episodes. We define the ratio of original images to total images as r = no/N, where no is the number of original images stored per class and N is the total number of images (original and reconstructed) per class. This experiment was performed with images of size 32×32. Table 7 shows the accuracy for EEC in terms of A30 and A50 for different values of r. r = 0 corresponds to EEC with no original images and only the encoded episodes. By increasing r to 0.1, we see a dramatic increase in accuracy; however, the increase in memory is minimal (131.53 MB for r = 0.1 as compared to 111.56 MB for r = 0). As the value of r continues to increase, EEC's accuracy gets closer and closer to the JT upperbound." } ]
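Since Section 3.1 above gives the NST-style autoencoder objective only in equation form, here is a minimal PyTorch sketch of Eqs. 1-3, assuming a toy encoder/decoder and a frozen stand-in for the classifier's convolutional trunk; the real 3-layer architecture (Appendix B) and the value of λ used in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

# Illustrative 32x32 RGB autoencoder; the paper's 3-layer architecture is in its Appendix B.
encoder = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                        nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                        nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

# Frozen convolutional trunk standing in for the task classifier D, used only as a feature extractor.
feature_extractor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(4), nn.Flatten())
for p in feature_extractor.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
lam = 0.5                               # lambda in Eq. 3; a tuning choice, value assumed here

x = torch.rand(8, 3, 32, 32)            # stand-in batch of real images for the current task
for _ in range(10):
    x_rec = decoder(encoder(x))         # f_r(x)
    loss_r = ((x - x_rec) ** 2).mean()                                         # Eq. 1 (mean-squared form)
    loss_c = ((feature_extractor(x) - feature_extractor(x_rec)) ** 2).mean()   # Eq. 2, content loss
    loss = (1 - lam) * loss_r + lam * loss_c                                   # Eq. 3
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Freezing the feature extractor matches the description that the classifier D acts as a fixed feature extractor while the autoencoder is trained.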
2021
null
SP:da8ca392a4eb366f4fdedb09d461ef804615b0b2
[ "In this paper, the authors propose a latent space regression method for analyzing and manipulating the latent space of pre-trained GAN models. Unlike existing optimization-based methods, an explicit latent code regressor is learned to map the input to the latent space. The authors apply this approach to several applications: image composition, attribute modification, image completion, and multimodal editing. They also present some analysis on the independence of semantic parts of an image." ]
In recent years, Generative Adversarial Networks have become ubiquitous in both research and public perception, but how GANs convert an unstructured latent code to a high quality output is still an open question. In this work, we investigate regression into the latent space as a probe to understand the compositional properties of GANs. We find that combining the regressor and a pretrained generator provides a strong image prior, allowing us to create composite images from a collage of random image parts at inference time while maintaining global consistency. To compare compositional properties across different generators, we measure the trade-offs between reconstruction of the unrealistic input and image quality of the regenerated samples. We find that the regression approach enables more localized editing of individual image parts compared to direct editing in the latent space, and we conduct experiments to quantify this independence effect. Our method is agnostic to the semantics of edits, and does not require labels or predefined concepts during training. Beyond image composition, our method extends to a number of related applications, such as image inpainting or example-based image editing, which we demonstrate on several GANs and datasets, and because it uses only a single forward pass, it can operate in real-time. Code is available on our project page: https://chail.github.io/latent-composition/. (Teaser figure: image composition, attribute editing, multimodal editing, and image completion.)
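To make the single-forward-pass compositing described in this abstract concrete, below is a schematic PyTorch sketch; G and E are random stand-ins for the pretrained generator and the learned latent regressor (the real models and training code are on the project page linked above), and only the mask-based collage step is spelled out.

```python
import torch
import torch.nn as nn

latent_dim = 128

# Placeholder generator and latent regressor; in practice G is a pretrained, frozen GAN
# generator and E is the regressor trained to map images back to G's latent space.
G = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh(),
                  nn.Unflatten(1, (3, 64, 64)))
E = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, latent_dim))

def composite(parts, masks):
    """Paste image `parts` together with binary `masks`, then regress the collage into
    the latent space and regenerate it in a single forward pass: G(E(collage))."""
    collage = torch.zeros_like(parts[0])
    for img, m in zip(parts, masks):
        collage = collage * (1 - m) + img * m
    with torch.no_grad():
        return G(E(collage)), collage

parts = [torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)]
mask = torch.zeros(1, 1, 64, 64)
mask[..., :32, :] = 1.0                       # take the top half from the first image
output, collage = composite(parts, [mask, 1 - mask])
```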
[ { "affiliations": [], "name": "Lucy Chai" }, { "affiliations": [], "name": "Jonas Wulff" }, { "affiliations": [], "name": "Phillip Isola" } ]
[ { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "Image2stylegan: How to embed images into the stylegan latent space", "venue": null, "year": 2019 }, { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "Image2stylegan++: How to edit the embedded images", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2020 }, { "authors": [ "Amjad Almahairi", "Sai Rajeswar", "Alessandro Sordoni", "Philip Bachman", "Aaron Courville" ], "title": "Augmented cyclegan: Learning many-to-many mappings from unpaired data", "venue": "In Int. Conf. Machine Learning,", "year": 2018 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Jonas Wulff", "William Peebles", "Hendrik Strobelt", "Bolei Zhou", "Antonio Torralba" ], "title": "Seeing what a gan cannot generate", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2019 }, { "authors": [ "David Bau", "Hendrik Strobelt", "William Peebles", "Bolei Zhou", "Jun-Yan Zhu", "Antonio Torralba" ], "title": "Semantic photo manipulation with a generative image prior", "venue": "ACM Trans. Graph.,", "year": 2020 }, { "authors": [ "Michel Besserve", "Arash Mehrjou", "Rémy Sun", "Bernhard Schölkopf" ], "title": "Counterfactuals uncover the modular structure of deep generative models", "venue": "In Int. Conf. Learn. Represent.,", "year": 2018 }, { "authors": [ "Irving Biederman" ], "title": "Recognition-by-components: a theory of human image understanding", "venue": "Psychological review,", "year": 1987 }, { "authors": [ "Ashish Bora", "Ajil Jalal", "Eric Price", "Alexandros G Dimakis" ], "title": "Compressed sensing using generative models", "venue": "In Int. Conf. Machine Learning,", "year": 2017 }, { "authors": [ "Peter Burt", "Edward Adelson" ], "title": "The laplacian pyramid as a compact image code", "venue": "IEEE Transactions on communications,", "year": 1983 }, { "authors": [ "Edo Collins", "Raja Bala", "Bob Price", "Sabine Süsstrunk" ], "title": "Editing in style: Uncovering the local semantics of gans", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2020 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "In Adv. Neural Inform. Process. Syst.,", "year": 2019 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "Int. Conf. Learn. Represent.,", "year": 2017 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Olivier Mastropietro", "Alex Lamb", "Martin Arjovsky", "Aaron Courville" ], "title": "Adversarially learned inference", "venue": "In Int. Conf. Learn. Represent.,", "year": 2016 }, { "authors": [ "Lore Goetschalckx", "Alex Andonian", "Aude Oliva", "Phillip Isola" ], "title": "Ganalyze: Toward visual definitions of cognitive image properties", "venue": null, "year": 2019 }, { "authors": [ "Jinjin Gu", "Yujun Shen", "Bolei Zhou" ], "title": "Image processing using multi-code gan prior", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2019 }, { "authors": [ "Shuyang Gu", "Jianmin Bao", "Hao Yang", "Dong Chen", "Fang Wen", "Lu Yuan" ], "title": "Mask-guided portrait editing with conditional gans", "venue": "In IEEE Conf. Comput. Vis. 
Pattern Recog.,", "year": 2019 }, { "authors": [ "Shanyan Guan", "Ying Tai", "Bingbing Ni", "Feida Zhu", "Feiyue Huang", "Xiaokang Yang" ], "title": "Collaborative learning for faster stylegan embedding", "venue": "arXiv preprint arXiv:2007.01758,", "year": 2020 }, { "authors": [ "James Hays", "Alexei A Efros" ], "title": "Scene completion using millions of photographs", "venue": "ACM Transactions on Graphics (TOG),", "year": 2007 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2016 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Adv. Neural Inform. Process", "year": 2017 }, { "authors": [ "Minyoung Huh", "Richard Zhang", "Jun-Yan Zhu", "Sylvain Paris", "Aaron Hertzmann" ], "title": "Transforming and projecting images into class-conditional generative networks", "venue": "In Eur. Conf. Comput. Vis.,", "year": 2020 }, { "authors": [ "Satoshi Iizuka", "Edgar Simo-Serra", "Hiroshi Ishikawa" ], "title": "Globally and locally consistent image completion", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Phillip Isola", "Ce Liu" ], "title": "Scene collaging: Analysis and synthesis of natural images with semantic layers", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2013 }, { "authors": [ "Ali Jahanian", "Lucy Chai", "Phillip Isola" ], "title": "On the”steerability” of generative adversarial networks", "venue": "In Int. Conf. Learn. Represent.,", "year": 2020 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In Int. Conf. Learn. Represent.,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Adv. Neural Inform. Process. Syst.,", "year": 2018 }, { "authors": [ "Adam Kortylewski", "Ju He", "Qing Liu", "Alan L Yuille" ], "title": "Compositional convolutional neural networks: A deep architecture with innate robustness to partial occlusion", "venue": "In IEEE Conf. Comput. Vis. 
Pattern Recog.,", "year": 2020 }, { "authors": [ "Dong C Liu", "Jorge Nocedal" ], "title": "On the limited memory bfgs method for large scale optimization", "venue": "Mathematical programming,", "year": 1989 }, { "authors": [ "Nian Liu", "Junwei Han", "Ming-Hsuan Yang" ], "title": "Picanet: Learning pixel-wise contextual attention for saliency detection", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2018 }, { "authors": [ "Ron Mokady", "Sagie Benaim", "Lior Wolf", "Amit Bermano" ], "title": "Mask based unsupervised content transfer", "venue": "In Int. Conf. Learn. Represent.,", "year": 2019 }, { "authors": [ "Muhammad Ferjad Naeem", "Seong Joon Oh", "Youngjung Uh", "Yunjey Choi", "Jaejun Yoo" ], "title": "Reliable fidelity and diversity metrics for generative models", "venue": "In Int. Conf. Machine Learning,", "year": 2020 }, { "authors": [ "Xingang Pan", "Xiaohang Zhan", "Bo Dai", "Dahua Lin", "Chen Change Loy", "Ping Luo" ], "title": "Exploiting deep generative prior for versatile image restoration and manipulation", "venue": null, "year": 2020 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2016 }, { "authors": [ "Patrick Pérez", "Michel Gangnet", "Andrew Blake" ], "title": "Poisson image editing", "venue": "In ACM SIGGRAPH", "year": 2003 }, { "authors": [ "Stanislav Pidhorskyi", "Donald A Adjeroh", "Gianfranco Doretto" ], "title": "Adversarial latent autoencoders", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2020 }, { "authors": [ "Marcos Pividori", "Guillermo L Grinblat", "Lucas C Uzal" ], "title": "Exploiting gan internal capacity for high-quality reconstruction of natural images", "venue": null, "year": 1911 }, { "authors": [ "Ori Press", "Tomer Galanti", "Sagie Benaim", "Lior Wolf" ], "title": "Emerging disentanglement in auto-encoder based unsupervised image content transfer", "venue": "In Int. Conf. Learn. Represent.,", "year": 2020 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Elad Richardson", "Yuval Alaluf", "Or Patashnik", "Yotam Nitzan", "Yaniv Azar", "Stav Shapiro", "Daniel Cohen-Or" ], "title": "Encoding in style: a stylegan encoder for image-to-image translation", "venue": null, "year": 2008 }, { "authors": [ "Yujun Shen", "Jinjin Gu", "Xiaoou Tang", "Bolei Zhou" ], "title": "Interpreting the latent space of gans for semantic face editing", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2019 }, { "authors": [ "Assaf Shocher", "Yossi Gandelsman", "Inbar Mosseri", "Michal Yarom", "Michal Irani", "William T Freeman", "Tali Dekel" ], "title": "Semantic pyramid for image generation", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In Int. Conf. Learn. 
Represent.,", "year": 2015 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Ryohei Suzuki", "Masanori Koyama", "Takeru Miyato", "Taizan Yonetsuji", "Huachun Zhu" ], "title": "Spatially controllable image synthesis with internal representation collaging", "venue": "arXiv preprint arXiv:1811.10153,", "year": 2018 }, { "authors": [ "Domen Tabernik", "Matej Kristan", "Jeremy L Wyatt", "Aleš Leonardis" ], "title": "Towards deep compositional networks", "venue": "In Int. Conf. Pattern Recog.,", "year": 2016 }, { "authors": [ "Ayush Tewari", "Michael Zollhofer", "Hyeongwoo Kim", "Pablo Garrido", "Florian Bernard", "Patrick Perez", "Christian Theobalt" ], "title": "Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog. Worksh.,", "year": 2017 }, { "authors": [ "Ayush Tewari", "Mohamed Elgharib", "Gaurav Bharaj", "Florian Bernard", "Hans-Peter Seidel", "Patrick Pérez", "Michael Zollhofer", "Christian Theobalt" ], "title": "Stylerig: Rigging stylegan for 3d control over portrait images", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2020 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2018 }, { "authors": [ "Jonas Wulff", "Antonio Torralba" ], "title": "Improving inversion and generation diversity in stylegan using a gaussianized latent space", "venue": "arXiv preprint arXiv:2009.06529,", "year": 2020 }, { "authors": [ "Tete Xiao", "Yingcheng Liu", "Bolei Zhou", "Yuning Jiang", "Jian Sun" ], "title": "Unified perceptual parsing for scene understanding", "venue": "In Eur. Conf", "year": 2018 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S Huang" ], "title": "Generative image inpainting with contextual attention", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2018 }, { "authors": [ "Yu Zeng", "Zhe Lin", "Jimei Yang", "Jianming Zhang", "Eli Shechtman", "Huchuan Lu" ], "title": "High-resolution image inpainting with iterative confidence feedback and guided upsampling", "venue": null, "year": 2020 }, { "authors": [ "Richard Zhang", "Jun-Yan Zhu", "Phillip Isola", "Xinyang Geng", "Angela S Lin", "Tianhe Yu", "Alexei A Efros" ], "title": "Real-time user-guided image colorization with learned deep priors", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In IEEE Conf. Comput. Vis. Pattern Recog.,", "year": 2018 }, { "authors": [ "Jiapeng Zhu", "Yujun Shen", "Deli Zhao", "Bolei Zhou" ], "title": "In-domain gan inversion for real image editing", "venue": "In Eur. Conf. Comput. Vis.,", "year": 2020 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A Efros" ], "title": "Generative visual manipulation on the natural image manifold", "venue": "In Eur. Conf", "year": 2016 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Natural scenes are comprised of disparate parts and objects that humans can easily segment and interchange (Biederman, 1987). Recently, unconditional generative adversarial networks (Karras et al., 2017; 2019b;a; Radford et al., 2015) have become capable of mimicking the complexity of natural images by learning a mapping from a latent space noise distribution to the image manifold. But how does this seemingly unstructured latent space produce a strikingly realistic and structured\n1Dome image from: https://www.technologyreview.com/2019/10/24/132370/mit-dome/\nscene? Here, we use a latent regressor to probe the latent space of a pretrained GAN, allowing us to uncover and manipulate the concepts that GANs learn about the world in an unsupervised manner.\nFor example, given a church image, is it possible to swap one foreground tree for another one? Given only parts of the building, can the missing portion be realistically filled? To achieve these modifications, the generator must be compositional, i.e., understanding discrete and separate representations of objects. We show that the pretrained generator – without any additional interventions – already represents these compositional properties in its latent code. Furthermore, these properties can be manipulated using a regression network that predicts the latent code of a given image. The pixels of this image then provide us with an intuitive interface to control and modify the latent code. Given the modified latent code, the network then applies image priors learned from the dataset, ensuring that the output is always a coherent scene regardless of inconsistencies in the input (Fig. 1).\nOur approach is simple – given a fixed pretrained generator, we train a regressor network to predict the latent code from an input image, while adding a masking modification to learn to handle missing pixels. To investigate the GAN’s ability to produce a globally coherent version of a scene, we hand the regressor a rough, incoherent template of the scene we desire, and use the two networks to convert it into a realistic image. Even though our regressor is never trained on these unrealistic templates, it projects the given image into a reasonable part of the latent space, which the generator maps onto the image manifold. This approach requires no labels or clustering of attributes; all we need is a single example of approximately how we want the generated image to look. It only requires a forward pass of the regressor and generator, so there is low latency in obtaining the output image, unlike iterative optimization approaches that can require upwards of a minute to reconstruct an image.\nWe use the regressor to investigate the compositional capabilities of pretrained GANs across different datasets. Using input images composed of different image parts (“collages”), we leverage the generator to recombine this unrealistic content into a coherent image. This requires solving three tasks simultaneously – blending, alignment, and inpainting. We then investigate the GAN’s ability to independently vary localized portions of a given image. 
In summary, our contributions are:\n• We propose a latent regression model that learns to perform image reconstruction even in the case of incomplete images and missing pixels and show that the combination of regressor and generator forms a strong image prior.\n• Using the learned regressor, we show that the representation of the generator is already compositional in the latent code, without having to resort to intermediate layer activations.\n• There is no use of labelled attributes nor test-time optimization, so we can edit images based on a single example of the desired modification and reconstruct in real-time.\n• We use the regressor to probe what parts of a scene can vary independently, and investigate the difference between image mixing using the encoder and interpolation in latent space.\n• The same regressor setup can be used for a variety of other image editing applications, such as multimodal editing, scene completion, or dataset rebalancing." }, { "heading": "2 RELATED WORK", "text": "Image Inversion. Given a target image, the GAN inversion problem aims to recover a latent code which best generates the target. Image inversion comes with a number of challenges, including 1) a complex optimization landscape and 2) the generator’s inability to reconstruct out-of-domain images. To relax the domain limitations of the generator, one possibility is to invert to a more flexible intermediate latent space (Abdal et al., 2019), but this may allow the generator to become overly flexible and requires regularizers to ensure that the recovered latent code does not deviate too far from the latent manifold (Pividori et al., 2019; Zhu et al., 2020; Wulff & Torralba, 2020). An alternative to increasing the flexibility of the generator is to learn an ensemble of latent codes that approximate a target image when combined (Gu et al., 2019a). Due to challenging optimization, the quality of inversion depends on good initialization. A number of approaches use a hybrid of a latent regression network to provide an initial guess of the latent code with subsequent optimization of the latent code (Bau et al., 2019; Guan et al., 2020) or the generator weights (Zhu et al., 2016; Bau et al., 2020; Pan et al., 2020), while Huh et al. (2020) investigates gradient-free approaches for optimization. Besides inverting whole images, a different use case of image inversion through a generator is to complete\npartial scenes. When using optimization, this is achieved by only measuring the reconstruction loss on the known pixels (Bora et al., 2017; Gu et al., 2019a; Abdal et al., 2020), whereas in feed-forward methods, the missing region must be provided explicitly to the model. Rather than inverting to the latent code of a pretrained generator, one can train the generator and encoder jointly, based on modifications to the Variational Autoencoder (Kingma & Welling, 2013). Donahue et al. (2017); Donahue & Simonyan (2019); Dumoulin et al. (2016) use this setup to investigate the properties of latent representations learned during training, while Pidhorskyi et al. (2020) demonstrate a joint learning method that can achieve comparable image quality to recent GAN models. In our work, we investigate the emergent priors of a pretrained GAN using a masked latent regression network as an approximate image inverter. 
While such a regressor has lower reconstruction accuracy than optimization-based techniques, its lower latency allows us to investigate the learned priors in a computationally efficient way and makes real-time image editing incorporating such priors possible.\nComposition in Image Domains. To join segments of disparate image sources into one cohesive output, early works use hand-designed features, such as Laplacian pyramids for seamless blending (Burt & Adelson, 1983). Hays & Efros (2007) and Isola & Liu (2013) employ nearest-neighbor approaches for scene composition and completion. More recently, a number of deep network architectures have been developed for compositional tasks. For discriminative tasks, Tabernik et al. (2016) and Kortylewski et al. (2020) train CNNs with modified compositional architectures to understand model interpretability and reason about object occlusion in classification. For image synthesis, Mokady et al. (2019) and Press et al. (2020) use an autoencoder to encode, disentangle, and swap properties between two sets of images, while Shocher et al. (2020) mixes images in deep feature space while training the generator. Rather than creating models specifically for image composition or scene completion objectives, we investigate the ability of a pre-trained GAN to mix-and-match parts of its generated images. Related to our work, Besserve et al. (2018) estimates the modular structure of GANs by learning a casual model of latent representations, whereas we investigate the GAN’s compositional properties using image inversion. Due to the imprecise nature of image collages, compositing image parts also involves inpainting misaligned regions. However, in contrast to inpainting, in which regions have to be filled in otherwise globally consistent images (Pathak et al., 2016; Iizuka et al., 2017; Yu et al., 2018; Zeng et al., 2020), the composition problem involves correcting inconsistencies as well as filling in missing pixels.\nImage Editing. A recent topic of interest is editing images using generative models. A number of works propose linear attribute vector editing directions to perform image manipulation operations (Goetschalckx et al., 2019; Jahanian et al., 2020; Shen et al., 2019; Kingma & Dhariwal, 2018; Karras et al., 2019a; Radford et al., 2015). It is also possible to identify concepts learned in the generator’s intermediate layers by clustering intermediate representations, either using segmentation labels (Bau et al., 2018) or unsupervised clustering (Collins et al., 2020), and change these representations to edit the desired concepts in the output image. Suzuki et al. (2018) use a spatial feature blending approach which mixes properties of target images in the intermediate feature space of a generator. On faces, editing can be achieved using a 3D parametric model to supervise the modification (Tewari et al., 2017; 2020). In our work, we do not require clusters or concepts in intermediate layers to be defined a priori, nor do we need distinct input and output domains for approximate collages and real images, as in image translation tasks (Zhu et al., 2017; Almahairi et al., 2018). Unlike image manipulation using semantic maps (Park et al., 2019; Gu et al., 2019b), our approach respects the style of the manipulation (e.g. the specific color of the sky), which is lost in the semantic map representation. Our method shares commonalities with Richardson et al. (2020), although we focus on investigating compositional properties rather than image-to-image translation. 
In our approach, we only require a single example of the approximate target property we want to modify and use regression into the latent space as a fast image prior to create a coherent output. This allows us to create edits that are not contingent on labelled concepts, and we do not need to modify or train the generator." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 LATENT CODE RECOVERY IN GANS", "text": "GANs provide a mapping from a predetermined input distribution to a complex output distribution, e.g. from a standard normal Z to the image manifold X , but they are not easily invertible. In other\nwords, given an image sample from the output distribution, it is not trivial to recover the sample from the input distribution that generated it. The image inversion objective aims to find the latent code z of GAN G that best recovers the desired target image x:\nz∗ = argmin z (dist(G(z), x)), (1)\nusing some metric of image distance dist, such as pixel-wise L1 error or a metric based on deep features. This objective can be solved iteratively, using L-BFGS (Liu & Nocedal, 1989) or other optimizers. However, iterative optimization is slow – it takes a large number of iterations to converge, is prone to local minima, and must be performed for each target image x independently.\nAn alternative way of recovering the latent code z is to train a neural network to directly predict it from a given image x. In this case, the recovered latent code is simply the result of a feed-forward pass through a trained regressor network, z∗ = E(x), where E can be used for any x ∈ X . To train the regressor (or encoder) network E, we use the latent encoder loss\nL = Ez∼N(0,1), x=G(z) [ ||x−G(E(x))||22 + Lp(x,G(E(x))) + Lz(z, E(x)) ] . (2)\nWe sample z randomly from the latent distribution, and pass it through a pretrained generator G to obtain the target image x = G(z). Between the target image x and the recovered image G(E(x)), we use a mean square error loss to guide reconstruction and a perceptual loss Lp (Zhang et al., 2018) to recover details. Between the original latent code z and the recovered latent code E(x), we use a latent recovery loss Lz . We use mean square error or a variant of cosine similarity for latent recovery, depending on the GAN’s input normalization. Additional details can be found in Supp. Sec. A.1.1.\nThroughout this paper the generators are frozen, and we only optimize the weights of the encoder E. When using ProGAN (Karras et al., 2017), we train the encoder network to directly invert to the latent code z. For StyleGAN (Karras et al., 2019b), we encode to an expandedW+ latent space (Abdal et al., 2019). Once trained, the output of the latent regressor yields a latent code such that the reconstructed image looks perceptually similar to the target image." }, { "heading": "3.2 LEARNING WITH MISSING DATA", "text": "When investigating the effect of localized parts of the input images, we might want to treat some image regions explicitly as “unknown”, either to create buffer zones to avoid seams between different pasted parts or to explicitly let the image prior fill in unknown regions. In optimization approaches using Eqn. 1, this can be handled by optimizing only over the known pixels. However, a regressor network cannot handle this naively – it cannot distinguish between unknown pixels and known pixels, and will try to fit the values of the unknown pixels. 
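To make the objective of Eqn. 2 concrete, the following is a minimal sketch of a single encoder update, assuming a frozen pretrained `generator`, a trainable `encoder`, an optimizer over the encoder parameters only, and the mean-square-error variant of the latent recovery loss (the cosine variant used for ProGAN is analogous); the helper name and the use of the `lpips` package for the perceptual term are illustrative, not the paper's released training code.

```python
import torch
import torch.nn.functional as F
import lpips  # perceptual loss of Zhang et al. (2018); pip install lpips

percep = lpips.LPIPS(net='vgg')  # the encoders are trained with the VGG variant

def encoder_step(encoder, generator, optimizer, batch_size, latent_dim):
    """One optimization step of the latent encoder loss (Eqn. 2)."""
    z = torch.randn(batch_size, latent_dim)       # sample latent codes z ~ N(0, 1)
    with torch.no_grad():
        target = generator(z)                     # training targets x = G(z)
    z_hat = encoder(target)                       # predicted latent code E(x)
    recon = generator(z_hat)                      # reconstruction G(E(x)); G stays frozen

    loss = (F.mse_loss(recon, target)             # pixel-wise reconstruction term
            + percep(recon, target).mean()        # perceptual term L_p
            + F.mse_loss(z_hat, z))               # latent recovery term L_z

    optimizer.zero_grad()
    loss.backward()                               # gradients flow through the frozen generator,
    optimizer.step()                              # but only encoder weights are updated
    return loss.item()
```

As written, this regressor treats every input pixel as observed, so it has no way to distinguish known pixels from unknown ones.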
This can be mitigated with a small modification to the regression network, by indicating which pixels are known versus unknown as input (Fig. 3):\nL = Ez∼N(0,1), x=G(z)||x−G(E(xm,m))||22 + Lp(x,G(E(xm,m))) + Lz(z, E(xm,m)) (3)\nInstead of taking an image x as input, the encoder takes a masked image xm and a mask m, where xm = x⊗m, andm is an additional channel of input. Intuitively, this masking operation is analogous to “dropout” (Srivastava et al., 2014) on pixels – it encourages the encoder to learn a flexible way of recovering a latent code that still allows the generator to reconstruct the image. Thus, given only partial images as input, the encoder is encouraged to map from the known pixels to a latent code that is semantically consistent with the rest of the image. This allows the generator to reproduce an image that is both likely under its prior and consistent with the observed region.\nTo obtain the masked image during training we take a small patch of random uniform noise u, upsample this noise to the full resolution using bilinear interpolation, and mask out all pixels where the upsampled noise is smaller than a sampled threshold t ∼ U(0, 1) to simulate arbitrarily shaped mask boundaries. However, at test time, the exact form of the mask does not matter – the mask simply indicates where the generator should try to reconstruct or inpaint, and does not distinguish between the different image parts of the input. We provide additional details in Supp. Sec. A.1.1 and A.2.3.\nThe regressor and generator pair enforces global coherence: when we obscure or modify parts of the input, the generator will create an output that is still overall consistent. By masking out arbitrary parts of the image (Eqn. 3), we allow the GAN to imagine a realistic completion of the missing pixels, which can vary based on the given context (Fig. 2). This suggests that the regressor inherently learns an unsupervised object representation, allowing it to complete objects from only partial hints even though the generator and regressor are never provided with structured concept labels during training." }, { "heading": "3.3 IMAGE COMPOSITION USING LATENT REGRESSION", "text": "The regressor E and generator G form an image prior to project any input image xinput onto the manifold of generated imagesX , even if xinput /∈ X . We leverage this to investigate the compositional properties of the latent code. We extract parts of images (either generated by G or from real images), and combine them to form a collaged image xclg. This extraction process does not need to be precise and can have obvious seams and missing pixels. At the same time, while xclg is often not realistic, our encoder is aware of these missing pixels and can properly process them, as described in Sec. 3.2. We can therefore use the E and G to blend the seams and produce a realistic composite output. To create xclg, we sample base images xi and masks maski, and combine them via union; once we have formed the collage xclg, we reproject via the regressor and generator to obtain the composite xrec:\nxclg = ⋃ i maski ⊗ xi;\nxrec = G(E(xclg,∪imaski)). (4)\nNote that each maski used to extract individual image parts in Eqn. 4 is not available to the encoder, only the union is provided in the form of a binary mask. Also, the regressor is trained solely for the latent recovery objective (Eqn. 3) and has never seen collaged images during training. 
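A hedged sketch of the composition step of Eqn. 4 is given below; the names are illustrative rather than the repository's actual API, and overlapping parts are resolved simply by pasting later parts on top of earlier ones, following the layer orderings described in Sec. A.1.2.

```python
import torch

def compose(encoder, generator, parts):
    """Recombine extracted image parts into a coherent composite (Eqn. 4).

    `parts` is a back-to-front list of (image, mask) pairs, with images of shape
    (1, 3, H, W) and binary masks of shape (1, 1, H, W) selecting the region kept
    from each image; only the union of the masks is handed to the regressor.
    """
    collage = torch.zeros_like(parts[0][0])
    union = torch.zeros_like(parts[0][1])
    for image, mask in parts:                         # overlay parts back-to-front
        collage = mask * image + (1 - mask) * collage
        union = torch.clamp(union + mask, max=1)      # union of the individual part masks
    with torch.no_grad():
        latent = encoder(collage * union, union)      # the encoder never sees the per-part masks
        composite = generator(latent)                 # blended, aligned, and inpainted output
    return composite, collage
```

Pixels outside the union mask are left for the generator to inpaint.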
To automate the process of extracting masked images, we use a pretrained segmentation network (Xiao et al., 2018) and sample from the output classes (see Supp. Sec. A.1.2). However, the masked regressor is agnostic to how image parts are extracted; we also experiment with a saliency network (Liu et al., 2018), approximate rectangles, and user-defined masks in Supp. Sec. A.2.1 and A.2.4." }, { "heading": "4 EXPERIMENTS", "text": "Using pre-trained Progressive GAN (Karras et al., 2017) and StyleGAN2 (Karras et al., 2019b) generators, we conduct experiments on CelebA-HQ and FFHQ faces and LSUN cars, churches, living rooms, and horses to investigate the compositional properties that GANs learn from data." }, { "heading": "4.1 IMAGE COMPOSITION FROM APPROXIMATE COLLAGES", "text": "The masked regressor and the pretrained GAN form an image prior which converts unrealistic input images into realistic scenes while maintaining properties of the input image. We use this property to investigate the ability of GANs to recombine synthesized collages; i.e., to join parts of different input images into a coherent composite output image. The goal of a truly “compositional” GAN would be to both preserve the input parts and unify them realistically in the output. As we do not have ground-truth composite images, we create them automatically using randomly extracted image parts. The regressor and generator must then simultaneously 1) blend inconsistencies between disparate image parts 2) correct misalignments between the parts and 3) inpaint missing regions, balancing global coherence of the output image with its similarity to the input collage.\nUsing extracted and combined image parts (Eqn. 4), we show qualitative examples of these input collages and the corresponding generated composite across a variety of domains (Fig. 5); note that the inputs are not realistic, often with imperfect detections and misalignments. However, the learned image prior from the generator and encoder fixes these inconsistencies to create realistic outputs.\nTo measure the tradeoff between the networks’ ability to preserve the input and the realism of the composite image, we compute masked L1 distance as a metric of reconstruction (lower is better)\navg(m⊗ |x−G(E(x,m))|) (5)\nand FID score (Heusel et al., 2017) over 50k samples as a metric of image quality (lower is better). To isolate the realism of the composite image from the regressor network’s native reconstruction capability (i.e. the ability to recreate a single image generated by G), we compare the difference in FID between the reconstructed composites (xrec in Eqn. 4), and re-encoded images G(E(G(z)). In Fig. 4, we plot these two metrics for both ProGAN and StyleGAN across various dataset domains. Here, an ideal composition would attain zero L1 error (perfect reconstruction of the input) and zero FID increase (preserves realism), but this is impossible, hence the generators demonstrate a balance of these two ideals along a Pareto front. We show full results on FID, density & coverage (Naeem et al., 2020), and L1 reconstruction error and additional random samples in Supp. Sec. A.2.4." }, { "heading": "4.2 COMPARING COMPOSITIONAL PROPERTIES ACROSS ARCHITECTURES", "text": "Given approximate and unrealistic input collages, the combination of regressor and generator imposes a strong image prior, thereby correcting the output so that it becomes realistic. How much does the\npretrained GAN and the regression network each contribute to this outcome? 
Here, we investigate a number of different image reconstruction methods spanning three major categories: autoencoder architectures without a pretrained GAN, optimization-based methods of recovering a GAN latent code without an encoder, and encoder-based methods paired with a pretrained GAN. For comparison, we use the same set of collages to compare the methods, generated from parts of random real images of the church and face domains. As some methods take several minutes to reconstruct a single image, we use 200 collages for each domain. Due to the smaller sample size, we use density here as a measure of realism (higher is better), which measures proximity to the real-image manifold (Naeem et al., 2020) and compare to L1 reconstruction (Eqn. 5); a perfect composite has high density and low L1. We report additional metrics in Tab. 4-5.\nFor the church domain, we first compare to autoencoding methods that train the generator and encoder portions jointly: DIP (Ulyanov et al., 2018), Inpainting (Yu et al., 2018), CycleGAN (Zhu et al., 2017), and SPADE (Park et al., 2019). For iterative optimization methods using only the pretrained generator, we compare direct LBFGS optimization of the latent code (Liu & Nocedal, 1989), Multi-Code Prior (Gu et al., 2019a), and StyleGAN projection (Karras et al., 2019a). Third, we use our regressor network to directly predict the latent code in a feed-forward pass (Encode), and additionally further optimize the predicted latent to decrease reconstruction error (Enc+LBFGS). We provide additional details on each method in Supplementary Sec. A.2.4. Qualitatively, the different methods have varying degrees of realism when trying to reconstruct unrealistic input collages (we show examples in Supp. Fig. 18); optimization-based methods such as Deep Image Prior, Multi-Code Prior, and StyleGAN projection tend to overfit and lead to unrealistic reconstructions with low density, whereas segmentation-based methods such as SPADE are not trained to reconstruct the input, leading to high L1. Our StyleGAN encoder yields the most realistic composites with highest density, at the cost of distorting the unrealistic inputs. Fig 6-(left) illustrates this compositional tradeoff, where the encoder based methods perform slightly worse in L1 reconstruction compared to optimization approaches, but maintain more realistic output and can reconstruct with lower computational time.\nOn the face domain, we compare the realism/reconstruction tradeoff of composite outputs of optimization-based Im2StyleGAN (Abdal et al., 2019), Inpainting (Yu et al., 2018), autoencoder ALAE (Pidhorskyi et al., 2020), and different regression networks including In-Domain Inversion (Zhu et al., 2020), Pixel2Style2Pixel (Richardson et al., 2020), and our regressor networks. We show qualitative examples in Supplementary Fig. 19 and a comparison of reconstruction L1 and density in Fig. 6-(right): our ProGAN and StyleGAN masked encoders can maintain closer proximity to the real image manifold (higher density) compared to the alternative methods, with much faster inference time compared to optimization-based methods such as Im2StyleGAN. On these same inputs, ALAE exhibits interesting compositional properties and is qualitatively able to correct misalignments in face patches, but the density of generated images is lower than that of the pretrained GANs. 
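For reference, a hedged sketch of the hybrid Enc+LBFGS baseline is shown below; it initializes the latent code with the regressor and then refines it with PyTorch's LBFGS optimizer under a masked reconstruction loss (reduced here to a masked pixel term for brevity; the exact loss weighting used in the comparison may differ).

```python
import torch

def encode_then_lbfgs(encoder, generator, image, mask, steps=50):
    """Hybrid inversion: regressor prediction as initialization, masked LBFGS refinement."""
    with torch.no_grad():
        latent = encoder(image * mask, mask)           # feed-forward initialization
    latent = latent.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([latent], max_iter=steps)

    def closure():
        opt.zero_grad()
        recon = generator(latent)
        loss = ((mask * (recon - image)) ** 2).mean()  # reconstruct only the valid region
        loss.backward()
        return loss

    opt.step(closure)                                  # runs up to `steps` LBFGS iterations
    with torch.no_grad():
        return generator(latent), latent
```

The feed-forward Encode baselines simply skip the refinement loop.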
Again, no method can achieve both reconstruction and realism perfectly due to the imprecise nature of the input, and each method demonstrates different balances between the two factors." }, { "heading": "4.3 HOW DOES COMPOSITION DIFFER FROM INTERPOLATION?", "text": "Combining images in pixel space and using the encoder bottleneck to rectify the input is only one way that a generator can mix different images. Another commonly used method is to simply use a\nlinear interpolation between two latent codes to obtain an output that has properties of both input images. Here, we investigate how these two approaches differ. When composing parts of two images, we desire that part of a context image stays the same while the remaining portion changes to match a chosen target: to achieve this composition, we select the desired pixels from our context image and the target modification, and pass the result through the encoder to obtain the blended image.\nG(E(m1 ⊗ x1 +m2 ⊗ x2)) / composition (6)\nHow does this compare to directly interpolating in latent space? We compute the latent α-blend by performing a weighted average of the context and target latent codes:\nG(α ∗ E(x1) + (1− α) ∗ E(x2)) / latent α-blend (7)\nand the pixel α-blend by blending inputs in pixel space and using the encoder bottleneck to make the output more realistic:\nG(E(α ∗ x1 + (1− α) ∗ x2)). / pixel α-blend (8)\nWe select the weight α to be proportional to the area of the target modification. Qualitatively, the composition method is better able to change the target region while keeping the context region fixed, e.g., add white windows while reducing changes in the fireplace or couch in Fig. 7, whereas the other two α-blending methods are less faithful in preserving image content. To quantify this effect, we compute the masked L1 distance (Eqn. 5) of the interpolated images to (1) the original masked context region, (2) to the original masked target region, and (3) to the input collage over 500 random samples. The composition method obtains lower distance from the target and context, and is also closer to the desired collage. Unlike interpolations using attribute vectors, composition manipulations do not need to be learned from data - they are based on a single context and target image, and also allow multiple possible directions of manipulation. We show additional interpolations and comparison to learned attribute vectors in Supp. Sec. A.2.2, and additional applications such as dataset rebalancing and one-shot image editing in Supp. Sec. A.2.1." }, { "heading": "4.4 USING REGRESSION TO INVESTIGATE INDEPENDENCE OF IMAGE COMPONENTS", "text": "Given these unsupervised objected representations, we seek to investigate the independence of individual components by minimizing “leakage” of the desired edits. For example, if we change a facial expression from a frown to a smile, the hair style should not change. We quantify the independence of parts under the global coherence constraint imposed by the regressor and generator pair by first parsing a given face image x into separate semantic components (such as eyes, nose, etc.) represented as masks mc. For each component, we generate N new faces xn, and replace the component region mc in x by the corresponding region in xn, yielding (after regression and generation) for each component c a set of N images xc,n = G(E(mc ⊗ xn + (1−mc)⊗ x)). 
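A minimal sketch of this part-swapping construction is given below, with hypothetical names; since the swapped input has no missing pixels, the masked regressor simply receives an all-ones mask.

```python
import torch

def part_variation(encoder, generator, x, component_mask, donors):
    """Regenerate N part-swapped versions of `x` and measure the induced variation.

    `x` is the context image (1, 3, H, W), `component_mask` is the binary mask m_c of the
    component being replaced (1, 1, H, W), and `donors` holds the N sampled replacement
    images x_n as a tensor of shape (N, 3, H, W).
    """
    m = component_mask
    swapped = m * donors + (1 - m) * x                 # m_c ⊗ x_n + (1 − m_c) ⊗ x
    valid = torch.ones_like(swapped[:, :1])            # no missing pixels to inpaint
    with torch.no_grad():
        outputs = generator(encoder(swapped, valid))   # x_{c,n} = G(E(·))
    sigma_c = outputs.std(dim=0)                       # pixel-wise std across the N swaps
    return outputs, sigma_c
```

The per-pixel standard deviation returned here is the σc that is normalized and summarized next.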
We can now measure how much changing component c changes each pixel location by computing the normalized pixel-wise standard deviation of xc,n across the n different replacements as vc = σc/ ∑ c σc, where\nσc = √ En[(xn,c − En[xn,c])2]. For a given component c, we measure independence as the average\nvariation outside of c that results from modifying c as sc = E[(1 −mc) ⊗ vc] (a lower sc means higher independence). We repeat this experiment 100 times and use N = 20 samples.\nTable 1 shows the variations of ProGAN and StyleGAN. StyleGAN better separates the background from the face and reduces leakage when changing the hair; for the face parts, leakage is small for both networks. A notable exception is the “skin” area, which for StyleGAN is more globally entangled. This might be because StyleGAN is generally better able to reason about global effects such as illumination, which are strongly reflected in the appearance of the skin, yet have a global impact on the image. Figure 7(a) and (b) qualitatively show examples for the variation maps vc for different parts for ProGAN (a) and StyleGAN (b); the replaced part is marked in red. Lastly, this method can be utilized for unsupervised part discovery, as shown in Fig. 7(c). Here, changing the color of the rear door (top left) changes the appearance of the whole car body; a change of the tire (top right) is very localized, and the foreground (bottom left) and background (bottom right) are large parts varying together, but distinct from the car. More examples of part variations are shown in Supp. Sec. A.2.5." }, { "heading": "5 CONCLUSION", "text": "Using a simple latent space regression model, we investigate the compositional properties of pretrained GAN generators. We train a regressor model to predict the latent code given an input image with the additional objective of learning to complete a scene with missing pixels. With this regressor, we can probe various properties and biases that the GAN learns from data. We find that, in creating scenes, the GAN allows for local degrees of freedom but maintains an overall degree of global consistency; in particular, this compositional representation of local structures is already present at the level of the latent code. This allows us to input approximate templates to the regressor model, such as partially occluded scenes or collages extracted from parts of images, and use the regressor and generator as image prior to regenerate a realistic output. The latent regression approach allows us to investigate how the GAN enforces independence of image parts, while being trained without knowledge of objects or explicit attribute labels. It only requires a single forward pass on the models, which enables real time image editing based on single image examples." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank David Bau, Richard Zhang, Tongzhou Wang, and Luke Anderson for helpful discussions and feedback. Thanks to Antonio Torralba, Alyosha Efros, Richard Zhang, Jun-Yan Zhu, Wei-Chiu Ma, Minyoung Huh, and Yen-Chen Lin for permission to use their photographs in Fig. 23. LC is supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1745302. JW is supported by a grant from Intel Corp." 
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 SUPPLEMENTARY METHODS", "text": "" }, { "heading": "A.1.1 ADDITIONAL TRAINING DETAILS", "text": "The loss function of the encoder contains image loss terms to ensure that the output of the generator approximates the target image, and a latent recovery loss term to ensure that the predicted latent code matches the original latent code. On the image side, we use mean square error loss in conjunction with LPIPS perceptual loss (Zhang et al., 2018). The latent recovery loss depends on the type of GAN. Due to pixel normalization in ProGAN, we use a latent recovery loss based on cosine similarity, as the exact magnitude of the recovered latent code does not matter after normalization:\nLz = 1− z ||z||2 · E(x) ||E(x)||2 . (9)\nFor StyleGAN, we invert to an intermediate latent space, as it is known that in this space semantic properties are better disentangled than in Z (Karras et al., 2019a). Furthermore, allowing the latents to differ on different scales has been shown to better capture the variability of real images (Abdal et al., 2019). During training, we therefore generate different latents for different scales, and train the encoder to estimate different styles, i.e. estimate w ∈ W+. Unlike the latent space of ProGAN, however, w ∈ W+ is not normalized to the hypersphere. Instead of a cosine loss, we therefore use a mean square error loss as the latent recovery loss:\nLw = ||w − E(x)||2. (10)\nWe train the encoders using a ResNet backbone (ResNet-18 for ProGAN, and ResNet-34 for Stylegan; He et al. (2016)), modifying the output dimensionality to match the number of latent dimensions for each GAN. The encoders are trained with the Adam optimizer (Kingma & Ba, 2014) with learning rate lr = 0.0001. Training takes from two days to about a week on a single GPU, depending on the resolution of the GAN. For ProGAN encoders, we use batch size 16 for the 256 resolution generators, and train for 500K batches. For the 1024 resolution generator, we use batch size 4 and 400K batches. We train the StyleGAN encoders for 680k batches (256 and 512 resolution) or 580k batches (1024 resolution), and add identity loss (Richardson et al., 2020) with weight λ = 1.0 on the FFHQ encoder.\nWhen training with masks, we take a small 6x6 patch of random uniform noise u, upsample to the generator’s resolution, and sample a threshold t from the uniform distribution in range [0.3, 1.0] to create the binary mask:\nm = 1 [Upsample(u) > t]\nxm = m⊗ x (11)\nWe also experimented with masks comprised of random rectangular patches (Zhang et al., 2017), but obtained qualitatively similar results. At inference time, the exact shape of the mask does not matter: we can use hard coded rectangles, hand-drawn masks, or masks based on other pretrained networks. Note that the mask does not distinguish between input image parts – it is a binary mask with value 1 where the generator should try to reconstruct, and 0 where the generator should fill in the missing region." }, { "heading": "A.1.2 ADDITIONAL DETAILS ON COMPOSITION", "text": "When creating automated collages from image parts, we use a pretrained segmentation network (Xiao et al., 2018) to extract parts from randomly sampled individual images. We manually define a set of segmentation class for a given image class, and, to handle overlap between parts, specify an order to these segmentation classes. To generate a collage, we then sample one random image per class. 
For church scenes, we use an ordering of (from back to front) sky, building, tree, and foreground layers – this ensures that a randomly sampled building patch will appear in front of the randomly sampled sky patch. For living rooms, the ordering we use is floor, ceiling, wall, painting, window, fireplace, sofa, and coffee table – again ensuring that the more foreground elements are layered on top. For cars, we use sky, building, tree, foreground, and car. For faces we order by background, skin, eye, mouth, nose, and hair. In Sec. A.2.4 we investigate using other methods to extract patches from images, rather than image parts derived from segmentation classes." }, { "heading": "A.2 SUPPLEMENTARY RESULTS", "text": "" }, { "heading": "A.2.1 ADDITIONAL APPLICATIONS", "text": "In the main text, we primarily focus on using the regression network as a tool to understand how the generator composes scenes from missing regions and disparate image parts. However, because the regressor allows for fast inference, it enables a number of other real-time image synthesis applications.\nImage Completion From Masked Inputs. Using the masked latent space regressor in Eqn. 3, we investigate the GAN’s ability to automatically fill in unknown portions of a scene given incomplete context. These reconstructions are done on parts of real images to ensure that the regressor is not simply memorizing the input, as could be the case with a generated input (the regressor is never trained on real images). For example, when a headless horse is shown to the masked regressor, the rest of the horse can be filled in by the GAN. In contrast, a regressor that is unaware of missing pixels (Eqn. 2; RGB) is unable to produce a realistic output. We show qualitative examples in Fig. 9.\nMultimodal Editing. Because the regressor only requires a single example of the property we want to reconstruct, it is possible to achieve multimodal manipulations simply by overlaying different desired properties on a given context image. Here, we demonstrate an example of adding different styles of trees to a church scene. In each of the context images, there is originally no tree on the right-hand side. We can add different types of trees to each context image simply by replacing some pixels with tree pixels from a different image, and performing image inversion to create the composite. Here, we use a rectangular region as a mask, rather than following the boundary of the tree precisely. However, note that, after inversion, the color of the sky remains consistent in the composite image, and so does the building color in second tree example. This image editing approach does not require learning separate clusters or having labelled attributes, and therefore can be done with a single pair of images without known cluster definitions, unlike the methods of Bau et al. (2018) and Collins et al. (2020). Furthermore, unlike methods based on segmentation maps (Park et al., 2019; Gu et al., 2019b), styles within each individual semantic category, e.g. the color of the sky, is also changeable based on the provided input to the encoder.\nAttribute Editing With A Single Unlabeled Example. 
Typically in attribute editing using generative models, one requires a collection of labeled examples of the attribute to modify; for example, we take a collection of generated images with and without the attribute based on an attribute classifier, and find the average difference in their latent codes (Jahanian et al., 2020; Radford et al., 2015; Goetschalckx et al., 2019; Shen et al., 2019). Here, we demonstrate an approach towards attribute editing without using attribute labels (Fig. 11). We overlay a single example of our target attribute (e.g. a smiling mouth or a round church tower), on top of the context image we want to modify, and encode it to get the corresponding latent code. We then subtract the difference between the original and modified latent codes, and add it to the latent code of the secondary context image: z1,modified − z1 + z2. This bypasses the need for an attribute classifier, such as one that identifies round or square church towers.\nDataset Rebalancing. Because the latent regression inverter only requires a forward pass through the network, we can quickly generate a large number of reconstructions. Here, we use this property to investigate rebalancing a dataset (Fig. 12). Using pretrained attribute detectors from Karras et al. (2019a), we first note that there is a smaller fraction of smiling males than smiling females in CelebAHQ (first panel), although the detector itself is biased and tends to overestimate smiling in general\ncompared to ground truth labels (second panel). The detections in the GAN generated images mimic this imbalance (third panel). Next, using a fixed set of generated images, we randomly swap the mouth regions among them in accordance to our desired proportion of smiling rates – we give each face a batch of 16 possible swaps and taking the one that yields the strongest detection as smiling/not smiling based on our desired characteristic (if no swaps are successful, the face is just used as is). We use a hardcoded rectangular box region around the mouth, and encode the image through the generator to blend the modified mouth to the face. After performing swapping on generated images, this allows us rebalance the smiling rates, better equalizing smiling among males and females (fourth panel). Finally, we use the latent regression to perform mouth swapping on the dataset, and train a secondary GAN on this modified dataset – this also improves the smiling imbalance, although the effect is stronger in males than females (fifth panel). We note that a limitation of this method is that the rebalanced proportion uses the attribute detector as supervision, and therefore a biased or inaccurate attribute detector will still result in biases in the rebalanced dataset." }, { "heading": "A.2.2 COMPARING COMPOSITION WITH LATENT SPACE INTERPOLATION", "text": "In the main text, we compare our composition approach to two types of interpolations – latent α-blending and pixel α-blending – on a living room generator. Here, we show equivalent examples in church scenes. In Fig. 13, we demonstrate a “tree” edit, in which we want the background church to remain the same but trees to be added in the foreground. Compared to latent and pixel α-blending, the composition approach better preserves the church while adding trees, which we quantify using masked L1 distance. Similarly, we can change the sky of context scene – e.g. by turning a clear sky into a cloudy one in Fig. 
14, where again, using composition better preserves the details of the church as the sky changes, compared to α-blending.\nIn Fig 15, we show the result of changing the smile on a generated face image. For the smile attribute, we also compare to a learned smile edit vector using labelled images from a pretrained smile classifier. We additionally measure the the facial embedding distance using a pretrained face-identification network2 – the goal of the interpolation is to change the smile of the image while reducing changes to the rest of the face identity, and thus minimize embedding distance. Because the mouth region is small, choosing the interpolation weight α by the target area minimally changes the interpolated image (α > 0.99), so instead we use an α = 0.7 weight so that all methods have similar distance to the target. While the composition approach and applying learned attribute vector perform similarly, learning the attribute vector requires labelled examples, and cannot perform multimodal modifications such as applying different smiles to a given face." }, { "heading": "A.2.3 LOSS ABLATIONS", "text": "When training the regression network, we use a combination of three losses: pixel loss, perceptual loss (Zhang et al., 2018), and a latent recovery loss. In this section, we investigate the reconstruction result using different combinations of losses on the ProGAN church generator. In the first case, we do not enforce latent recovery, so the encoded latent code does not have to be close to the ground truth latent code but the encoder and generator just need to reconstruct the masked input image. In the second case, we investigate omitting the perceptual loss, and simply use an L2 loss in image space. Since the encoders are trained with L2 loss and the VGG variant of perceptual loss, we evaluate with L1 reconstruction and the AlexNet variant of perceptual loss. Training with all losses leads to reconstructions that are more perceptually similar (lower LPIPS loss) compared to the two other variants, while per-pixel L1 reconstruction is not greatly affected (Tab. 2). We show qualitative examples of the reconstructions on masked input in Fig. 16." }, { "heading": "A.2.4 ADDITIONAL COMPOSITION RESULTS", "text": "Comparing different composition approaches. In Fig 17, we show examples of extracted image parts, the collage formed by overlaying the image parts, and the result of poisson blending the images according to the outline of each extracted part. We further compare to variations of the encoder setup, where (1) RGB: the encoder is not aware of missing pixels in the collaged input, (2) RGB Fill: we fill the missing pixels with the average color of the background layer, and (3) RGBM: we allow the encoder and generator to inpaint on missing regions. Table 3 shows image quality metrics of FID (lower is better; Heusel et al. (2017)) and density and coverage (higher is better; Naeem et al. (2020)) and masked L1 reconstruction (lower is better) for each generator and domain. To obtain feature representations for the density and coverage metrics, we resize all images to 256px and use pretrained VGG features prior to the final classification layer (Simonyan & Zisserman, 2015). While the composite input collages are highly unrealistic with high FID and low density/coverage, the inverted images are closer to the image manifold. Of the three inversion methods, the RGBM inversion tends to yield lower FID and higher density and coverage, while minimizing L1 reconstruction error. 
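For reference, a sketch of this feature extraction and of the density/coverage computation is shown below; it assumes a torchvision VGG16 and the `prdc` reference implementation of Naeem et al. (2020), and the neighborhood size `nearest_k` is an assumed value rather than one reported in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from prdc import compute_prdc   # density/coverage of Naeem et al. (2020); pip install prdc

vgg = models.vgg16(pretrained=True).eval()
# keep activations just before the final classification layer (4096-d features)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

@torch.no_grad()
def vgg_features(images):
    """`images` is a float tensor of shape (N, 3, H, W) with values in [0, 1]."""
    x = TF.resize(images, [256, 256])                 # all images are resized to 256px
    x = TF.normalize(x, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    return vgg(x)

def density_coverage(real_images, fake_images, nearest_k=5):
    """Proximity of generated composites to the real-image manifold."""
    real = vgg_features(real_images).cpu().numpy()
    fake = vgg_features(fake_images).cpu().numpy()
    return compute_prdc(real_features=real, fake_features=fake, nearest_k=nearest_k)
```

Both the real photographs and the regenerated composites are passed through the same extractor before computing the metrics.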
We compare the GAN inversion methods to Poisson blending (Pérez et al., 2003), in which we incrementally blend the 4-8 image layers on top of each other using their respective masks. As\nPoisson blending does not naturally handle inpainting, we do not mask the bottom-most image layer, but rather use it to fill in any remaining holes in the blended image. We find that Poisson blending is unable to create realistic composites, due to the several overlapping, yet misaligned, image layers used to create the composites.\nComparing different image reconstruction generators. How are image priors different across different image reconstruction pipelines? Our encoder method relies on a pretrained generator, and trains an encoder network to predict the generator’s latent code of a given image. Therefore, it can take advantage of the image priors learned by the generator to keep the result close to the image manifold. Here, we compare to different image reconstruction approaches, such as autoencoder architectures or optimization methods rather than feed-forward inference. We construct the same set of input collages using parts of real images and compare a variety of image reconstruction methods encompassing feed-forward networks, pretrained GAN models, encoder networks. Since some reconstruction methods are optimization-based and thus take several minutes, we use a set of 200 images. We then compute the L1 reconstruction error in the valid region of the input, density (measures proximity to the real image manifold; Naeem et al. (2020)), and FID (measures realism; Heusel et al. (2017); but note that we are limited to small sample sizes due to optimization time).\nFor the church domain, we first compare to four methods that do not rely on a pretrained GAN network; rather, the generator and encoder is jointly learned using an autoencoder-like architecture. (1) We train a CycleGAN (Zhu et al., 2017) between image collages and the real image domain, creating a network that is explicitly trained for image composition in an unpaired manner, as there are no ground-truth “composited” images for the randomly generated image collages. (2) We use a pretrained SPADE (Park et al., 2019) network which creates images from segmentation maps, but information about object style (e.g. color and texture) is lost in the segmentation input. (3) We use a pretrained inpainting model that is trained to fill in small holes in the image (Yu et al., 2018), but does not correct for misalignments or global color inconsistencies between image parts. (4) We train Deep Image Prior (DIP) networks (Ulyanov et al., 2018) which performs test-time optimization on an autoencoder to reconstruct a single image, where using a masked loss allows it to inpaint missing regions.\nNext, we use the ProGAN and StyleGAN2 pretrained generators, and experiment with different ways of inverting into the latent code. Methods that leverage a pretrained GAN for inversion, but are optimization-based rather than feed-forward include (5&6) LBFGS methods (Liu & Nocedal, 1989) on ProGAN and StyleGAN, which iteratively optimizes for the best latent code starting from the best latent among 500 random samples, (7) Multi-Code Prior (Gu et al., 2019a), which combines multiple latent codes in the intermediate layers of the GAN, and (8) a StyleGAN2 projection method using perceptual loss (Karras et al., 2019a). 
For all optimization-based GAN inversion methods, we modify the objective with a masked loss to only reconstruct valid regions of the input.\nWe use our trained regressor network for the remaining comparisons. (9&10) We use our ProGAN and StyleGAN regressors to encode the input image as initialization, and then perform LBFGS optimization. (11&12) We use our ProGAN and StyleGAN regressors in a fully feed-forward manner.\nTab. 4 summarizes the methods and illustrates the tradeoff between reconstruction (L1), realism (Density and FID), and optimization time. Due to the unrealistic nature of the input collages, a method that reduces reconstruction error is less realistic, whereas a more realistic output may offer a worse reconstruction of the input. Furthermore, methods that are not feed-forward incur substantially more time per image. Fig 18 shows qualitative results, where in particular the third example is a challenging input where the lower part of the tower is missing. The two encoders demonstrate an image prior in which the bottom of the tower is reconstructed on output. While DIP can inpaint missing regions, it cannot leverage learned image prior. CycleGAN can fill in missing patterns, but with visible artifacts, whereas SPADE changes the style of each input component. Iteratively optimizing on a pretrained generator can lose semantic priors as it optimizes towards an unrealistic input.\nOn the face domain, we compare (1) the inpainting method of Yu et al. (2018) pretrained on faces, (2) the Im2StyleGAN method (Abdal et al., 2019) which optimizes within theW+ latent space of StyleGAN, and (3) the ALAE model (Pidhorskyi et al., 2020) which jointly trained the encoder and generator. More similar to our approach, (4&5) the In-domain encoding and diffusion methods (Zhu et al., 2020) encodes and optimizes for a latent code of StyleGAN that matches the target image. (6) We modify and retrain the Pixel2Style2Pixel network (PSP; Richardson et al. (2020)), which also leverages the StyleGAN generator, to perform regression with arbitrarily masked regions. The PSP network uses a feature pyramid to predict latent codes. (7&8) We use our regressor network on ProGAN and StyleGAN, which uses a ResNet backbone and predicts the latent code after pooling all spatial features.\nQualitatively, we find that the optimization-based Im2StyleGAN (Abdal et al., 2019) algorithm is not able to realistically inpaint missing regions in the input collage. While the ALAE autoencoder (Pidhorskyi et al., 2020) exhibits characteristics of blending the collage into a cohesive output image, the reconstruction error is higher than that of the GAN-based approaches. The In-domain encoder\nmethod (Zhu et al., 2020) does not correct for misalignments in the inputs, resulting in low density, although the subsequent optimization step is able to further reduce L1 distance (Fig. 19). The PSP network modified with the masking objective is conceptually similar to our regressor; we find that it is better able to reconstruct the input image, but produces less realistic output. This suggests that an encoder which processes the input image globally before predicting the latent code output can help retain realism in output images. We measure reconstruction (L1) and realism (Density and FID) over 200 samples in Tab. 5.\nComposing images using alternative definitions of image parts. In the main text, we focus on creating composite images using a pretrained segmentation network. 
However, we note that the exact process of extracting image parts does not matter, as the encoder and generator form an image prior that removes inconsistencies in the input. In Fig. 20 we show composites generated using the output of a pretrained saliency network (Liu et al., 2018), and in Fig. 21 we show compositions using hand-selected compositional parts, where the parts extracted from each image do not have to correspond precisely to object boundaries.\nEditing with global illumination consistency. A property of the regressor network is that it enforces global coherence of the output, despite an unrealistic input, by learning a mapping from the image domain to the latent domain that is averaged over many samples. Thus, it is unable to perform exact reconstructions of the input image, but rather exhibits error-correcting properties when the input is imprecise, e.g., in the case of unaligned parts or abrupt changes in color. In Fig. 22, we investigate the ability of the regressor network to perform global adjustments that accommodate a desired change in lighting, such as adding reflections or changing illumination outside of the manipulated region, to maintain realism at the cost of higher reconstruction error.\nImproving compositions on real images. As the regressor network is trained to minimize reconstruction error on average over all images, it can cause slight distortions on any given input image. To retain the compositionality effect of the regressor network, yet better fit a specific input image, we can finetune the weights of the regressor towards the given input image. Generally, a few seconds of finetuning suffice (30 optimizer steps; < 5 seconds), and subsequent editing on the image only requires a forward pass. We demonstrate this application in Fig. 23.\nRandom composition samples. We show additional random samples of the generated composites across the ProGAN and StyleGAN2 generators for a variety of image domains in Fig. 24 and Fig. 25.\n[Figure panels (qualitative comparisons): Input, CycleGAN, SPADE, Inpaint, DIP, ProGAN LBFGS, StyleGAN LBFGS, Multi-Code Prior, StyleGAN Projection, ProGAN Enc+LBFGS, StyleGAN Enc+LBFGS, Ours: ProGAN Encode, Ours: StyleGAN Encode; domains include ProGAN Living Room and StyleGAN Church.]" }, { "heading": "A.2.5 ADDITIONAL PART VARIATION RESULTS", "text": "Here, we show additional qualitative results similar to those in Fig. 8 in the main paper. In each case, the heatmap shows appearance variation when changing the part marked in red. In Fig. 26, it can be seen that the variation is usually strongest in the face part that is changed, indicating that the composition of face parts learned by the model is a good match for our intuitive understanding of a face. In Fig. 27, we vary a single superpixel of a car. The resulting variations show regions of the images that commonly vary together (such as the floor, the body of the car, or the windows), which can be interpreted as a form of unsupervised part discovery." } ]
2021
USING LATENT SPACE REGRESSION TO ANALYZE AND LEVERAGE COMPOSITIONALITY IN GANS
SP:c0e827c33dbc9378404fe2a0949198cb74f13688
[ "The authors propose a new way to aggregate the embeddings of elements in a set (or sequence) by comparing it with respect to (trainable) reference set(s) via Optimal Transport (OT). The motivation to build such a pooling operation is derived from self-attention and the authors suggest an OT spin to that (e.g., the different reference sets/measures can be thought of as different heads in attention). This is, however, done in a principled way with the help of kernel embeddings and not just ad-hoc using the transport plan as the attention matrix." ]
We address the problem of learning on sets of features, motivated by the need of performing pooling operations in long biological sequences of varying sizes, with long-range dependencies, and possibly few labeled data. To address this challenging task, we introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference. Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost. Our aggregation technique admits two useful interpretations: it may be seen as a mechanism related to attention layers in neural networks, or it may be seen as a scalable surrogate of a classical optimal transport-based kernel. We experimentally demonstrate the effectiveness of our approach on biological sequences, achieving state-of-the-art results for protein fold recognition and detection of chromatin profiles tasks, and, as a proof of concept, we show promising results for processing natural language sequences. We provide an open-source implementation of our embedding that can be used alone or as a module in larger learning models at https://github.com/claying/OTK.
[ { "affiliations": [], "name": "Grégoire Mialon" }, { "affiliations": [], "name": "Dexiong Chen" }, { "affiliations": [], "name": "Alexandre d’Aspremont" }, { "affiliations": [], "name": "Julien Mairal" } ]
[]
[ { "heading": "1 INTRODUCTION", "text": "Many scientific fields such as bioinformatics or natural language processing (NLP) require processing sets of features with positional information (biological sequences, or sentences represented by a set of local features). These objects are delicate to manipulate due to varying lengths and potentially long-range dependencies between their elements. For many tasks, the difficulty is even greater since the sets can be arbitrarily large, or only provided with few labels, or both.\nDeep learning architectures specifically designed for sets have recently been proposed (Lee et al., 2019; Skianis et al., 2020). Our experiments show that these architectures perform well for NLP tasks, but achieve mixed performance for long biological sequences of varying size with few labeled data. Some of these models use attention (Bahdanau et al., 2015), a classical mechanism for aggregating features. Its typical implementation is the transformer (Vaswani et al., 2017), which has shown to achieve state-of-the-art results for many sequence modeling tasks, e.g, in NLP (Devlin et al., 2019) or in bioinformatics (Rives et al., 2019), when trained with self supervision on large-scale data. Beyond sequence modeling, we are interested in this paper in finding a good representation for sets of features of potentially diverse sizes, with or without positional information, when the amount of training data may be scarce. To this end, we introduce a trainable embedding, which can operate directly on the feature set or be combined with existing deep approaches.\nMore precisely, our embedding marries ideas from optimal transport (OT) theory (Peyré & Cuturi, 2019) and kernel methods (Schölkopf & Smola, 2001). We call this embedding OTKE (Optimal\n∗Equal contribution. †Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. ‡D.I., UMR 8548, École normale supérieure, Paris, France.\nTransport Kernel Embedding). Concretely, we embed feature vectors of a given set to a reproducing kernel Hilbert space (RKHS) and then perform a weighted pooling operation, with weights given by the transport plan between the set and a trainable reference. To gain scalability, we then obtain a finite-dimensional embedding by using kernel approximation techniques (Williams & Seeger, 2001). The motivation for using kernels is to provide a non-linear transformation of the input features before pooling, whereas optimal transport allows to align the features on a trainable reference with fast algorithms (Cuturi, 2013). Such combination provides us with a theoretically grounded, fixed-size embedding that can be learned either without any label, or with supervision. Our embedding can indeed become adaptive to the problem at hand, by optimizing the reference with respect to a given task. It can operate on large sets with varying size, model long-range dependencies when positional information is present, and scales gracefully to large datasets. We demonstrate its effectiveness on biological sequence classification tasks, including protein fold recognition and detection of chromatin profiles where we achieve state-of-the-art results. We also show promising results in natural language processing tasks, where our method outperforms strong baselines. Contributions. In summary, our contribution is three-fold. 
We propose a new method to embed sets of features of varying sizes to fixed size representations that are well adapted to downstream machine learning tasks, and whose parameters can be learned in either unsupervised or supervised fashion. We demonstrate the scalability and effectiveness of our approach on biological and natural language sequences. We provide an open-source implementation of our embedding that can be used alone or as a module in larger learning models." }, { "heading": "2 RELATED WORK", "text": "Kernel methods for sets and OT-based kernels. The kernel associated with our embedding belongs to the family of match kernels (Lyu, 2004; Tolias et al., 2013), which compare all pairs of features between two sets via a similarity function. Another line of research builds kernels by matching features through the Wasserstein distance. A few of them are shown to be positive definite (Gardner et al., 2018) and/or fast to compute (Rabin et al., 2011; Kolouri et al., 2016). Except for few hyperparameters, these kernels yet cannot be trained end-to-end, as opposed to our embedding that relies on a trainable reference. Efficient and trainable kernel embeddings for biological sequences have also been proposed by Chen et al. (2019a;b). Our work can be seen as an extension of these earlier approaches by using optimal transport rather than mean pooling for aggregating local features, which performs significantly better for long sequences in practice. Deep learning for sets. Deep Sets (Zaheer et al., 2017) feed each element of an input set into a feed-forward neural network. The outputs are aggregated following a simple pooling operation before further processing. Lee et al. (2019) propose a Transformer inspired encoder-decoder architecture for sets which also uses latent variables. Skianis et al. (2020) compute some comparison costs between an input set and reference sets. These costs are then used as features in a subsequent neural network. The reference sets are learned end-to-end. Unlike our approach, such models do not allow unsupervised learning. We will use the last two approaches as baselines in our experiments. Interpretations of attention. Using the transport plan as an ad-hoc attention score was proposed by Chen et al. (2019c) in the context of network embedding to align data modalities. Our paper goes beyond and uses the transport plan as a principle for pooling a set in a model, with trainable parameters. Tsai et al. (2019) provide a view of Transformer’s attention via kernel methods, yet in a very different fashion where attention is cast as kernel smoothing and not as a kernel embedding." }, { "heading": "3 PROPOSED EMBEDDING", "text": "" }, { "heading": "3.1 PRELIMINARIES", "text": "We handle sets of features in Rd and consider sets x living in X = { x | x = {x1, . . . ,xn} such that x1, . . . ,xn ∈ Rd for some n ≥ 1 } .\nElements of X are typically vector representations of local data structures, such as k-mers for sequences, patches for natural images, or words for sentences. The size of x denoted by n may vary, which is not an issue since the methods we introduce may take a sequence of any size as input, while providing a fixed-size embedding. We now revisit important results on optimal transport and kernel methods, which will be useful to describe our embedding and its computation algorithms.\nOptimal transport. 
Our pooling mechanism will be based on the transport plan between x and x′ seen as weighted point clouds or discrete measures, which is a by-product of the optimal transport problem (Villani, 2008; Peyré & Cuturi, 2019). OT has indeed been widely used in alignment problems (Grave et al., 2019). Throughout the paper, we will refer to the Kantorovich relaxation of OT with entropic regularization, detailed for example in (Peyré & Cuturi, 2019). Let a in ∆n (probability simplex) and b in ∆n ′ be the weights of the discrete measures ∑ i aiδxi and ∑ j bjδx′j with respective locations x and x′, where δx is the Dirac at position x. Let C in Rn×n ′\nbe a matrix representing the pairwise costs for aligning the elements of x and x′. The entropic regularized Kantorovich relaxation of OT from x to x′ is\nmin P∈U(a,b) ∑ ij CijPij − εH(P), (1)\nwhere H(P) = − ∑ ij Pij(log(Pij) − 1) is the entropic regularization with parameter ε, which controls the sparsity of P, and U is the space of admissible couplings between a and b:\nU(a,b) = {P ∈ Rn×n ′ + : P1n = a and P >1n′ = b}.\nThe problem is typically solved by using a matrix scaling procedure known as Sinkhorn’s algorithm (Sinkhorn & Knopp, 1967; Cuturi, 2013). In practice, a and b are uniform measures since we consider the mass to be evenly distributed between the points. P is called the transport plan, which carries the information on how to distribute the mass of x in x′ with minimal cost. Our method uses optimal transport to align features of a given set to a learned reference.\nKernel methods. Kernel methods (Schölkopf & Smola, 2001) map data living in a space X to a reproducing kernel Hilbert space H, associated to a positive definite kernel K through a mapping function ϕ : X → H, such that K(x,x′) = 〈ϕ(x), ϕ(x′)〉H. Even though ϕ(x) may be infinitedimensional, classical kernel approximation techniques (Williams & Seeger, 2001) provide finitedimensional embeddings ψ(x) in Rk such that K(x,x′) ≈ 〈ψ(x), ψ(x′)〉. Our embedding for sets relies in part on kernel method principles and on such a finite-dimensional approximation." }, { "heading": "3.2 OPTIMAL TRANSPORT EMBEDDING AND ASSOCIATED KERNEL", "text": "We now present the OTKE, an embedding and pooling layer which aggregates a variable-size set or sequence of features into a fixed-size embedding. We start with an infinite-dimensional variant living in a RKHS, before introducing the finite-dimensional embedding that we use in practice.\nInfinite-dimensional embedding in RKHS. Given a set x and a (learned) reference z in X with p elements, we consider an embedding Φz(x) which performs the following operations: (i) initial embedding of the elements of x and z to a RKHS H; (ii) alignment of the elements of x to the elements of z via optimal transport; (iii) weighted linear pooling of the elements x into p bins, producing an embedding Φz(x) inHp, which is illustrated in Figure 1. Before introducing more formal details, we note that our embedding relies on two main ideas:\n• Global similarity-based pooling using references. Learning on large sets with long-range interactions may benefit from pooling to reduce the number of feature vectors. Our pooling rule follows an inductive bias akin to that of self-attention: elements that are relevant to each other for the task at hand should be pooled together. To this end, each element in the reference set corresponds to a pooling cell, where the elements of the input set are aggregated through a weighted sum. 
The weights simply reflect the similarity between the vectors of the input set and the current vector in the reference. Importantly, using a reference set enables to reduce the size of the “attention matrix” from quadratic to linear in the length of the input sequence. • Computing similarity weights via optimal transport. A computationally efficient similarity score between two elements is their dot-product (Vaswani et al., 2017). In this paper, we rather consider that elements of the input set should be pooled together if they align well with the same part of the reference. Alignment scores can efficiently be obtained by computing the transport plan between the input and the reference sets: Sinkhorn’s algorithm indeed enjoys fast solvers (Cuturi, 2013).\nWe are now in shape to give a formal definition. Definition 3.1 (The optimal transport kernel embedding). Let x = (x1, . . . ,xn) in X be an input set of feature vectors and z = (z1, . . . , zp) in X be a reference set with p elements. Let κ be a\npositive definite kernel, e.g., Gaussian kernel, with RKHSH and ϕ : Rd → H, its associated kernel embedding. Let κ be the n× p matrix which carries the comparisons κ(xi, zj), before alignment. Then, the transport plan between x and z, denoted by the n × p matrix P(x, z), is defined as the unique solution of (1) when choosing the cost C = −κ, and our embedding is defined as\nΦz(x) := √ p× ( n∑ i=1 P(x, z)i1ϕ(xi), . . . , n∑ i=1 P(x, z)ipϕ(xi) ) = √ p×P(x, z)>ϕ(x),\nwhere ϕ(x) := [ϕ(x1), . . . , ϕ(xn)]>.\nInterestingly, it is easy to show that our embedding Φz(x) is associated to the positive definite kernel\nKz(x,x ′) := n∑ i,i′=1 Pz(x,x ′)ii′κ(xi,x ′ i′) = 〈Φz(x),Φz(x′)〉, (2)\nwith Pz(x,x′) := p × P(x, z)P(x′, z)>. This is a weighted match kernel, with weights given by optimal transport in H. The notion of pooling in the RKHS H of κ arises naturally if p ≤ n. The elements of x are non-linearly embedded and then aggregated in “buckets”, one for each element in the reference z, given the values of P(x, z). This process is illustrated in Figure 1. We acknowledge here the concurrent work by Kolouri et al. (2021), where a similar embedding is used for graph representation. We now expose the benefits of this kernel formulation, and its relation to classical non-positive definite kernel. Kernel interpretation. Thanks to the gluing lemma (see, e.g., Peyré & Cuturi, 2019), Pz(x,x′) is a valid transport plan and, empirically, a rough approximation of P(x,x′). Kz can therefore be seen as a surrogate of a well-known kernel (Rubner et al., 2000), defined as\nKOT(x,x ′) := n∑ i,i′=1 P(x,x′)ii′κ(xi,x ′ i′). (3)\nWhen the entropic regularization ε is equal to 0, KOT is equivalent to the 2-Wasserstein distance W2(x,x\n′) with ground metric dκ induced by kernel κ. KOT is generally not positive definite (see Peyré & Cuturi (2019), Chapter 8.3) and computationally costly (the number of transport plans to compute is quadratic in the number of sets to process whereas it is linear for Kz). Now, we show the relationship between this kernel and our kernel Kz, which is proved in Appendix B.1. Lemma 3.1 (Relation between P(x,x′) and Pz(x,x′) when ε = 0). For any x, x′ and z in X with lengths n, n′ and p, by denoting W z2 (x,x ′) := 〈Pz(x,x′), d2κ(x,x′)〉 1/2 we have\n|W2(x,x′)−W z2 (x,x′)| ≤ 2 min(W2(x, z),W2(x′, z)). 
(4)\nThis lemma shows that the distanceW z2 resulting fromKz is related to the Wasserstein distanceW2; yet, this relation should not be interpreted as an approximation error as our goal is not to approximate W2, but rather to derive a trainable embedding Φz(x) with good computational properties. Lemma 3.1 roots our features and to some extent self-attention in a rich optimal transport literature. In fact, W z2 is equivalent to a distance introduced by Wang et al. (2013), whose properties are further studied by Moosmüller & Cloninger (2020). A major difference is that W z2 crucially relies on Sinkhorn’s algorithm so that the references can be learned end-to-end, as explained below." }, { "heading": "3.3 FROM INFINITE-DIMENSIONAL KERNEL EMBEDDING TO FINITE DIMENSION", "text": "In some cases, ϕ(x) is already finite-dimensional, which allows to compute the embedding Φz(x) explicitly. This is particularly useful when dealing with large-scale data, as it enables us to use our method for supervised learning tasks without computing the Gram matrix, which grows quadratically in size with the number of samples. When ϕ is infinite or high-dimensional, it is nevertheless possible to use an approximation based on the Nyström method (Williams & Seeger, 2001), which provides an embedding ψ : Rd → Rk such that\n〈ψ(xi), ψ(x′j)〉Rk ≈ κ(xi,x′j).\nConcretely, the Nyström method consists in projecting points from the RKHS H onto a linear subspace F , which is parametrized by k anchor points F = Span(ϕ(w1), . . . , ϕ(wk)). The corresponding embedding admits an explicit form ψ(xi) = κ(w,w)−1/2κ(w,xi), where κ(w,w) is the k × k Gram matrix of κ computed on the set w = {w1, . . . ,wk} of anchor points and κ(w,xi) is in Rk. Then, there are several ways to learn the anchor points: (a) they can be chosen as random points from data; (b) they can be defined as centroids obtained by K-means, see Zhang et al. (2008); (c) they can be learned by back-propagation for a supervised task, see Mairal (2016).\nUsing such an approximation within our framework can be simply achieved by (i) replacing κ by a linear kernel and (ii) replacing each element xi by its embedding ψ(xi) in Rk and considering a reference set with elements in Rk. By abuse of notation, we still use z for the new parametrization. The embedding, which we use in practice in all our experiments, becomes simply\nΦz(x) = √ p× ( n∑ i=1 P(ψ(x), z)i1ψ(xi), . . . , n∑ i=1 P(ψ(x), z)ipψ(xi) ) = √ p×P(ψ(x), z)>ψ(x) ∈ Rp×k, (5)\nwhere p is the number of elements in z. Next, we discuss how to learn the reference set z.\n3.4 UNSUPERVISED AND SUPERVISED LEARNING OF PARAMETER z\nUnsupervised learning. In the fashion of the Nyström approximation, the p elements of z can be defined as the centroids obtained by K-means applied to all features from training sets in X . A corollary of Lemma 3.1 suggests another algorithm: a bound on the deviation term between W2 and W z2 for m samples (x 1, . . . ,xm) is indeed\nE2 := 1 m2 m∑ i,j=1 |W2(xi,xj)−W z2 (xi,xj)|2 ≤ 4 m m∑ i=1 W 22 (x i, z). (6)\nThe right-hand term corresponds to the objective of the Wasserstein barycenter problem (Cuturi & Doucet, 2013), which yields the mean of a set of empirical measures (here the x’s) under the OT metric. The Wasserstein barycenter is therefore an attractive candidate for choosing z. K-means can be seen as a particular case of Wasserstein barycenter when m = 1 (Cuturi & Doucet, 2013; Ho et al., 2017) and is faster to compute. 
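As an illustration of this unsupervised option, the following is a minimal sketch (assuming scikit-learn is available; the variable and function names are ours, and the authors' actual procedure, e.g., the exact subsampling used, may differ) of choosing the p supports of z as K-means centroids of the pooled training features; the Wasserstein-barycenter alternative would replace the centroid update.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_reference_kmeans(feature_sets, p=10, seed=0):
    """Choose the p supports of the reference z as K-means centroids of all
    (embedded) features pooled from the training sets.
    feature_sets: list of arrays of shape (n_i, k)."""
    all_feats = np.concatenate(feature_sets, axis=0)        # pool features from every set
    km = KMeans(n_clusters=p, random_state=seed, n_init=10).fit(all_feats)
    return km.cluster_centers_                              # (p, k), used as z

# toy usage: 50 sets of varying sizes with feature dimension k = 64
rng = np.random.default_rng(0)
sets = [rng.normal(size=(rng.integers(20, 200), 64)) for _ in range(50)]
z = learn_reference_kmeans(sets, p=10)
print(z.shape)  # (10, 64)
```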
In practice, both methods yield similar results, see Appendix C, and we thus chose K-means to learn z in unsupervised settings throughout the experiments. The anchor points w and the references z may be then computed using similar algorithms; however, their mathematical interpretation differs as exposed above. The task of representing features (learning w in Rd for a specific κ) is decoupled from the task of aggregating (learning the reference z in Rk).\nSupervised learning. As mentioned in Section 3.1, P(ψ(x), z) is computed using Sinkhorn’s algorithm, recalled in Appendix A, which can be easily adapted to batches of samples x, with possibly varying lengths, leading to GPU-friendly forward computations of the embedding Φz. More important, all Sinkhorn’s operations are differentiable, which enables z to be optimized with stochastic gradient descent through back-propagation (Genevay et al., 2018), e.g., for minimizing a classification or regression loss function when labels are available. In practice, a small number of Sinkhorn iterations (e.g., 10) is sufficient to compute P(ψ(x), z). Since the anchors w in the embedding layer below can also be learned end-to-end (Mairal, 2016), our embedding can thus be used as a module injected into any model, e.g, a deep network, as demonstrated in our experiments." }, { "heading": "3.5 EXTENSIONS", "text": "Integrating positional information into the embedding. The discussed embedding and kernel do not take the position of the features into account, which may be problematic when dealing with structured data such as images or sentences. To this end, we borrow the idea of convolutional kernel networks, or CKN (Mairal, 2016; Mairal et al., 2014), and penalize the similarities exponentially with the positional distance between a pair of elements in the input and reference sequences. More precisely, we multiply P(ψ(x), z) element-wise by a distance matrix S defined as:\nSij = e − 1 σ2pos (i/n−j/p)2 ,\nand replace it in the embedding. With similarity weights based both on content and position, the kernel associated to our embedding can be viewed as a generalization of the CKNs (whose similarity weights are based on position only), with feature alignment based on optimal transport. When dealing with multi-dimensional objects such as images, we just replace the index scalar i with an index vector of the same spatial dimension as the object, representing the positions of each dimension.\nUsing multiple references. A naive reconstruction using different references z1, . . . , zq in X may yield a better approximation of the transport plan. In this case, the embedding of x becomes\nΦz1,...,zq (x) = 1/ √ q (Φz1(x), . . . ,Φzq (x)) , (7)\nwith q the number of references (the factor 1/√q comes from the mean). Using Eq. (4), we can obtain a bound similar to (6) for a data set of m samples (x1, . . . ,xm) and q references (see Appendix B.2 for details). To choose multiple references, we tried a K-means algorithm with 2-Wasserstein distance for assigning clusters, and we updated the centroids as in the single-reference case. Using multiple references appears to be useful when optimizing z with supervision (see Appendix C)." }, { "heading": "4 RELATION BETWEEN OUR EMBEDDING AND SELF-ATTENTION", "text": "Our embedding and a single layer of transformer encoder, recalled in Appendix A, share the same type of inductive bias, i.e, aggregating features relying on similarity weights. We now clarify their relationship. 
Our embedding is arguably simpler (see respectively size of attention and number of parameters in Table 1), and may compete in some settings with the transformer self-attention as illustrated in Section 5.\nShared reference versus self-attention. There is a correspondence between the values, attention matrix in the transformer and ϕ, P in Definition 3.1, yet also noticeable differences. On the one hand, Φz aligns a given sequence x with respect to a reference z, learned with or without supervision, and shared across the data set. Our weights are computed using optimal transport. On the other hand, a transformer encoder performs self-alignment: for a given xi, features are aggregated depending on a similarity score between xi and the elements of x only. The similarity score is a matrix product between queries Q and keys K matrices, learned with supervision and shared across the data set. In this regard, our work complements a recent line of research questioning the dotproduct, learned self-attention (Raganato et al., 2020; Weiqiu et al., 2020). Selfattention-like weights can also be obtained\nwith our embedding by computing P(x, zi)P(x, zi)> for each reference i. In that sense, our work is related to recent research on efficient self-attention (Wang et al., 2020; Choromanski et al., 2020), where a low-rank approximation of the self-attention matrix is computed.\nPosition smoothing and relative positional encoding. Transformers can add an absolute positional encoding to the input features (Vaswani et al., 2017). Yet, relative positional encoding (Dai et al., 2019) is a current standard for integrating positional information: the position offset between the query element and a given key can be injected in the attention score (Tsai et al., 2019), which is equivalent to our approach. The link between CKNs and our kernel, provided by this positional encoding, stands in line with recent works casting attention and convolution into a unified framework (Andreoli, 2019). In particular, Cordonnier et al. (2020) show that attention learns convolution in the setting of image classification: the kernel pattern is learned at the same time as the filters.\nMultiple references and attention heads. In the transformer architecture, the succession of blocks composed of an attention layer followed by a fully-connected layer is called a head, with each head potentially focusing on different parts of the input. Successful architectures have a few heads in parallel. The outputs of the heads are then aggregated to output a final embedding. A layer of our embedding with non-linear kernel κ can be seen as such a block, with the references playing the role of the heads. As some recent works question the role of attention heads (Voita et al., 2019; Michel et al., 2019), exploring the content of our learned references z may provide another perspective on this question. More generally, visualization and interpretation of the learned references could be of interest for biological sequences." }, { "heading": "5 EXPERIMENTS", "text": "We now show the effectiveness of our embedding OTKE in tasks where samples can be expressed as large sets with potentially few labels, such as in bioinformatics. We evaluate our embedding alone in unsupervised or supervised settings, or within a model in the supervised setting. We also consider NLP tasks involving shorter sequences and relatively more labels." 
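Before turning to the experiments, the following is a minimal NumPy sketch of a single forward pass of the finite-dimensional embedding in Eq. (5) with one reference: a Nyström feature map ψ for a Gaussian kernel, a few Sinkhorn iterations against the reference z (the updates of Eq. (8) in Appendix A.1), the optional position smoothing of Section 3.5, and the weighted pooling. It is written for illustration only and is not the released implementation at https://github.com/claying/OTK; the helper names (nystrom_features, sinkhorn, otke_embed) and default values are ours.

```python
import numpy as np

def nystrom_features(x, anchors, sigma=1.0):
    """Nystrom map psi(x) = kappa(w, w)^{-1/2} kappa(w, x) for a Gaussian kernel kappa.
    x: (n, d) input set, anchors: (k, d) anchor points w."""
    def gauss(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    kww = gauss(anchors, anchors) + 1e-6 * np.eye(len(anchors))   # small jitter for stability
    vals, vecs = np.linalg.eigh(kww)
    kww_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return gauss(x, anchors) @ kww_inv_sqrt                       # (n, k)

def sinkhorn(K, eps=0.5, n_iter=10):
    """Transport plan between uniform measures for the cost C = -K (similarities K).
    A log-domain variant is preferable for small eps; this is the plain update of Eq. (8)."""
    n, p = K.shape
    E = np.exp(K / eps)
    u, v = np.ones(n) / n, np.ones(p) / p
    for _ in range(n_iter):
        u = (1.0 / n) / (E @ v)
        v = (1.0 / p) / (E.T @ u)
    return u[:, None] * E * v[None, :]                            # (n, p)

def otke_embed(x, anchors, z, eps=0.5, sigma_pos=None):
    """Single-reference embedding of Eq. (5): sqrt(p) * P(psi(x), z)^T psi(x)."""
    psi = nystrom_features(x, anchors)                            # (n, k)
    P = sinkhorn(psi @ z.T, eps=eps)                              # (n, p) alignment to z
    if sigma_pos is not None:                                     # position smoothing (Sec. 3.5)
        n, p_ = P.shape
        offset = np.arange(n)[:, None] / n - np.arange(p_)[None, :] / p_
        P = P * np.exp(-offset ** 2 / sigma_pos ** 2)
    return np.sqrt(z.shape[0]) * P.T @ psi                        # (p, k)

# usage: a set of 100 features in dimension 45, 64 anchors, a reference with 10 supports
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 45))
emb = otke_embed(x, rng.normal(size=(64, 45)), rng.normal(size=(10, 64)))
print(emb.shape)  # (10, 64)
```

With several references, the embeddings Φz_j(x) would simply be concatenated and scaled by 1/√q as in Eq. (7); in the supervised setting, the anchors and references become learnable parameters updated by back-propagation through the Sinkhorn iterations.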
}, { "heading": "5.1 DATASETS, EXPERIMENTAL SETUP AND BASELINES", "text": "In unsupervised settings, we train a linear classifier with the cross entropy loss between true labels and predictions on top of the features provided by our embedding (where the references z and Nyström anchors w have been learned without supervision), or an unsupervised baseline. In supervised settings, the same model is initialized with our unsupervised method and then trained end-to-end (including z and w) by minimizing the same loss. We use an alternating optimization strategy to update the parameters for both SCOP and SST datasets, as used by Chen et al. (2019a;b). We train for 100 epochs with Adam on both data sets: the initial learning rate is 0.01, and get halved as long as there is no decrease in the validation loss for 5 epochs. The hyper-parameters we tuned include number of supports and references p, q, entropic regularization in OT ε, the bandwidth of Gaussian kernels and the regularization parameter of the linear classifier. The best values of ε and the bandwidth were found stable across tasks, while the regularization parameter needed to be more carefully cross-validated. Additional results and implementation details can be found in Appendix C.\nProtein fold classification on SCOP 1.75. We follow the protocol described by Hou et al. (2019) for this important task in bioinformatics. The dataset contains 19, 245 sequences from 1, 195 different classes of fold (hence less than 20 labels in average per class). The sequence lengths vary from tens to thousands. Each element of a sequence is a 45-dimensional vector. The objective is to classify the sequences to fold classes, which corresponds to a multiclass classification problem. The features fed to the linear classifier are the output of our embedding with ϕ the Gaussian kernel mapping on k-mers (subsequences of length k) with k fixed to be 10, which is known to perform well in this task (Chen et al., 2019a). The number of anchor points for Nyström method is fixed to 1024 and 512 respectively for unsupervised and supervised setting. In the unsupervised setting, we compare our method to state-of-the-art unsupervised method for this task: CKN (Chen et al., 2019a), which performs a global mean pooling in contrast to the global adaptive pooling performed by our embedding. In the supervised setting, we compare the same model to the following supervised models: CKN, Recurrent Kernel Networks (RKN) (Chen et al., 2019b), a CNN with 10 convolutional layers named DeepSF (Hou et al., 2019), Rep the Set (Skianis et al., 2020) and Set Transformer (Lee et al., 2019), using the public implementations by their authors. Rep the Set and Set Transformer are used on the top of a convolutional layer of the same filter size as CKN to extract k-mer features. Their model hyper-parameters, weight decay and learning rate are tuned in the same way as for our models\n(see Appendix for details). The default architecture of Set Transformer did not perform well due to overfitting. We thus used a shallower architecture with one Induced Set Attention Block (ISAB), one Pooling by Multihead Attention (PMA) and one linear layer, similar to the one-layer architectures of CKN and our model. The results are shown in Table 2.\nDetection of chromatin profiles. Predicting the chromatin features such as transcription factor (TF) binding from raw genomic sequences has been studied extensively in recent years. CNNs with max pooling operations have been shown effective for this task. 
Here, we consider DeepSEA dataset (Zhou & Troyanskaya, 2015) consisting in simultaneously predicting 919 chromatin profiles, which can be formulated as a multi-label classification task. DeepSEA contains 4, 935, 024 DNA sequences of length 1000 and each of them is associated with 919 different labels (chromatin profiles). Each sequence is represented as a 1000× 4 binary matrix through one-hot encoding and the objective is to predict which profiles a given sequence possesses. As this problem is very imbalanced for each profile, learning an unsupervised model could require an extremely large number of parameters. We thus only consider our supervised embedding as an adaptive pooling layer and inject it into a deep neural network, between one convolutional layer and one fully connected layer, as detailed in Appendix C.4. In our embedding, ϕ is chosen to be identity and the positional encoding described in Section 3 is used. We compare our model to a state-of-the-art CNN with 3 convolutional layers and two fully-connected layers (Zhou & Troyanskaya, 2015). The results are shown in Table 3.\nSentiment analysis on Stanford Sentiment Treebank. SST-2 (Socher et al., 2013) belongs to the NLP GLUE benchmark (Wang et al., 2019) and consists in predicting whether a movie review is positive. The dataset contains 70, 042 reviews. The test predictions need to be submitted on the GLUE leaderboard, so that we remove a portion of the training set for validation purpose and report accuracies on the actual validation set used as a test set. Our model is one layer of our embedding with ϕ a Gaussian kernel mapping with 64 Nyström filters in the supervised setting, and a linear mapping in the unsupervised setting. The features used in our model and all baselines are word vectors with dimension 768 provided by the HuggingFace implementation (Wolf et al., 2019) of the transformer BERT (Devlin et al., 2019). State-of-the-art accuracies are usually obtained after supervised fine-tuning of pre-trained transformers. Training a linear model on pre-trained features after simple pooling (e.g, mean) also yields good results. [CLS], which denotes the BERT embedding used for classification, is also a common baseline. The results are shown in Table 4." }, { "heading": "5.2 RESULTS AND DISCUSSION", "text": "In protein fold classification, our embedding outperforms all baselines in both unsupervised and supervised settings. Surprisingly, our unsupervised model also achieves better results than most supervised baselines. In contrast, Set Transformer does not perform well, possibly because its implementation was not designed for sets with varying sizes, and tasks with few annotations. In detection of chromatin profiles, our model (our embedding within a deep network) has fewer layers than state-of-the-art CNNs while outperforming them, which advocates for the use of attentionbased models for such applications. Our results also suggest that positional information is important\n(Appendix C.4;C.2), and our Gaussian position encoding outperforms the sinusoidal one introduced in Vaswani et al. (2017). Note that in contrast to a typical transformer, which would have stored a 1000×1000 attention matrix, our attention score with a reference of size 64 is only 1000×64, which illustrates the discussion in Section 4. In NLP, an a priori less favorable setting since sequences are shorter and there are more data, our supervised embedding gets close to a strong state-of-the-art, i.e. a fully-trained transformer. 
We observed our method to be much faster than RepSet, as fast as Set Transformer, yet slower than ApproxRepSet (C.3). Using the OT plan as similarity score yields better accuracies than the dot-product between the input sets and the references (see Table 2; 4). Choice of parameters. This paragraph sums up the impact of hyper-parameter choices. Experiments justifying our claims can be found in Appendix C.\n• Number of references q: for biological sequences, a single reference was found to be enough in the unsupervised case, see Table 11. In the supervised setting, Table 14 suggests that using q = 5 provides slightly better results but q = 1 remains a good baseline, and that the sensitivity to number of references is moderate. • Number of supports p in a reference: Table 11 and Table 14 suggest that the sensitivity of the model to the number of supports is also moderate. • Nyström anchors: an anchor can be seen as a neuron in a feed-forward neural network (see expression of ψ in 3.3). In unsupervised settings, the more anchors, the better the approximation of the kernel matrix. Then, the performance saturates, see Table 12. In supervised settings, the optimal number of anchors points is much smaller, as also observed by Chen et al. (2019a), Fig 6. • Bandwidth σ in gaussian kernel: σ was chosen as in Chen et al. (2019b) and we did not try to optimize it in this work, as it seemed to already provide good results. Nevertheless, slightly better results can be obtained when tuning this parameter, for instance in SST-2.\nOTKE and self-supervised methods. Our approach should not be positioned against selfsupervision and instead brings complementary features: the OTKE may be plugged in state-ofthe-art models pre-trained on large unannotated corpus. For instance, on SCOP 1.75, we use ESM1 (Rives et al., 2019), pretrained on 250 millions protein sequences, with mean pooling followed by a linear classifier. As we do not have the computational ressources to fine-tune ESM1-t34, we only train a linear layer on top of the extracted features. Using the same model, we replace the mean pooling by our (unsupervised) OTKE layer, and also only train the linear layer. This results in accuracy improvements as showed in Table 5. While training huge self-supervised learning models on large datasets is very effective, ESM1-t34 admits more than 2500 times more parameters than our single-layer OTKE model (260k parameters versus 670M) and our single-layer OTKE outperforms smaller versions of ESM1 (43M parameters). Finally, self-supervised pre-training of a deep model including OTKE on large data sets would be interesting for fair comparison.\nMulti-layer extension. Extending the OTKE to a multi-layer embedding is a natural yet not straightforward research direction: it is not clear how to find a right definition of intermediate feature aggregation in a multi-layer OTKE model. Note that for DeepSEA, our model with single-layer OTKE already outperforms a multi-layer CNN, which suggests that a multi-layer OTKE is not always needed." }, { "heading": "ACKNOWLEDGMENTS", "text": "JM, GM and DC were supported by the ERC grant number 714381 (SOLARIS project) and by ANR 3IA MIAI@Grenoble Alpes, (ANR-19-P3IA-0003). 
AA would like to acknowledge support from the ML and Optimisation joint research initiative with the fonds AXA pour la recherche and Kamet Ventures, a Google focused award, as well as funding by the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). DC and GM thank Laurent Jacob, Louis Martin, François-Pierre Paty and Thomas Wolf for useful discussions." }, { "heading": "A ADDITIONAL BACKGROUND", "text": "This section provides some background on attention and transformers, Sinkhorn’s algorithm and the relationship between optimal transport based kernels and positive definite histogram kernels.\nA.1 SINKHORN’S ALGORITHM: FAST COMPUTATION OF Pκ(x, z)\nWithout loss of generality, we consider here κ the linear kernel. We recall that Pκ(x, z) is the solution of an optimal transport problem, which can be efficiently solved by Sinkhorn’s algorithm (Peyré & Cuturi, 2019) involving matrix multiplications only. Specifically, Sinkhorn’s algorithm is an iterative matrix scaling method that takes the opposite of the pairwise similarity matrix K with entry Kij := 〈xi, zj〉 as input C and outputs the optimal transport plan Pκ(x, z) = Sinkhorn(K, ε). Each iteration step ` performs the following updates\nu(`+1) = 1/n\nEv(`) and v(`+1) =\n1/p\nE>u(`) , (8)\nwhere E = eK/ε. Then the matrix diag(u(`))Ediag(v(`)) converges to Pκ(x, z) when ` tends to ∞. However when ε becomes too small, some of the elements of a matrix product Ev or E>u become null and result in a division by 0. To overcome this numerical stability issue, computing the multipliers u and v is preferred (see e.g. (Peyré & Cuturi, 2019, Remark 4.23)). This algorithm can be easily adapted to a batch of data points x, and with possibly varying lengths via a mask vector masking on the padding positions of each data point x, leading to GPU-friendly computation. More importantly, all the operations above at each step are differentiable, which enables z to be optimized through back-propagation. Consequently, this module can be injected into any deep networks.\nA.2 ATTENTION AND TRANSFORMERS\nWe clarify the concept of attention — a mechanism yielding a context-dependent embedding for each element of x — as a special case of non-local operations (Wang et al., 2017; Buades et al., 2011), so that it is easier to understand its relationship to the OTK. Let us assume we are given a set x ∈ X of length n. A non-local operation on x is a function Φ : X 7→ X such that\nΦ(x)i = n∑ j=1 w(xi,xj)v(xj) = W(x) > i V(x),\nwhere W(x)i denotes the i-th column of W(x), a weighting function, and V(x) = [v(x1), . . . , v(xn)]\n>, an embedding. In contrast to operations on local neighborhood such as convolutions, non-local operations theoretically account for long range dependencies between elements in the set. In attention and the context of neural networks, w is a learned function reflecting the relevance of each other elements xj with respect to the element xi being embedded and given the task at hand. In the context of the paper, we compare to a type of attention coined as dot-product selfattention, which can typically be found in the encoder part of the transformer architecture (Vaswani et al., 2017). Transformers are neural network models relying mostly on a succession of an attention layer followed by a fully-connected layer. 
Transformers can be used in sequence-to-sequence tasks — in this setting, they have an encoder with self-attention and a decoder part with a variant of self-attention —, or in sequence to label tasks, with only the encoder part. The paper deals with the latter. The name self-attention means that the attention is computed using a dot-product of linear\ntransformations of xi and xj , and x attends to itself only. In its matrix formulation, dot-product self-attention is a non-local operation whose matching vector is\nW(x)i = Softmax ( WQxix\n>W>K√ dk\n) ,\nwhere WQ ∈ Rn×dk and WK ∈ Rn×dk are learned by the network. In order to know which xj are relevant to xi, the network computes scores between a query for xi (WQxi) and keys of all the elements of x (WKx). The softmax turns the scores into a weight vector in the simplex. Moreover, a linear mapping V(x) = WV x, the values, is also learned. WQ and WK are often shared (Kitaev et al., 2020). A drawback of such attention is that for a sequence of length n, the model has to store an attention matrix W with size O(n2). More details can be found in Vaswani et al. (2017)." }, { "heading": "B PROOFS", "text": "B.1 PROOF OF LEMMA 3.1 Proof. First, since ∑n′ j=1 pP(x ′, z)jk = 1 for any k, we have\nW2(x, z) 2 = n∑ i=1 p∑ k=1 P(x, z)ikd 2 κ(xi, zk)\n= n∑ i=1 p∑ k=1 n′∑ j=1 pP(x′, z)jkP(x, z)ikd 2 κ(xi, zk)\n= ‖u‖22,\nwith u a vector in Rnn′p whose entries are √ pP(x′, z)jkP(x, z)ikdκ(xi, zk) for i = 1, . . . , n, j = 1, . . . , n′ and k = 1, . . . , p. We can also rewrite W z2 (x,x ′) as an `2-norm of a vector v in\nRnn′p whose entries are √ pP(x′, z)jkP(x, z)ikdκ(xi,x ′ j). Then by Minkowski inequality for the `2-norm, we have\n|W2(x, z)−W z2 (x,x′)| = |‖u‖2 − ‖v‖2| ≤ ‖u− v‖2\n= n∑ i=1 p∑ k=1 n′∑ j=1 pP(x′, z)jkP(x, z)ik(dκ(xi, zk)− dκ(xi,x′j))2 1/2\n≤ n∑ i=1 p∑ k=1 n′∑ j=1 pP(x′, z)jkP(x, z)ikd 2 κ(x ′ j , zk) 1/2\n= p∑ k=1 n′∑ j=1 P(x′, z)jkd 2 κ(x ′ j , zk) 1/2 = W2(x ′, z),\nwhere the second inequality is the triangle inequality for the distance dκ. Finally, we have\n|W2(x,x′)−W z2 (x,x′)| ≤|W2(x,x′)−W2(x, z)|+ |W2(x, z)−W z2 (x,x′)| ≤W2(x′, z) +W2(x′, z) =2W2(x ′, z),\nwhere the second inequality is the triangle inequality for the 2-Wasserstein distance. By symmetry, we also have |W2(x,x′)−W z2 (x,x′)| ≤ 2W2(x, z), which concludes the proof.\nB.2 RELATIONSHIP BETWEEN W2 AND W z2 FOR MULTIPLE REFERENCES\nUsing the relation prooved in Appendix B.1, we can obtain a bound on the error term between W2 and W z2 for a data set of m samples (x 1, . . . ,xm) and q references (z1, . . . , zq)\nE2 := 1 m2 m∑ i,j=1 |W2(xi,xj)−W z 1,...,zq 2 (x i,xj)|2 ≤ 4 mq m∑ i=1 q∑ j=1 W 22 (x i, zj). (9)\nWhen q = 1, the right-hand term in the inequality is the objective to minimize in the Wasserstein barycenter problem (Cuturi & Doucet, 2013), which further explains why we considered it: Once W z2 is close to the Wasserstein distance W2, Kz will also be close to KOT. We extend here the bound in equation 6 in the case of one reference to the multiple-reference case. The approximate 2-Wasserstein distance W z2 (x,x ′) thus becomes\nW z 1,...,zq 2 (x,x ′) :=\n〈 1\nq q∑ j=1 Pzj (x,x ′), d2κ(x,x ′)\n〉1/2 = 1 q q∑ j=1 W z j 2 (x,x ′)2 1/2 . Then by Minkowski inequality for the `2-norm we have\n|W2(x,x′)−W z 1,...,zq 2 (x,x ′)| = ∣∣∣∣∣∣∣ 1 q q∑ j=1 W2(x,x ′)2 1/2 − 1 q q∑ j=1 W z j 2 (x,x ′)2 1/2 ∣∣∣∣∣∣∣\n≤ 1 q q∑ j=1 (W2(x,x ′)−W z j 2 (x,x ′))2 1/2 , and by equation 6 we have\n|W2(x,x′)−W z 1,...,zq 2 (x,x ′)| ≤ 4 q q∑ j=1 min(W2(x, z j),W2(x ′, zj))2 1/2 . 
Finally the approximation error in terms of Frobenius is bounded by\nE2 := 1 m2 m∑ i,j=1 |W2(xi,xj)−W z 1,...,zq 2 (x i,xj)|2 ≤ 4 mq m∑ i=1 q∑ j=1 W 22 (x i, zj).\nIn particular, when q = 1 that is the case of single reference, we have\nE2 ≤ 4 m m∑ i=1 W 22 (x i, z),\nwhere the right term equals to the objective of the Wasserstein barycenter problem, which justifies the choice of z when learning without supervision." }, { "heading": "C ADDITIONAL EXPERIMENTS AND SETUP DETAILS", "text": "This section contains additional experiments on CIFAR-10, whose purpose is to illustrate the kernel associated with our embedding with respect to other classical or optimal transport based kernels, and test our embedding on another data modality; additional results for the experiments of the main section; details on our setup, in particular hyper-parameter tuning for our methods and the baselines.\nC.1 EXPERIMENTS ON KERNEL MATRICES (ONLY FOR SMALL DATA SETS).\nHere, we compare the optimal transport kernel KOT (3) and its surrogate Kz (2) (with z learned without supervision) to common and other OT kernels. Although our embedding Φz is scalable, the exact kernel require the computation of Gram matrices. For this toy experiment, we therefore use\n5000 samples only of CIFAR-10 (images with 32 × 32 pixels), encoded without supervision using a two-layer convolutional kernel network (Mairal, 2016). The resulting features are 3 × 3 patches living in Rd with d = 256 or 8192. KOT and Kz aggregate existing features depending on the ground cost defined by −κ (Gaussian kernel) given the computed weight matrix P. In that sense, we can say that these kernels work as an adaptive pooling. We therefore compare it to kernel matrices corresponding to mean pooling and no pooling at all (linear kernel). We also compare to a recent positive definite and fast optimal transport based kernel, the Sliced Wasserstein Kernel (Kolouri et al., 2016) with 10, 100 and 1000 projection directions. We add a positional encoding to it so as to have a fair comparison with our kernels. A linear classifier is trained from this matrices. Although we cannot prove that KOT is positive definite, the classifier trained on the kernel matrix converges when ε is not too small. The results can be seen on Table 6. Without positional information, our kernels do better than Mean pooling. When the positions are encoded, the Linear kernel is also outperformed. Note that including positions in Mean pooling and Linear kernel means interpolating between these two kernels: in the Linear kernel, only patches with same index are compared while in the Mean pooling, all patches are compared. All interpolations did worse than the Linear kernel. The runtimes illustrate the scalability of Kz.\nC.2 CIFAR-10\nHere, we test our embedding on the same data modality: we use CIFAR-10 features, i.e., 60, 000 images with 32 × 32 pixels and 10 classes encoded using a two-layer CKN (Mairal, 2016), one of the baseline architectures for unsupervised learning of CIFAR-10, and evaluate on the standard test set. The very best configuration of the CKN yields a small number (3 × 3) of high-dimensional (16, 384) patches and an accuracy of 85.8%. We will illustrate our embedding on a configuration which performs slightly less but provides more patches (16×16), a setting for which it was designed. The input of our embedding are unsupervised features extracted from a 2-layer CKN with kernel sizes equal to 3 and 3, and Gaussian pooling size equal to 2 and 1. 
We consider the following configurations of the number of filters at each layer, to simulate two different input dimensions for our embedding:\n• 64 filters at first and 256 at second layer, which yields a 16 × 16 × 256 representation for each image.\n• 256 filters at first and 1024 at second layer, which yields a 16 × 16 × 1024 representation for each image.\nSince the features are the output of a Gaussian embedding, κ in our embedding will be the linear kernel. The embedding is learned with one reference and various supports using K-means method described in Section 3, and compared to several classical pooling baselines, including the original CKN’s Gaussian pooling with pooling size equal to 6. The hyper-parameters are the entropic regularization ε and bandwidth for position encoding σpos. Their search grids are shown in Table 7 and the results in Table 8. Without supervision, the adaptive pooling of the CKN features by our embedding notably improves their performance. We notice that the position encoding is very important to this task, which substantially improves the performance of its counterpart without it.\nC.3 PROTEIN FOLD RECOGNITION\nDataset description. Our protein fold recognition experiments consider the Structural Classification Of Proteins (SCOP) version 1.75 and 2.06. We follow the data preprocessing protocols in Hou et al. (2019), which yields a training and validation set composed of 14699 and 2013 sequences from SCOP 1.75, and a test set of 2533 sequences from SCOP 2.06. The resulting protein sequences belong to 1195 different folds, thus the problem is formulated as a multi-classification task. The input sequence is represented as a 45-dimensional vector at each amino acid. The vector consists of a 20-dimensional one-hot encoding of the sequence, a 20-dimensional position-specific scoring matrix (PSSM) representing the profile of amino acids, a 3-class secondary structure represented by a one-hot vector and a 2-class solvent accessibility. The lengths of the sequences are varying from tens to thousands.\nModels setting and hyperparameters. We consider here the one-layer models followed by a global mean pooling for the baseline methods CKN (Chen et al., 2019a) and RKN (Chen et al., 2019b). We build our model on top of the one-layer CKN model, where κ can be seen as a Gaussian kernel on the k-mers in sequences. The only difference between our model and CKN is thus the pooling operation, which is given by our embedding introduced in Section 3. The bandwidth parameter of the Gaussian kernel κ on k-mers is fixed to 0.6 for unsupervised models and 0.5 for supervised models, the same as used in CKN which were selected by the accuracy on the validation set. The filter size k is fixed to 10 and different numbers of anchor points in Nyström for κ are considered in the experiments. The other hyperparameters for our embedding are the entropic regularization parameter ε, the number of supports in a reference p, the number of references q, the number of iterations for Sinkhorn’s algorithm and the regularization parameter λ in the linear classifier. The search grid for ε and λ is shown in Table 9 and they are selected by the accuracy on validation set. ε plays an important role in the performance and is observed to be stable for the same dataset. For this dataset, it is selected to be 0.5 for all the unsupervised and supervised models. 
The effect of other hyperparameters will be discussed below.\nFor the baseline methods, the accuracies of PSI-BLAST and DeepSF are taken from Hou et al. (2019). The hyperparameters for CKN and RKN can be found in Chen et al. (2019b). For Rep the Set (Skianis et al., 2020) and Set Transformer (Lee et al., 2019), we use the public implementations by the authors. These two models are used on the top of a convolutional layer of the same filter size as CKN to extract k-mer features. As the exact version of Rep the Set does not provide any implementation for back-propagation to a bottom layer of it, we consider the approximate version of\nRep the Set only, which also scales better to our dataset. The default architecture of Set Transformer did not perform well due to overfitting. We therefore used a shallower architecture with one ISAB, one PMA and one linear layer, similar to the one-layer architectures of CKN and our model. We tuned their model hyperparameters, weight decay and learning rate. The search grids for these hyperparameters are shown in Table 10.\nUnsupervised embedding. The kernel embedding ϕ, which is infinite dimensional for the Gaussian kernel, is approximated with the Nyström method using K-means on 300000 k-mers extracted from the same training set as in Chen et al. (2019b). The reference measures are learned by using either K-means or Wasserstein to update centroids in 2-Wasserstein K-means on 3000 subsampled sequences for RAM-saving reason. We evaluate our model on top of features extracted from CKNs of different dimensions, representing the number of anchor points used to approximate κ. The number of iterations for Sinkhorn is fixed to 100 to ensure the convergence. The results for different combinations of q and p are provided in Table 11. Increasing the number of supports p can improve the performance and then saturate it when p is too large. On the other hand, increasing the number of references while keeping the embedding dimension (i.e. p × q) constant is not significantly helpful in this unsupervised setting. We also notice that Wasserstein Barycenter for learning the references does not outperform K-means, while the latter is faster in terms of computation.\nSupervised embedding. Our supervised embedding is initialized with the unsupervised method and then trained in an alternating fashion which was also used for CKN: we use an Adam algorithm to update anchor points in Nyström and reference measures z, and the L-BFGS algorithm to optimize" }, { "heading": "1024 85.8/95.3/96.8", "text": "" }, { "heading": "2048 86.6/95.9/97.2", "text": "" }, { "heading": "3072 87.8/96.1/97.4", "text": "the classifier. The learning rate for Adam is initialized with 0.01 and halved as long as there is no decrease of the validation loss for 5 successive epochs. In practice, we notice that using a small number of Sinkhorn iterations can achieve similar performance to a large number of iteration, while being much faster to compute. We thus fix it to 10 throughout the experiments. The accuracy results are obtained by averaging on 10 runs with different seeds following the setting in Chen et al. (2019b). The results are shown in Table 13 with error bars. The effect of the number of supports q is similar to the unsupervised case, while increasing the number of references can indeed improve performance.\nC.4 DETECTION OF CHROMATIN PROFILES\nDataset description. Predicting the functional effects of noncoding variants from only genomic sequences is a central task in human genetics. 
A fundamental step for this task is to simultaneously predict large-scale chromatin features from DNA sequences (Zhou & Troyanskaya, 2015). We consider here the DeepSEA dataset, which consists in simultaneously predicting 919 chromatin profiles including 690 transcription factor (TF) binding profiles for 160 different TFs, 125 DNase I sensitivity profiles and 104 histone-mark profiles. In total, there are 4.4 million, 8000 and 455024 samples for training, validation and test. Each sample consists of a 1000-bp DNA sequence from the human GRCh37 reference. Each sequence is represented as a 1000× 4 binary matrix using onehot encoding on DNA characters. The dataset is available at http://deepsea.princeton. edu/media/code/deepsea_train_bundle.v0.9.tar.gz. Note that the labels for each profile are very imbalanced in this task with few positive samples. For this reason, learning unsu-\npervised models could be intractable as they may require an extremely large number of parameters if junk or redundant sequences cannot be filtered out.\nModel architecture and hyperparameters. For the above reason and fair comparison, we use here our supervised embedding as a module in Deep NNs. The architecture of our model is shown in Table 15. We use an Adam optimizer with initial learning rate equal to 0.01 and halved at epoch 1, 4, 8 for 15 epochs in total. The number of iterations for Sinkhorn is fixed to 30. The whole training process takes about 30 hours on a single GTX2080TI GPU. The dropout rate is selected to be 0.4 from the grid [0.1; 0.2; 0.3; 0.4; 0.5] and the weight decay is 1e-06, the same as Zhou & Troyanskaya (2015). The σpos for position encoding is selected to be 0.1, by the validation accuracy on the grid [0.05; 0.1; 0.2; 0.3; 0.4; 0.5]. The checkpoint with the best validation accuracy is used to evaluate on the test set. Area under ROC (auROC) and area under precision curve (auPRC), averaged over 919 chromatin profiles, are used to measure the performance. The hidden size d is chosen to be either 1024 or 1536.\nResults and importance of position encoding. We compare our model to the state-of-the-art CNN model DeepSEA (Zhou & Troyanskaya, 2015) with 3 convolutional layers, whose best hyperparameters can be found in the corresponding paper. Our model outperforms DeepSEA, while requiring fewer layers. The positional information is known to be important in this task. To show the efficacy of our position encoding, we compare it to the sinusoidal encoding used in the original transformer (Vaswani et al., 2017). We observe that our encoding with properly tuned σpos requires fewer layers, while being interpretable from a kernel point of view. We also find that larger hidden size d performs better, as shown in Table 16. ROC and PR curves for all the chromatin profiles and stratified by transcription factors, DNase I-hypersensitive sites and histone-marks can also be found in Figure 2.\nC.5 SST-2\nDataset description. The data set contains 67,349 training samples and 872 validation samples and can be found at https://gluebenchmark.com/tasks. The test set contains 1,821 samples for which the predictions need to be submitted on the GLUE leaderboard, with limited number of submissions. As a consequence, our training and validation set are extracted from the original training set (80% of the original training set is used for our training set and the remaining 20% is used for our validation set), and we report accuracies on the standard validation set, used as a test set. 
The reviews are padded with zeros when their length is shorter than the chosen sequence length (we choose 30 and 66, the latter being the maximum review length in the data set) and the BERT\nimplementation requires to add special tokens [CLS] and [SEP] at the beginning and the end of each review.\nModel architecture and hyperparameters. In most transformers such as BERT, the embedding associated to the token [CLS] is used for classification and can be seen in some sense as an embedding of the review adapted to the task. The features we used are the word features provided by the BERT base-uncased version, available at https://huggingface.co/transformers/ pretrained_models.html. For this version, the dimension of the word features is 768. Our model is one layer of our embedding, with ϕ the Gaussian kernel mapping with varying number of Nyström filters in the supervised setting, and the Linear kernel in the unsupervised setting. We do not add positonnal encoding as it is already integrated in BERT features. In the unsupervised setting, the output features are used to train a large-scale linear classifier, a Pytorch linear layer. We choose the best hyper-parameters based on the accuracy of a validation set. In the supervised case, the parameters of the previous model, w and z, are optimized end-to-end. In this case, we tune the learning rate. In both case, we tune the entropic regularization parameter of optimal transport and the bandwidth of the Gaussian kernel. The parameters in the search grid are summed up in Table 18. The best entropic regularization and Gaussian kernel bandwidth are typically and respectively 3.0 and 0.5 for this data set. The supervised training process takes between half an hour for smaller models (typically 128 filters in w and 3 supports in z) and a few hours for larger models (256 filters and 100 supports) on a single GTX2080TI GPU. The hyper-parameters of the baselines were similarly tuned, see 19. Mean Pooling and [CLS] embedding did not require any tuning except for the regularization λ of the classifier, which followed the same grid as in Table 18.\nResults and discussion. As explained in Section 5, our unsupervised embedding improves the BERT pre-trained features while still using a simple linear model as shown in Table 17, and its supervised counterpart enables to get even closer to the state-of-the art (for the BERT base-uncased model) accuracy, which is usually obtained after fine-tuning of the BERT model on the whole data set. This can be seen in Tables 20; 21. We also add a baseline consisting of one layer of classical self-attention, which did not do well hence was not reported in the main text." } ]
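The adaptive pooling used throughout the experiments above can be sketched compactly. Below is a minimal PyTorch illustration of Sinkhorn-based aggregation of a variable-size set of patch features onto a small trainable reference; it captures only the mechanism (the actual embedding additionally applies the kernel mapping of κ, the position encoding and the normalizations described in the paper), and the function names, toy shapes, entropic regularization value and squared-Euclidean ground cost are our own illustrative choices.

```python
import torch

def sinkhorn(K, n_iters=50):
    # Sinkhorn-Knopp scaling of a positive similarity matrix K (n x p) towards
    # a transport plan with uniform marginals 1/n (rows) and 1/p (columns).
    n, p = K.shape
    a, b = torch.full((n,), 1.0 / n), torch.full((p,), 1.0 / p)
    u, v = torch.ones(n), torch.ones(p)
    for _ in range(n_iters):
        u = a / (K @ v + 1e-9)
        v = b / (K.t() @ u + 1e-9)
    return u[:, None] * K * v[None, :]              # transport plan P of shape (n, p)

def ot_pool(features, reference, eps=1.0, n_iters=50):
    # features:  (n, d) variable-size set of patch embeddings for one image/sequence
    # reference: (p, d) trainable reference measure with p supports
    # Returns a fixed-size (p, d) representation: each reference support aggregates
    # the patches that the entropic transport plan sends to it (adaptive pooling).
    cost = torch.cdist(features, reference) ** 2     # squared Euclidean ground cost
    P = sinkhorn(torch.exp(-cost / eps), n_iters)    # entropic OT plan, columns sum to 1/p
    return (reference.shape[0] * P).t() @ features   # rescale so each row is a convex combination

# toy usage: pool 37 patch features of dimension 8 onto a 4-support reference
x = torch.randn(37, 8)
z = torch.nn.Parameter(torch.randn(4, 8))            # reference, learnable end-to-end
print(ot_pool(x, z).shape)                           # torch.Size([4, 8])
```

Because the transport plan is differentiable with respect to the reference, z can either be fixed from K-means / Wasserstein barycenters (unsupervised) or trained end-to-end with the classifier (supervised), as in the experiments above.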
2021
A TRAINABLE OPTIMAL TRANSPORT EMBEDDING
SP:a85b6d598513c8e03a013fd20da6b19a1108f71e
[ "This paper extends and explains how to apply the \"free energy principle\" and active inference to RL and imitation learning. They implement a neural network approximation of losses derived this way and test on some control tasks. Importantly the tasks focus on here are imitation + control tasks. That is, there is both a reward signal but also demonstration trajectories. The demonstrations may be suboptimal. The compare against PLaNet, a latent planning based approach." ]
Imitation Learning (IL) and Reinforcement Learning (RL) from high-dimensional sensory inputs are often introduced as separate problems, but a more realistic problem setting is how to merge the techniques so that the agent can reduce exploration costs by partially imitating experts at the same time as it maximizes its return. Even when the experts are suboptimal (e.g., experts trained halfway with other RL methods, or human-crafted experts), the agent is expected to outperform the suboptimal experts’ performance. In this paper, we propose to address the issue by using and theoretically extending the Free Energy Principle, a unified brain theory that explains perception, action and model learning in a Bayesian probabilistic way. We find that both IL and RL can be achieved based on the same free energy objective function. Our results show that our approach is promising in visual control tasks, especially in sparse-reward environments.
[]
[ { "authors": [ "Karl Friston" ], "title": "The free-energy principle: a unified brain theory", "venue": "Nature reviews neuroscience,", "year": 2010 }, { "authors": [ "Karl Friston", "James Kilner", "Lee Harrison" ], "title": "A free energy principle for the brain", "venue": "Journal of Physiology-Paris,", "year": 2006 }, { "authors": [ "Karl Friston", "Spyridon Samothrakis", "Read Montague" ], "title": "Active inference and agency: optimal control without cost functions", "venue": "Biological cybernetics,", "year": 2012 }, { "authors": [ "Karl Friston", "Francesco Rigoli", "Dimitri Ognibene", "Christoph Mathys", "Thomas Fitzgerald", "Giovanni Pezzulo" ], "title": "Active inference and epistemic value", "venue": "Cognitive neuroscience,", "year": 2015 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "arXiv preprint arXiv:1912.01603,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Todd Hester", "Matej Vecerik", "Olivier Pietquin", "Marc Lanctot", "Tom Schaul", "Bilal Piot", "Dan Horgan", "John Quan", "Andrew Sendonaris", "Ian Osband" ], "title": "Deep q-learning from demonstrations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Steven Kapturowski", "Georg Ostrovski", "Will Dabney", "John Quan", "Remi Munos" ], "title": "Recurrent experience replay in distributed reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Sergey Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "arXiv preprint arXiv:1805.00909,", "year": 2018 }, { "authors": [ "Beren Millidge" ], "title": "Deep active inference as variational policy gradients", "venue": "arXiv preprint arXiv:1907.03876,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Ashvin Nair", "Bob McGrew", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Overcoming exploration in reinforcement learning with demonstrations", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Tom Le Paine", "Caglar Gulcehre", "Bobak Shahriari", "Misha Denil", "Matt Hoffman", "Hubert Soyer", "Richard Tanburn", "Steven Kapturowski", "Neil Rabinowitz", "Duncan Williams" ], "title": "Making efficient use of demonstrations to solve hard exploration problems", "venue": "arXiv preprint arXiv:1909.01387,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", 
"Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Mark Pfeiffer", "Samarth Shukla", "Matteo Turchetta", "Cesar Cadena", "Andreas Krause", "Roland Siegwart", "Juan Nieto" ], "title": "Reinforced imitation: Sample efficient deep reinforcement learning for mapless navigation by leveraging prior demonstrations", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Vikash Kumar", "Abhishek Gupta", "Giulia Vezzani", "John Schulman", "Emanuel Todorov", "Sergey Levine" ], "title": "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations", "venue": "In Proceedings of Robotics: Science and Systems (RSS),", "year": 2018 }, { "authors": [ "Siddharth Reddy", "Anca D Dragan", "Sergey Levine" ], "title": "Sqil: imitation learning via regularized behavioral cloning", "venue": "arXiv preprint arXiv:1905.11108,", "year": 2019 }, { "authors": [ "Wen Sun", "J. Andrew Bagnell", "Byron Boots" ], "title": "Truncated horizon policy search: Combining reinforcement learning & imitation learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kai Ueltzhöffer" ], "title": "Deep active inference", "venue": "Biological cybernetics,", "year": 2018 }, { "authors": [ "Mel Vecerik", "Todd Hester", "Jonathan Scholz", "Fumin Wang", "Olivier Pietquin", "Bilal Piot", "Nicolas Heess", "Thomas Rothörl", "Thomas Lampe", "Martin Riedmiller" ], "title": "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards", "venue": "arXiv preprint arXiv:1707.08817,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imitation Learning (IL) is a framework to learn a policy to mimic expert trajectories. As the expert specifies model behaviors, there is no need to do exploration or to design complex reward functions. Reinforcement Learning (RL) does not have these features, so RL agents have no clue to realize desired behaviors in sparse-reward settings and even when RL succeeds in reward maximization, the policy does not necessarily achieve behaviors that the reward designer has expected. The key drawbacks of IL are that the policy never exceeds the suboptimal expert performance and that the policy is vulnerable to distributional shift. Meanwhile, RL can achieve super-human performance and has potentials to transfer the policy to new tasks. As real-world applications often needs high sample efficiency and little preparation (rough rewards and suboptimal experts), it is important to find a way to effectively combine IL and RL.\nWhen the sensory inputs are high-dimensional images as in the real world, behavior learning such as IL and RL would be difficult without representation or model learning. Free Energy Principle (FEP), a unified brain theory in computational neuroscience that explains perception, action and model learning in a Bayesian probabilistic way (Friston et al., 2006; Friston, 2010), can handle behavior learning and model learning at the same time. In FEP, the brain has a generative model of the world and computes a mathematical amount called Free Energy using the model prediction and sensory inputs to the brain. By minimizing the Free Energy, the brain achieves model learning and behavior learning. Prior work about FEP only dealt with limited situations where a part of the generative model is given and the task is very low dimensional. As there are a lot in common between FEP and variational inference in machine learning, recent advancements in deep learning and latent variable models could be applied to scale up FEP agents to be compatible with high dimensional tasks.\nRecent work in model-based reinforcement learning succeeds in latent planning from highdimensional image inputs by incorporating latent dynamics models. Behaviors can be derived either by imagined-reward maximization (Ha & Schmidhuber, 2018; Hafner et al., 2019a) or by online planning (Hafner et al., 2019b). Although solving high dimensional visual control tasks with modelbased methods is becoming feasible, prior methods have never tried to combine with imitation.\nIn this paper, we propose Deep Free Energy Network (FENet), an agent that combines the advantages of IL and RL so that the policy roughly learns from suboptimal expert data without the need of exploration or detailed reward crafting in the first place, then learns from sparsely specified reward functions to exceed the suboptimal expert performance.\nThe key contributions of this work are summarized as follows:\n• Extension of Free Energy Principle: We theoretically extend Free Energy Principle, introducing policy prior and policy posterior to combine IL and RL. We implement the proposed method on top of Recurrent State Space Model (Hafner et al., 2019b), a latent dynamics model with both deterministic and stochastic components.\n• Visual control tasks in realistic problem settings: We solve Cheetah-run, Walker-walk, and Quadruped-walk tasks from DeepMind Control Suite (Tassa et al., 2018). We do not only use the default problem settings, we also set up problems with sparse rewards and with suboptimal experts. 
We demonstrate that our agent outperforms model-based RL using Recurrent State Space Model in sparse-reward settings. We also show that our agent can achieve higher returns than Behavioral Cloning (IL) with suboptimal experts." }, { "heading": "2 BACKGROUNDS ON FREE ENERGY PRINCIPLE", "text": "" }, { "heading": "2.1 PROBLEM SETUPS", "text": "We formulate visual control as a partially observable Markov decision process (POMDP) with discrete time steps t, observations ot, hidden states st, continuous action vectors at, and scalar rewards rt. The goal is to develop an agent that maximizes expected return E[ ∑T t=1 rt]." }, { "heading": "2.2 FREE ENERGY PRINCIPLE", "text": "Perception, action and model learning are all achieved by minimizing the same objective function, Free Energy (Friston et al., 2006; Friston, 2010). In FEP, the agent is equipped with a generative model of the world, using a prior p(st) and a likelihood p(ot|st).\np(ot, st) = p(ot|st)p(st) (1) Perceptual Inference Under the generative model, the posterior probability of hidden states given observations is calculated with Bayes’ theorem as follows.\np(st|ot) = p(ot|st)p(st)\np(ot) , p(ot) =\n∫ p(ot|st)p(st)ds (2)\nSince we cannot compute p(ot) due to the integral, we think of approximating p(st|ot) with a variational posterior q(st) by minimizing KL divergence KL(q(st)||p(st|ot)).\nKL(q(st)||p(st|ot)) = ln p(ot) +KL(q(st)||p(ot, st)) (3) Ft = KL(q(st)||p(ot, st)) (4)\nWe define the Free Energy as (eq.4). Since p(ot) does not depend on st, we can minimize (eq.3) w.r.t. the parameters of the variational posterior by minimizing the Free Energy. Thus, the agent can infer the hidden states of the observations by minimizing Ft. This process is called ’perceptual inference’ in FEP.\nPerceptual Learning Free Energy is the same amount as negative Evidence Lower Bound (ELBO) in variational inference often seen in machine learning as follows.\np(ot) ≥ −Ft (5) By minimizing Ft w.r.t. the parameters of the prior and the likelihood, the generative model learns to best explain the observations. This process is called ’perceptual learning’ in FEP.\nActive Inference We can assume that the prior is conditioned on the hidden states and actions at the previous time step as follows.\np(st) = p(st|st−1, at−1) (6)\nThe agent can change the future by choosing actions. Suppose the agent chooses at when it is at st, the prior can predict the next hidden state st+1. Thus, we can think of the Expected Free Energy Gt+1 at the next time step t+ 1 as follows (Friston et al., 2015).\nGt+1 = KL(q(st+1)||p(ot+1, st+1)) = Eq(st+1)[ln q(st+1)− ln p(ot+1, st+1)] = Eq(st+1)p(ot+1|st+1)[ln q(st+1)− ln p(ot+1, st+1)] (7) = Eq(st+1)p(ot+1|st+1)[ln q(st+1)− ln p(st+1|ot+1)− ln p(ot+1)] ≈ Eq(ot+1,st+1)[ln q(st+1)− ln q(st+1|ot+1)− ln p(ot+1)] (8) = Eq(ot+1)[−KL(q(st+1|ot+1)||q(st+1))− ln p(ot+1)] (9)\nSince the agent has not experienced time step t+ 1 yet and has not received observations ot+1, we take expectation over ot+1 using the likelihood p(ot+1|st+1) as (eq.7). In (eq.8), we approximate p(ot+1|st+1) as q(ot+1|st+1) and p(st+1|ot+1) as q(st+1|ot+1). According to the complete class theorem (Friston et al., 2012), any scalar rewards can be encoded as observation priors using p(o) ∝ exp r(o) and the second term in (eq.9) becomes a goal-directed value. 
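To ground (eq. 4) numerically before moving on, a minimal PyTorch sketch of the free energy for a diagonal-Gaussian prior, approximate posterior and likelihood is given below; every size and parameter is invented purely for illustration.

```python
import torch
from torch.distributions import Normal, kl_divergence

# invented sizes: 30-dimensional latent state s_t, 64-dimensional observation o_t
mu_q    = torch.randn(30, requires_grad=True)        # variational posterior parameters of q(s_t)
std_q   = torch.ones(30, requires_grad=True)
decoder = torch.nn.Linear(30, 64)                    # mean of the likelihood p(o_t | s_t)

prior     = Normal(torch.zeros(30), torch.ones(30))  # p(s_t | s_{t-1}, a_{t-1}), kept fixed here
posterior = Normal(mu_q, std_q)                      # q(s_t)
o_t       = torch.randn(64)                          # sensory input

s_t        = posterior.rsample()                     # reparameterized sample from q(s_t)
likelihood = Normal(decoder(s_t), 1.0)               # p(o_t | s_t)

# F_t = E_q[-ln p(o_t | s_t)] + KL(q(s_t) || p(s_t)), i.e. (eq. 4), the negative ELBO
F_t = -likelihood.log_prob(o_t).sum() + kl_divergence(posterior, prior).sum()
F_t.backward()   # gradients w.r.t. mu_q / std_q realize perceptual inference,
                 # gradients w.r.t. the decoder realize perceptual learning
```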
This observation prior p(ot+1) can also be regarded as the probability of optimality variable p(Ot+1 = 1|ot+1), where the binary optimality variableOt+1 = 1 denotes that time step t+1 is optimal andOt+1 = 0 denotes that it is not optimal as introduced in the context of control as probabilistic inference(Levine, 2018). The first term in (eq.9) is called epistemic value that works as intrinsic motivation to further explore the world. Minimization of −KL(q(st+1|ot+1)||q(st+1)) means that the agent tries to experience as different states st+1 as possible given some imagined observations ot+1. By minimizing the Expected Free Energy, the agent can infer the actions that explores the world and maximize rewards. This process is called ’active inference’." }, { "heading": "3 DEEP FREE ENERGY NETWORK (FENET)", "text": "Perceptual learning deals with learning the generative model to best explain the agent’s sensory inputs. If we think of not only observations but also actions given by the expert as a part of the sensory inputs, we can explain imitation leaning by using the concept of perceptual learning. Active inference deals with exploration and reward maximization, so it is compatible with reinforcement learning. By minimizing the same objective function, the Free Energy, we can deal with both imitation and RL.\nIn this section, we first introduce a policy prior for imitation and a policy posterior for RL. Second, we extend the Free Energy Principle to be able to accommodate these two policies in the same objective function, the Free Energy. Finally, we explain a detailed network architecture to implement the proposed method for solving image control tasks." }, { "heading": "3.1 INTRODUCING A POLICY PRIOR AND A POLICY POSTERIOR", "text": "Free Energy We extend the Free Energy from (eq.4) so that actions are a part of sensory inputs that the generative model tries to explain.\nFt = KL(q(st)||p(ot, st, at)) = KL(q(st)||p(ot|st)p(at|st)p(st|st−1, at−1)) (10)\n= Eq(st)[ln q(st)\np(ot|st)p(at|st)p(st|st−1, at−1) ] (11)\n= Eq(st)[− ln p(ot|st)− ln p(at|st) + ln q(st)− ln p(st|st−1, at−1)] (12) = Eq(st)[− ln p(ot|st)− ln p(at|st)] +KL(q(st)||p(st|st−1, at−1)) (13)\nWe define p(at|st) as a policy prior. When the agent observes expert trajectories, by minimizing Ft, the policy prior will be learned so that it can best explain the experts. Besides the policy prior, we introduce and define a policy posterior q(at|st), which is the very policy that the agent samples from when interacting with its environments. We explain how to learn the policy posterior in the following.\nExpected Free Energy for imitation In a similar manner to active inference in Section 2.2, we think of the Expected Free EnergyGt+1 at the next time step t+1, but this time we take expectation over the policy posterior q(at|st) becauseGt+1 is a value expected under the next actions. Note that\nin Section 2.2 at was given as a certain value, but here at is sampled from the policy posterior. 
We calculate the expected variational posterior at time step t+ 1 as follows.\nq(st+1) = Eq(st)q(at|st)[p(st+1|st, at)] (14) q(ot+1, st+1, at+1) = Eq(st+1)[p(ot+1|st+1)q(at+1|st+1)] (15)\nWe extend the Expected Free Energy from (eq.12) so that the variational posterior makes inference on actions as follows.\nGILt+1 = Eq(ot+1,st+1,at+1)[− ln p(ot+1|st+1)− ln p(at+1|st+1) + ln q(st+1, at+1) − ln p(st+1|st, at)] (16)\n= Eq(ot+1,st+1,at+1)[− ln p(ot+1|st+1)− ln p(at+1|st+1) + ln q(at+1|st+1)] +KL(q(st+1)||p(st+1|st, at)) (17) = Eq(ot+1,st+1)[− ln p(ot+1|st+1) +KL(q(at+1|st+1)||p(at+1|st+1))] +KL(q(st+1)||p(st+1|st, at)) (18) = Eq(ot+1,st+1)[− ln p(ot+1|st+1) +KL(q(at+1|st+1)||p(at+1|st+1))] + 0 (19) = Eq(st+1)[H[p(ot+1|st+1)] +KL(q(at+1|st+1)||p(at+1|st+1))] (20)\nIn (eq.20), the first term is the entropy of the observation likelihood, and the second term is the KL divergence between the policy prior and the policy posterior. By minimizing GILt+1, the agent learns the policy posterior so that it matches the policy prior which has been learned through minimizing Ft to encode the experts’ behavior.\nExpected Free Energy for RL We can get the Expected Free Energy in a different way that has a reward component r(ot+1) leading to the policy posterior maximizing rewards. We extend the Expected Free Energy from (eq.8) so that the variational posterior makes inference on actions as follows.\nGRLt+1 = Eq(ot+1,st+1,at+1)[ln q(st+1, at+1) − ln p(at+1|st+1)− ln q(st+1|ot+1)− ln p(ot+1)] (21)\n= Eq(ot+1,st+1)[ln q(st+1)− ln q(st+1|ot+1) +KL(q(at+1|st+1)||p(at+1|st+1))− ln p(ot+1)] (22) = Eq(ot+1)[−KL(q(st+1|ot+1)||q(st+1))− ln p(ot+1)] + Eq(st+1)[KL(q(at+1|st+1)||p(at+1|st+1))] (23) ≈ Eq(ot+1)[−KL(q(st+1|ot+1)||q(st+1))− r(ot+1)] + Eq(st+1)[KL(q(at+1|st+1)||p(at+1|st+1))] (24)\nIn a similar manner to active inference in Section 2.2, we use p(o) ∝ exp r(o) in (eq.24). The first KL term is the epistemic value that lets the agent explore the world, the second term is the expected reward under the action sampled from the policy posterior, and the last KL term is the KL divergence between the policy prior and the policy posterior. The last KL term can be written as follows (eq.25), meaning that minimizing this term leads to maximizing the entropy of the policy posterior at the same time the policy posterior tries to match the policy prior. Thus, the expected free energy can be regarded as one of entropy maximizing RL methods.\nKL(q(at+1|st+1)||p(at+1|st+1)) = −H[q(at+1|st+1)]− Eq(at+1|st+1)[ln p(at+1|st+1)] (25)\nNote that q(ot+1) in (eq.24) can be calculated as follows.\nq(ot+1) = Eq(st+1)[p(ot+1|st+1)] (26)\nBy minimizing GRLt+1, the agent learns the policy posterior so that it explores the world and maximizes the reward as long as it does not deviate too much from the policy prior which has encoded experts’ behavior through minimizing Ft." }, { "heading": "3.2 IMITATION AND RL OBJECTIVES", "text": "To account for the long term future, the agent has to calculate the Expected Free Energy at t+ 1 to ∞.\nF = Ft + ∞∑\nτ=t+1\nγτ−t−1Gτ (27)\nWe define this curly F to be the objective that the Deep Free Energy Network should minimize. Note that γ is a discount factor as in the case of general RL algorithms. As it is impossible to sum over infinity time steps, we introduce an Expected Free Energy Value function V (st+1) to estimate the cumulative Expected Free Energy. 
Similarly to the case of Temporal Difference learning of Deep Q Network (Mnih et al., 2013), we use a target network Vtarg(st+2) to stabilize the learning process and define the loss for the value function as follows.\nL = ||Gt+1 + γVtarg(st+2)− V (st+1)||2 (28) We made a design choice that the agent uses the value function only for RL, and not for imitation. In imitation, we use only the real value of the Expected Free EnergyGt+1 at the next time step t+1. This is because imitation learning can be achieved without long term prediction as the agent is given the experts’ all time series data available. On the other hand, in RL, using the value function to predict rewards in the long-term future is essential to avoid a local minimum and achieve the desired goal.\nIn conclusion, the objective functions of Deep Free Energy Network (FENet) for a data sequence (ot, at, rt, ot+1) are as follows.\nFIL = Ft +GILt+1 (29) FRL = Ft +GRLt+1 + γVωtarg (st+2) (30) L = ||GRLt+1 + γVtarg(st+2)− V (st+1)||2 (31)\nThe overall Free Energy calculation process is shown in Figure 1." }, { "heading": "3.3 NETWORK ARCHITECTURE AND CALCULATION", "text": "For implementation, we made a design choice to use Recurrent State Space Model (Hafner et al., 2019b), a latent dynamics model with both deterministic and stochastic components. In this model, the hidden states st are split into two parts: stochastic hidden states st and deterministic hidden states ht. The deterministic transition of ht is modeled using Recurrent Neural Networks (RNN) f as follows.\nht = f(ht−1, st−1, at−1) (32)\nWe model the probabilities in Deep Free Energy Networks as follows.\nState prior pθ(st|ht) (33) Observation likelihood pθ(ot|st, ht) (34) Reward likelihood pθ(rt−1|st, ht) (35) State posterior qφ(st|ht, ot) (36) Policy prior pθ(at|st, ht) (37) Policy posterior qψ(at|st, ht) (38) Value network Vω(st) (39) Target Value Network Vωtarg (st) (40)\nWe model these probabilities as feedforward Neural Networks that output the mean and standard deviation of the random variables according to the Gaussian distribution. The parameters θ, φ, ψ, ω are network parameters to be learned. Using the network parameters, the objective loss functions can be written as follows.\nFIL = Ft +GILt+1 (41) FRL = Ft +GRLt+1 + γVωtarg (st+2) (42) L = ||GRLt+1 + γVωtarg (st+2)− Vω(st+1)||2 (43)\nwhen Ft = Eqφ(st|ht,ot)[− ln pθ(ot|st, ht)− ln pθ(at|st, ht)] +KL(qφ(st|ht, ot)||pθ(st|ht))\n(44)\nGILt+1 = Eq(st+1)[H[pθ(ot+1|st+1, ht+1)] +KL(qψ(at+1|st+1, ht+1)||pθ(at+1|st+1, ht+1))] (45)\nGRLt+1 = Eq(ot+1)[−KL(qφ(st+1|ht+1, ot+1)||q(st+1))− pθ(rt|st+1, ht+1)] + Eq(st+1)[KL(qψ(at+1|st+1, ht+1)||pθ(at+1|st+1, ht+1))] + γVωtarg (st+2) (46) q(st+1) = Eqφ(st|ht,ot)qψ(at|st,ht)[pθ(st+1|ht+1)] (47) q(ot+1) = Eq(st+1)[pθ(ot+1|st+1, ht+1)] (48) Algorithm 1 in Appendix shows overall calculations using these losses. The agent minimizes FIL for expert data DE and the agent minimizes FRL for agent data DA that the agent collects on its own." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate FENet on three continuous control tasks from images. We compare our model with model-based RL and model-based imitation RL in dense and sparse reward setting when optimal expert is available. Then we compare our model with imitation learning methods when only suboptimal experts are available. 
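(Referring back to Section 3.3: a minimal sketch of one deterministic-plus-stochastic transition step, eqs. 32–36, is given below. The layer sizes, the GRU cell and the softplus parameterization of the standard deviations are our own illustrative choices rather than a faithful reproduction of the original implementation.)

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class TinyRSSMStep(nn.Module):
    """Sketch of one deterministic+stochastic latent transition (eqs. 32-36)."""
    def __init__(self, s_dim=30, h_dim=200, a_dim=6, e_dim=1024):
        super().__init__()
        self.rnn = nn.GRUCell(s_dim + a_dim, h_dim)             # h_t = f(h_{t-1}, s_{t-1}, a_{t-1})
        self.prior_head = nn.Linear(h_dim, 2 * s_dim)           # p_theta(s_t | h_t)
        self.post_head  = nn.Linear(h_dim + e_dim, 2 * s_dim)   # q_phi(s_t | h_t, embed(o_t))

    def forward(self, h_prev, s_prev, a_prev, obs_embed):
        h = self.rnn(torch.cat([s_prev, a_prev], -1), h_prev)
        pm, ps = self.prior_head(h).chunk(2, -1)
        prior = Normal(pm, torch.nn.functional.softplus(ps) + 1e-3)
        qm, qs = self.post_head(torch.cat([h, obs_embed], -1)).chunk(2, -1)
        posterior = Normal(qm, torch.nn.functional.softplus(qs) + 1e-3)
        return h, prior, posterior

step = TinyRSSMStep()
h, prior, post = step(torch.zeros(1, 200), torch.zeros(1, 30),
                      torch.zeros(1, 6), torch.zeros(1, 1024))
s = post.rsample()   # stochastic state fed to the likelihood and policy heads
```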
Finally, we investigate the merits of combining imitation and RL as an ablation study.\nControl tasks We used Cheetah-run, Walker-walk, and Quadruped-walk tasks, image-based continuous control tasks of DeepMind Control Suite (Tassa et al., 2018) shown in Figure 6. The agent gets rewards ranging from 0 to 1. Quadruped-walk is the most difficult as it has more action dimensions than the others. Walker-walk is more challenging than Cheehtah-run because an agent first has to stand up and then walk, meaning that the agent easily falls down on the ground, which is difficult to predict. The episode length is 1000 steps starting from randomized initial states. We use action repeatR = 4 for the Cheetah-run task, andR = 2 for the Walker-walk task and the Quadruped-walk task." }, { "heading": "4.1 PERFORMANCE IN STANDARD VISUAL CONTROL TASKS", "text": "We compare the performance of FENet to PlaNet (RL) and ”PlaNet with demonstrations” (imitation RL) in standard visual control tasks mentioned above. We use PlaNet as a baseline method because PlaNet is one of the most basic methods using Recurrent State Space Model, on top of which\nwe build our model. As FENet uses expert data, we create ”PlaNet with demonstrations” for fair comparison. This variant of PlaNet has an additional experience replay pre-populated with expert trajectories and minimize a loss calculated from the expert data in addition to PlaNet’s original loss.\nFigure 2 shows that ”PlaNet with demonstrations” is always better than PlaNet and that FENet is ranked higher as the difficulty of tasks gets higher. In Cheetah-run, FENet gives competitive performance with PlaNet. In Walker-walk, FENet and ”PlaNet with demonstrations” are almost competitive, both of which are substantially better than PlaNet thanks to expert knowledge being leveraged to increase sample efficiency. In Quadruped-walk, FENet is slightly better than the other two baselines." }, { "heading": "4.2 PERFORMANCE IN SPARSE-REWARD VISUAL CONTROL TASKS", "text": "In real-world robot learning, it is demanding to craft a dense reward function to lead robots to desired behaviors. It would be helpful if an agent could acquire desired behaviors simply by giving sparse signals. We compare the performance of FENet to PlaNet and ”PlaNet with demonstrations” in sparse-reward settings, where agents do not get rewards less than 0.5 per time step (Note that in the original implementation of Cheetah-run, Walker-walk and Quadruped-walk, agents get rewards ranging from 0 to 1 per time step). Figure 3 shows that FENet outperforms PlaNet and ”PlaNet with demonstrations” in all three tasks. In Cheetah-run, PlaNet and ”PlaNet with demonstrations” are not able to get even a single reward." }, { "heading": "4.3 PERFORMANCE WITH SUBOPTIMAL EXPERTS", "text": "In real-world robot learning, expert trajectories are often given by human experts. It is natural to assume that expert trajectories are suboptimal and that there remains much room for improvement. We compare the performance of FENet to Behavioral Cloning imitation methods. We use two types of networks for behavioral cloning methods: recurrent policy and recurrent decoder policy. The recurrent policy πR(at|ot) is neural networks with one gated recurrent unit cell and three dense layers. The recurrent decoder policy πR(at, ot+1|ot) is neural networks with one gated recurrent unit cell and four dense layers and deconvolution layers as in the decoder of PlaNet. 
Both networks does not get raw pixel observations but take observations encoded by the same convolutional encoder as PlaNet’s.\nFigure 4 shows that while imitation methods overfit to the expert and cannot surpass the suboptimal expert performance, FENet is able to substantially surpass the suboptimal expert’s performance." }, { "heading": "4.4 LEARNING STRATEGIES", "text": "Figure 5 compares learning strategies of FENet in Cheetah-run and Walker-walk (ablation study). ’Imitation RL’ is the default FENet agent that does imitation learning and RL at the same time, minimizing FIL + FRL. ’Imitation-pretrained RL’ is an agent that first learns the model only with imitation (minimizing FIL) and then does RL using the pre-trained model (minimizing FRL). ’RL only’ is an agent that does RL only, minimizing FRL. ’Imitation only’ is an agent that does imitation only, minimizing FIL. While ’imitation only’ gives the best performance and ’imitation RL’ gives the second best in Cheetah-run, ’imitation RL’ gives the best performance and ’imitation only’ gives the worst performance in Walker-walk. We could say ’imitation RL’ is the most robust to the properties of tasks." }, { "heading": "5 RELATED WORK", "text": "Active Inference Friston, who first proposed Active Inference, has evaluated the performance in simple control tasks and a low-dimensional maze (Friston et al., 2012; 2015). Ueltzhoffer implemented Active Inference with Deep Neural Networks and evaluated the performance in a simple control task (Ueltzhöffer, 2018). Millidge proposed a Deep Active Inference framework with value functions to estimate the correct Free Energy and succeeded in solving Gym environments (Millidge, 2019). Our approach extends Deep Active Inference to combine imitation and RL, solving more challenging tasks.\nRL from demonstration Reinforced Imitation Learning succeeds in reducing sample complexity by using imitation as pre-training before RL (Pfeiffer et al., 2018). Adding demonstrations into a replay buffer of off policy RL methods also demonstrates high sample efficiency (Vecerik et al., 2017; Nair et al., 2018; Paine et al., 2019). Demo Augmented Policy Gradient mixes the policy gradient with a behavioral cloning gradient (Rajeswaran* et al., 2018). Deep Q-learning from Demonstrations (DQfD) not only use demonstrations for pre-training but also calculates gradients\nfrom demonstrations and environment interaction data (Hester et al., 2018). Truncated HORizon Policy Search uses demonstrations to shape rewards so that subsequent planning can achieve superior performance to RL even when experts are suboptimal (Sun et al., 2018). Soft Q Imitation Learning gives rewards that encourage the agent to return to demonstrated states in order to avoid policy collapse (Reddy et al., 2019). Our approach is similar to DQfD in terms of mixing gradients calculated from demonstrations and from environment interaction data. One key difference is that FENet concurrently learns the generative model of the world so that it can be robust to wider environment properties.\nControl with latent dynamics model World Models acquire latent spaces and dynamics over the spaces separately, and evolve simple linear controllers to solve visual control tasks (Ha & Schmidhuber, 2018). PlaNet learns Recurrent State Space Model and does planning with Model Predictive Control at test phase (Hafner et al., 2019b). 
Dreamer, which is recently built upon PlaNet, has a policy for latent imagination and achieved higher performance than PlaNet (Hafner et al., 2019a). Our approach also uses Recurrent State Space Model to describe variational inference, and we are the first to combine imitation and RL over latent dynamics models to the best of our knowledge." }, { "heading": "6 CONCLUSION", "text": "We present FENet, an agent that combines Imitation Learning and Reinforcement Learning using Free Energy objectives. For this, we theoretically extend the Free Energy Principle and introduce a policy prior that encodes experts’ behaviors and a policy posterior that learns to maximize expected rewards without deviating too much from the policy prior. FENet outperforms model-based RL and imitation RL especially in visual control tasks with sparse rewards and FENet also outperforms suboptimal experts’ performance unlike Behavioral cloning. Strong potentials in sparse environment with suboptimal experts are important factors for real-world robot learning.\nDirections for future work include learning the balance between imitation and RL, i.e. Free Energy and Expected Free Energy so that the agent can select the best approach to solve its confronting tasks by monitoring the value of Free Energy. It is also important to evaluate FENet in real-world robotics tasks to show that our method is effective in more realistic settings that truly appear in the real world." }, { "heading": "A APPENDIX", "text": "A.1 FENET ALGORITHM\nSee Algorithm 1.\nA.2 IMPLEMENTATION\nTo stabilize the learning process, we adopt burn-in, a technique to recover initial states of RNN’s hidden variables ht (Kapturowski et al., 2019). As shown in Algorithm 1, the agent calculates the Free Energy with mini batches sampled from the expert or agent experience replay buffer D, which means that ht is initialized randomly in every mini batch calculation. Since the Free Energy heavily depends on ht, it is crucial to estimate the accurate hidden states. We set a burn-in period when a portion of the replay sequence is used only for unrolling the networks to produce initial states. After the period, we update the networks only on the remaining part of the sequence.\nWe use PyTorch (Paszke et al., 2017) to write neural networks and run experiments using NVIDIA GeForce GTX 1080 Ti / RTX 2080 Ti / Tesla V100 GPU (1 GPU per experiment). The training time for our FENet implementation is about 24 hours on the Control Suite environment. As for the hyper parameters, we use the convolutional encoder and decoder networks from (Ha & Schmidhuber, 2018) and Recurrent State Space Model from (Hafner et al., 2019b) and implement all other functions as three dense layers of size 200 with ReLU activations (Nair & Hinton, 2010). We made a design choice to make the policy prior, the policy posterior, and the observation likelihood, the reward likelihood deterministic functions while the state prior and the state posterior are stochastic. We use the batch size B = 25 for ’imitation RL’ with FENet, and B = 50 for other types and baseline methods. We use the chunk length L = 50, the burn-in period 20. We use seed episodes S = 40, expert episodes N = 10000 trained with PlaNet (Hafner et al., 2019b), collect interval C = 100 and action exploration noise Normal(0, 0.3). 
We use the discount factor γ = 0.99 and the\nAlgorithm 1 Deep Free Energy Network (FENet)\nInput: Seed episodes S Collect interval C Batch size B Chunk length L Expert episodes N Target smoothing rate ρ Learning rate α State prior pθ(st|ht) State posterior qφ(st|ht, ot) Policy prior pθ(at|st, ht) Policy posterior qψ(at|st, ht) Likelihood pθ(ot|st, ht), pθ(rt−1|st, ht) Value function Vω(st) Target value function Vωtarg (st)\nInitialize expert dataset DE with N expert trajectories Initialize agent dataset DA with S random episodes Initialize neural network parameters θ, φ, ψ, ω randomly while not converged do\nfor update step c = 1..C do // Imitation Learning Draw expert data {(ot, at, rt, ot+1)k+Lt=k }Bi=1 ∼ DE Compute Free Energy FIL from equation 41 // Reinforcement Learning Draw agent data {(ot, at, rt, ot+1)k+Lt=k }Bi=1 ∼ DA Compute Free Energy FRL from equation 42 Compute V function’s Loss L from equation 43 // Update parameters θ ← θ − α∇θ(FIL + FRL) φ← φ− α∇φ(FIL + FRL) ψ ← ψ − α∇ψ(FIL + FRL) ω ← ω − α∇ωL ωtarg ← ρωtarg + (1− ρ)ω end for // Environment interaction o1 ← env.reset() for time step t = 1..T do\nInfer hidden states st ← qφ(st|ht, ot) Calculate actions at ← qψ(at|st, ht) Add exploration noise to actions rt, ot+1 ← env.step (at)\nend for DA ← DA ∪ {(ot, at, rt, ot+1)Tt=1}\nend while\ntarget smoothing rate ρ = 0.01. We use Adam (Kingma & Ba, 2014) with learning rates α = 10−3 and scale down gradient norms that exceed 1000. We scale the reward-related loss by 100, the policy-prior-related loss by 10. We clip KL loss between the hidden states below 3 free nats and clip KL loss between the policies below 0.6." } ]
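To make eqs. (41)–(43) concrete, the sketch below shows how the quantities already computed by the world model and policy heads would be combined into the imitation objective F_IL, the RL objective F_RL and the value loss L. It assumes the surrounding code has produced the relevant torch.distributions objects; the argument names, the use of the reward head's mean as the predicted reward, and the detached value target are our own illustrative choices.

```python
import torch
from torch.distributions import kl_divergence

def fenet_losses(obs, act,
                 state_post, state_prior,          # q_phi(s_t|h_t,o_t) and p_theta(s_t|h_t)
                 obs_like, act_prior,              # p_theta(o_t|s_t,h_t) and p_theta(a_t|s_t,h_t)
                 obs_like_next, rew_like_next,     # likelihood heads at the imagined step t+1
                 policy_post_next, act_prior_next, # q_psi(a_{t+1}|.) and p_theta(a_{t+1}|.)
                 state_post_next, state_pred_next, # q_phi(s_{t+1}|o_{t+1}) and q(s_{t+1}) of eq. (47)
                 v_next, v_target_next2, gamma=0.99):
    """Sketch of eqs. (41)-(43): all arguments are assumed to be torch tensors or
    torch.distributions objects already produced by the world model / policy heads."""
    # F_t (eq. 44): observation NLL + action NLL under the policy prior + state KL
    F_t = (-obs_like.log_prob(obs).sum(-1)
           - act_prior.log_prob(act).sum(-1)
           + kl_divergence(state_post, state_prior).sum(-1))

    # policy KL term shared by both expected free energies
    policy_kl = kl_divergence(policy_post_next, act_prior_next).sum(-1)

    # G_IL (eq. 45): observation entropy + policy KL
    G_il = obs_like_next.entropy().sum(-1) + policy_kl

    # G_RL (eq. 46): -(epistemic value) - expected reward + policy KL
    epistemic = kl_divergence(state_post_next, state_pred_next).sum(-1)
    G_rl = -epistemic - rew_like_next.mean.squeeze(-1) + policy_kl

    F_il = (F_t + G_il).mean()                                              # eq. (41)
    F_rl = (F_t + G_rl + gamma * v_target_next2).mean()                     # eq. (42)
    value_loss = ((G_rl + gamma * v_target_next2).detach()
                  - v_next).pow(2).mean()                                   # eq. (43)
    return F_il, F_rl, value_loss
```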
2020
null
SP:69855e0bec141e9d15eec5cc37022f313e6600b2
[ "By the first look, this work itself does not introduce any new architecture or novel algorithm. It takes what is considered as the popular choices in generating classifier saliency masks, and conducts quite extensive sets of experiments to dissect the components by their importance. The writing is pretty clear in narrative and the experimental findings are surprising and significant. " ]
Saliency maps that identify the most informative regions of an image for a classifier are valuable for model interpretability. A common approach to creating saliency maps involves generating input masks that mask out portions of an image to maximally deteriorate classification performance, or mask in an image to preserve classification performance. Many variants of this approach have been proposed in the literature, such as counterfactual generation and optimizing over a Gumbel-Softmax distribution. Using a general formulation of masking-based saliency methods, we conduct an extensive evaluation study of a number of recently proposed variants to understand which elements of these methods meaningfully improve performance. Surprisingly, we find that a well-tuned, relatively simple formulation of a masking-based saliency model outperforms many more complex approaches. We find that the most important ingredients for high quality saliency map generation are (1) using both masked-in and masked-out objectives and (2) training the classifier alongside the masking model. Strikingly, we show that a masking model can be trained with as few as 10 examples per class and still generate saliency maps with only a 0.7-point increase in localization error.
[ { "affiliations": [], "name": "SIMPLIFYING MASKING-BASED" } ]
[ { "authors": [ "Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian Goodfellow", "Moritz Hardt", "Been Kim" ], "title": "Sanity checks for saliency maps", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Chirag Agarwal", "Anh Nguyen" ], "title": "Explaining an image classifier’s decisions using generative models", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Rodrigo Benenson", "Stefan Popov", "Vittorio Ferrari" ], "title": "Large-scale interactive object segmentation with human annotators", "venue": null, "year": 2019 }, { "authors": [ "Ali Borji", "Ming-Ming Cheng", "Huaizu Jiang", "Jia Li" ], "title": "Salient object detection: A survey", "venue": "CoRR, abs/1411.5878,", "year": 2014 }, { "authors": [ "Chun-Hao Chang", "Elliot Creager", "Anna Goldenberg", "David Duvenaud" ], "title": "Explaining image classifiers by counterfactual generation", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Junsuk Choe", "Seong Joon Oh", "Seungho Lee", "Sanghyuk Chun", "Zeynep Akata", "Hyunjung Shim" ], "title": "Evaluating weakly supervised object localization methods right", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Piotr Dabkowski", "Yarin Gal" ], "title": "Real time image saliency for black box classifiers", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Lijie Fan", "Shengjia Zhao", "Stefano Ermon" ], "title": "Adversarial localization network. 2017", "venue": "URL http://lijiefan.me/files/ALN-nips17-LLD.pdf", "year": 2017 }, { "authors": [ "Ruth Fong", "Andrea Vedaldi" ], "title": "Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Ruth C Fong", "Andrea Vedaldi" ], "title": "Interpretable explanations of black boxes by meaningful perturbation", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Xin Hong", "Pengfei Xiong", "Renhe Ji", "Haoqiang Fan" ], "title": "Deep fusion network for image completion", "venue": "In ACM MM,", "year": 2019 }, { "authors": [ "Sara Hooker", "Dumitru Erhan", "Pieter-Jan Kindermans", "Been Kim" ], "title": "Evaluating feature importance estimates", "venue": null, "year": 2018 }, { "authors": [ "K.J. Hsu", "Y.Y. Lin", "Y.Y. 
Chuang" ], "title": "Weakly supervised salient object detection by learning a classifier-driven map generator", "venue": "IEEE Transactions on Image Processing,", "year": 2019 }, { "authors": [ "Kuang-Jui Hsu", "Yen-Yu Lin", "Yung-Yu Chuang" ], "title": "Weakly supervised saliency detection with A category-driven map generator", "venue": "In British Machine Vision Conference 2017,", "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Scott M. Lundberg", "Gabriel G. Erion", "Su-In Lee" ], "title": "Consistent individualized feature attribution for tree ensembles", "venue": null, "year": 2018 }, { "authors": [ "Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Vitali Petsiuk", "Abir Das", "Kate Saenko" ], "title": "Rise: Randomized input sampling for explanation of black-box models", "venue": "In BMVC,", "year": 2018 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should I trust you?”: Explaining the predictions of any classifier", "venue": null, "year": 2016 }, { "authors": [ "Wojciech Samek", "Thomas Wiegand", "Klaus-Robert Müller" ], "title": "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models", "venue": "CoRR, abs/1708.08296,", "year": 2017 }, { "authors": [ "Ramprasaath R. Selvaraju", "Abhishek Das", "Ramakrishna Vedantam", "Michael Cogswell", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization", "venue": null, "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv: 1312.6034,", "year": 2013 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda B. Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": "CoRR, abs/1706.03825,", "year": 2017 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott E. Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Mingxing Tan", "Quoc V. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "L. Wang", "H. Lu", "Y. Wang", "M. Feng", "D. Wang", "B. Yin", "X. 
Ruan" ], "title": "Learning to detect salient objects with image-level supervision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Wenguan Wang", "Qiuxia Lai", "Huazhu Fu", "Jianbing Shen", "Haibin Ling" ], "title": "Salient object detection in the deep learning era: An in-depth survey", "venue": "URL http: //arxiv.org/abs/1904.09146", "year": 1904 }, { "authors": [ "Mengjiao Yang", "Been Kim" ], "title": "BIM: towards quantitative evaluation of interpretability methods with ground truth", "venue": null, "year": 1907 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S Huang" ], "title": "Generative image inpainting with contextual attention", "venue": null, "year": 2018 }, { "authors": [ "Luisa M. Zintgraf", "Taco Cohen", "Tameem Adel", "Max Welling" ], "title": "Visualizing deep neural network decisions: Prediction difference analysis", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Konrad Zolna", "Krzysztof J. Geras", "Kyunghyun Cho" ], "title": "Classifier-agnostic saliency map extraction", "venue": "Computer Vision and Image Understanding,", "year": 2020 }, { "authors": [ "Following Zolna" ], "title": "L1 mask regularization if we are using masked-in objective and the masked-in image is correctly classified, or we have a masked-out objective and the masked-out image is incorrectly classified–otherwise, no L1 regularization is applied for that example. In cases where we have both masked-in and masked-out objective, we have separate λM,in and λM,out regularization coefficients", "venue": null, "year": 2020 }, { "authors": [ "Chang" ], "title": "Saliency maps using various infilling methods for counterfactual generation", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of CNNs (Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2016; Tan & Le, 2019) has prompted interest in improving understanding of how these models make their predictions. Particularly in applications such as medical diagnosis, having models explain their predictions can improve trust in them. The main line of work concerning model interpretability has focused on the creation of saliency maps–overlays to an input image that highlight regions most salient to the model in making its predictions. Among these, the most prominent are gradient-based methods (Simonyan et al., 2013; Sundararajan et al., 2017; Selvaraju et al., 2018) and masking-based methods (Fong & Vedaldi, 2017; Dabkowski & Gal, 2017; Fong & Vedaldi, 2018; Petsiuk et al., 2018; Chang et al., 2019; Zintgraf et al., 2017). In recent years, we have witnessed an explosion of research based on these two directions. With a variety of approaches being proposed, framed and evaluated in different ways, it has become difficult to assess and fairly evaluate their additive contributions.\nIn this work, we investigate the class of masking-based saliency methods, where we train a masking model to generate saliency maps based on an explicit optimization objective. Using a general formulation, we iteratively evaluate the extent to which recently proposed ideas in the literature improve performance. In addition to evaluating our models against the commonly used Weakly Supervised Object Localization (WSOL) metrics, the Saliency Metric (SM), and the more recently introduced Pixel Average Precision (PxAP; Choe et al., 2020), we also test our final models against a suite of “sanity checks” for saliency methods (Adebayo et al., 2018; Hooker et al., 2018).\nConcretely, we make four major contributions. (1) We find that incorporating both masked-in classification maximization and masked-out entropy maximization objectives leads to the best saliency maps, and continually training the classifier improves the quality of generated maps. (2) We find that the masking model requires only the top layers of the classifier to effectively generate saliency maps. (3) Our final model outperforms other masking-based methods on WSOL and PxAP metrics. (4) We find that a small number of examples—as few as ten per class—is sufficient to train a masker to within the ballpark of our best performing model." }, { "heading": "2 RELATED WORK", "text": "Interpretability of machine learning models has been an ongoing topic of research (Ribeiro et al., 2016; Doshi-Velez & Kim, 2017; Samek et al., 2017; Lundberg et al., 2018). In this work, we focus on interpretability methods that involve generating saliency maps for image classification models. An overwhelming majority of the methods for generating saliency maps for image classifiers can be assigned to two broad families: gradient-based methods and masking-based methods.\nGradient-based methods, such as using backpropagated gradients (Simonyan et al., 2013), Guided Backprop (Springenberg et al., 2015), Integrated Gradients (Sundararajan et al., 2017), GradCam (Selvaraju et al., 2018), SmoothGrad (Smilkov et al., 2017) and many more, directly use the backpropagated gradients through the classifier to the input to generate saliency maps.\nMasking-based methods modify input images to alter the classifier behavior and use the regions of modifications as the saliency map. 
Within this class of methods, one line of work focuses on optimizing over the masks directly: Fong & Vedaldi (2017) optimize over a perturbation mask for an image, Petsiuk et al. (2018) aggregates over randomly sampled masks, Fong & Vedaldi (2018) performs an extensive search for masks of a given size, while Chang et al. (2019) includes a counterfactual mask-infilling model to make the masking objective more challenging. The other line of work trains a separate masking model to produce saliency maps: Dabkowski & Gal (2017) trains a model that optimizes similar objectives to Fong & Vedaldi (2017), Zolna et al. (2020) use a continually trained pool of classifiers and an adversarial masker to generate model-agnostic saliency maps, while Fan et al. (2017) identifies super-pixels from the image and then trains the masker similarly in an adversarial manner.\nSalient Object Detection (Borji et al., 2014; Wang et al., 2019) is a related line of work that concerns identifying salient objects within an image as an end in itself, and not for the purpose of model interpretability. While it is not uncommon for these methods to incorporate a pretrained image classification model to extract learned visual features, they often also incorporate techniques for improving the quality of saliency maps that are orthogonal to model interpretability. Salient object detection methods that are trained on only image-level labels bear the closest similarity to saliency map generation methods for model interpretability. Hsu et al. (2017) and follow-up Hsu et al. (2019) train a masking model to confuse a binary image-classification model that predicts whether an image contains an object or is a ‘background’ image. Wang et al. (2017) apply a smooth pooling operation\nand a Foreground Inference Network (a masking model) while training an image classifier to generate saliency maps as a secondary output.\nEvaluation of saliency maps The wave of saliency map research has also ignited research on evaluation methods for these saliency maps as model explanations. Adebayo et al. (2018) and Hooker et al. (2018) propose sanity checks and benchmarks for the saliency maps. Choe et al. (2020) propose Pixel Average Precision (PxAP), a pixel-wise metric for scoring saliency maps that accounts for mask binarization thresholds, while Yang & Kim (2019) create a set of metrics as well as artificial datasets interleaving foreground and background objects for evaluating the saliency maps. These works have shown that a number of gradient-based methods fail the sanity checks or perform no better than simple edge detectors. Hence, we choose to focus on masking-based methods in this paper." }, { "heading": "3 MASKING-BASED SALIENCY MAP METHODS", "text": "We start by building a general formulation of masking-based saliency map methods. We take as given a trained image classifier F : x → y, that maps from image inputs x ∈ RH×W×C to class predictions ŷ ∈ [0, 1]K , evaluated against ground-truth y ∈ {1 · · ·K}. Our goal is to generate a mask m ∈ [0, 1]H×W for each image x such that the masked-in image x m or the masked-out image x (1−m) maximizes some objective based on output of a classifier given the modified image. For instance, we could attempt to mask out parts of the image to maximally deteriorate the classifier’s performance. This mask m then serves as a saliency map for the image x. 
Concretely, the per-image objective can be expressed as:\narg min m\nλoutLout ( F (x (1−m); θF ), y ) + λinLin ( F (x m; θF ), y ) +R(m),\nwhere Lout, Lin are the masked-out and masked-in objectives over the classifier output, λout, λin are hyperparameters controlling weighting of these two objectives, θF the classifier parameters, and R(m) a regularization term over the mask. The masked-in and masked-out losses, Lout and Lin, correspond to finding the smallest destroying region and smallest sufficient region as described in Dabkowski & Gal (2017). Candidates for Lout include negative classification cross-entropy and prediction entropy. For Lin, the obvious candidate is the classification cross-entropy of the masked-in image. We set λin = 0 or λout = 0 if we only have either a masked-in or masked-out objective.\nThe above formulation subsumes a number of masking-based methods, such as Fong & Vedaldi (2017); Dabkowski & Gal (2017); Zolna et al. (2020). We follow Dabkowski & Gal, amortize the optimization by training a neural network masker M : x→ m, and solve for:\narg min θM\nλoutLout ( F (x (1−M(x; θM )); θF ), y ) +λinLin ( F (x M(x; θM ); θF ), y ) +R(M(x; θM )),\nwhere M is the masking model and θM its parameters. In our formulation, we do not provide the masker with the ground-truth label, which differs from certain other masking-based saliency works (Dabkowski & Gal, 2017; Chang et al., 2019; Fong & Vedaldi, 2018). In practice, we often desire model explanations without the availability of ground-truth information, so we focus our investigation on methods that require only an image as input." }, { "heading": "3.1 MASKER ARCHITECTURE", "text": "We use a similar architecture to Dabkowski & Gal and Zolna et al.. The masker takes as input activations across different layers of the classifier, meaning it has access to the internal representation of the classifier for each image. Each layer of activations is fed through a convolutional layer and upsampled (with nearest neighbor interpolation) so they all share the same spatial resolution. All transformed layers are then concatenated and fed through another convolutional layer, upsampled, and put through a sigmoid operation to obtain a mask of the same resolution as the input image. In all our experiments, we use a ResNet-50 (He et al., 2016) as our classifier, and the masker has access to the outputs of the five major ResNet blocks. Figure 1B shows the architecture of our models.\nFollowing prior work (Fong & Vedaldi, 2017), we apply regularization on the generated masks to avoid trivial solutions such as masking the entire image. We apply L1 regularization to limit the size of masks and Total Variation (TV) to encourage smoothness. Details can be found in Appendix A.1." }, { "heading": "Model OM ↓ LE ↓ SM ↓ PxAP ↑", "text": "" }, { "heading": "Train-Validation Set", "text": "" }, { "heading": "Validation Set", "text": "" }, { "heading": "3.2 CONTINUAL TRAINING OF THE CLASSIFIER", "text": "Because neural networks are susceptible to adversarial perturbations (Goodfellow et al., 2015), masking models can learn to perturb an input to maximize the above objectives for a given fixed classifier without producing intuitive saliency maps. While directly regularizing the masks is one potential remedy, Zolna et al. (2020) propose to train the masker against a diverse set of classifiers. 
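Before describing how this classifier pool is maintained in practice, the combined masking objective introduced above can be made concrete with a short sketch. The following PyTorch-style code is an illustrative reconstruction under stated assumptions — the loss-weight values, tensor shapes, and the unconditional application of the L1 term (the paper applies it conditionally, see Appendix A.1) are placeholders, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F_nn


def total_variation(m):
    # Smoothness penalty over mask pixels (squared differences of neighbours).
    return ((m[:, :, :, :-1] - m[:, :, :, 1:]) ** 2).sum() + \
           ((m[:, :, :-1, :] - m[:, :, 1:, :]) ** 2).sum()


def masker_loss(classifier, masker, x, y,
                lam_in=0.5, lam_out=0.5, lam_m=1e-3, lam_tv=1e-2):
    """Combined masked-in / masked-out objective with mask regularization.
    `masker(x)` is assumed to return a mask in [0, 1] of shape (B, 1, H, W)."""
    m = masker(x)
    logits_in = classifier(x * m)            # masked-in image
    logits_out = classifier(x * (1.0 - m))   # masked-out image

    # Masked-in objective: maximize classification accuracy on the masked-in image.
    loss_in = F_nn.cross_entropy(logits_in, y)

    # Masked-out objective: maximize prediction entropy on the masked-out image.
    p_out = F_nn.softmax(logits_out, dim=1)
    entropy_out = -(p_out * torch.log(p_out + 1e-12)).sum(dim=1).mean()
    loss_out = -entropy_out

    # Mask regularization: L1 area penalty plus total variation (smoothness).
    reg = lam_m * m.mean() + lam_tv * total_variation(m)
    return lam_in * loss_in + lam_out * loss_out + reg
```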
In practice, they simulate this by continually training the classifier on masked images, retain a pool of past model checkpoints, and sample from the pool when training the masker.\nWe adopt their approach and distinguish between a masker trained against a fixed classifier (FIX) and against a pool of continually trained classifiers (CA, for Classifier-Agnostic). We highlight that saliency maps for FIX and CA address fundamentally different notions of saliency. Whereas a FIX approach seeks a saliency map that explains what regions are most salient to a given classifier, a CA approach tries to identify all possible salient regions for any hypothetical classifier (hence, classifier-agnostic). In other words, a CA approach may be inadequate for interpreting a specific classifier and is better suited for identifying salient regions for a class of image classification models." }, { "heading": "4 EXPERIMENTAL SETUP", "text": "We perform our experiments on the official ImageNet training and validation set (Deng et al., 2009) and use bounding boxes from the ILSVRC’14 localization task. Because we perform a large number of experiments with hyperparameter search to evaluate different model components, we construct a separate held-out validation set of 50,000 examples (50 per class) from the training set with bounding box data that we use as validation for the majority of our experiments (which we refer to as our “TrainValidation” set) and use the remainder of the training set for training. For each model configuration, we train the models 5 times on different random seeds and report the mean and standard error of the results. We reserve the official validation set for the final evaluation." }, { "heading": "4.1 EVALUATION METRICS", "text": "Weakly-supervised object localization task metrics (WSOL) is a common task for evaluating saliency maps. It involves generating bounding boxes for salient objects in images and scoring them against the ground-truth bounding boxes. To generate bounding boxes from our saliency maps, we binarize the saliency map based on the average mask pixel value and use the tightest bounding box around the largest connected component of our binarized saliency map. We follow the evaluation protocol in ILSVRC ’14 computing the official metric (OM), localization error (LE) and pixel-wise F1 score between the predicted and ground-truth bounding boxes.\nSaliency metric (SM) proposed by Dabkowski & Gal (2017) consists of generating a bounding box from the saliency map, upsampling the region of the image within the bounding box and then evaluating the classifier accuracy on the upsampled salient region. The metric is defined as s(a, p) = log(max(a, 0.05)) − log(p), where a is the size of the bounding box, and p is the probability the classifier assigns to the true class. This metric can be seen as measuring masked-in and upsampled classification accuracy with a penalty for the mask size. We use the same bounding boxes as described in WSOL for consistency.\nPixel Average Precision (PxAP) proposed by Choe et al. (2020) scores the pixel-wise masks against the ground-truth masks and computes the area under the precision-recall curve. This metric is computed over mask pixels rather than bounding boxes and removes the need to threshold and binarize the mask pixels. PxAP is computed over the OpenImages dataset (Benenson et al., 2019) rather than ImageNet because it requires pixel-level ground-truth masks." 
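As a reference for how the box-based metrics above can be computed, the sketch below binarizes a saliency map at its mean pixel value, takes the tightest box around the largest connected component, and evaluates SM from the box area and the classifier's probability of the true class on the upsampled crop. The helper names and the fallback behaviour for empty masks are assumptions.

```python
import numpy as np
from scipy import ndimage


def bbox_from_mask(mask):
    """Binarize at the mean pixel value and return the tightest box (y0, x0, y1, x1)
    around the largest connected component of the binarized saliency map."""
    binary = mask > mask.mean()
    labeled, num = ndimage.label(binary)
    if num == 0:
        return 0, 0, mask.shape[0], mask.shape[1]      # fall back to the full image
    sizes = np.bincount(labeled.ravel())[1:]           # component sizes, skip background
    largest = np.argmax(sizes) + 1
    ys, xs = np.where(labeled == largest)
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1


def saliency_metric(mask, true_class_prob_on_crop):
    """SM = log(max(a, 0.05)) - log(p), where a is the relative box area and p is the
    classifier probability of the true class on the upsampled salient crop."""
    y0, x0, y1, x1 = bbox_from_mask(mask)
    a = ((y1 - y0) * (x1 - x0)) / float(mask.shape[0] * mask.shape[1])
    return np.log(max(a, 0.05)) - np.log(true_class_prob_on_crop)
```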
}, { "heading": "5 EVALUATION OF SALIENCY MAP METHODS", "text": "To determine what factors and methods contribute to improved saliency maps, we perform a series of evaluation experiments in a cascading fashion. We isolate and vary one design choice at a time, and use the optimal configuration from one set of experiments in all subsequent experiments. Our baseline models consist of a masker trained with either a fixed classifier (FIX) or a pool of continually trained classifiers (CA). As WSOL is the most common task for evaluating saliency maps, we use LE as the metric for determining the ‘best’ model for model selection. We show our model scores across experiments in Table 3. Each horizon block represents a set of experiments varying one design choice. The top half of the table is evaluated on our Train-Validation split, while the bottom half is evaluated on the validation data from ILSVRC ’14.\nMasking Objectives (Rows a–f) We first consider varying the masker’s training objective, using only one objective at a time. We use the three candidate objectives described in Section 3: maximizing masked-in accuracy, minimizing masked-out accuracy and maximizing masked-out entropy. For a masked-out objective, we set λout = 1, λin = 0, and the opposite for masked-in objectives. For each configuration, we perform a random hyperparameter search over the L1 mask regularization coefficients λM and λTV as well as the learning rate and report results from the best configuration from the Train-Validation set. More details on hyperparameter choices can be found in Table 2.\nConsistent with Zolna et al. (2020), we find that training the classifier along with the masker improves the masker, with CA models generally outperforming FIX models, particularly for the WSOL metrics. However, the classification-maximization FIX model still performs comparably with its CA counterpart and in fact performs best overall when measured by SM given the similarity between the training objective and the second term of the SM metric. Among the CA models, entropymaximization and classification-minimization perform the best, while the classification-maximization\nobjective performs worst. On the other hand, both mask-out objectives perform extremely poorly for a fixed classifier. We show how different masking objectives affect saliency map generation in Figure 2.\nEuropean fire salamander\nInput\nAppenzeller\nInput\nFIX MaxClass (I) FIX MinClass (O) FIX MaxEnt (O) CA MaxClass (I) CA MinClass (O) CA MaxEnt (O)\nFIX MinClass (O) +MaxClass (I) FIX MaxEnt (O) +MaxClass (I) CA MinClass (O) +MaxClass (I) CA MaxEnt (O) +MaxClass (I)\nCombining Masking Objectives (Rows g–j) Next, we combine both masked-in and masked-out objective during training, setting λout = λin = 0.5. Among the dual-objective models, entropymaximization still outperforms classification-minimization as a masked-out objective. Combining both masked-in classification-maximization and masked-out entropy-maximization performs best for both FIX and CA models, consistent with Dabkowski & Gal (2017). From our hyperparameter search, we also find that separately tuning λM,in and λM,out is highly beneficial (see Table 2). We use the classification-maximization and entropy-maximization dual objectives for both FIX (Row g) and CA (Row i) models in subsequent experiments.\nVarying Observed Classifier Layers (Rows k–r) We now vary which hidden layers of the classifier the masker has access to. 
As described in Section 3.1, the masking model has access to hidden activations from five different layers of a ResNet-50 classifier. To identify the contribution of information from each layer to the masking model, we train completely new masking models with access to only a subset of the classifier layers. We number the layers from 1 to 5, with 1 being the earliest layer with the highest resolution (56× 56) and 5 being the latest (7× 7). We show a relevant subset of the results from varying the observed classifier layers in Table 3. The full results can be found in the Table 4 and we show examples of the generated masks in Figure 3.\nMasking models with access to activations of later layers starkly outperform those using activations from earlier layers. Whereas the Layer[3], Layer[4] and Layer[5] models are still effective, the Layer[1] and Layer[2] models tend to perform poorly. Similarly, we find that the best cascading combination of layers is layers 4 and 5 CA models, and 3–5 for FIX models (Rows n, r), slightly but consistently outperforming the above models with all layers available to the masker. This suggests that most of the information relevant for generating saliency maps is likely contained within the later layers. For simplicity, we use only classifier layers 4 and 5 for subsequent experiments.\nCounterfactual Infilling and Binary Masks (Rows s–z) Chang et al. (2019) proposed generating saliency maps by learning a Bernoulli distribution per masking pixel and additionally incorporating a counterfactual infiller. Agarwal & Nguyen (2019) similarly uses an infiller when producing saliency maps. First, we consider applying counterfactual infilling to the masked images before feeding them to the classifier. The modified inputs are Infill(X (1 −m), (1 −m)) and Infill(X m,m) for" }, { "heading": "Radiator", "text": "masked-out and masked-in infilling respectively, where Infill is the infilling function that takes as input the masked input as well as the mask. We consider three infillers: the Contextual Attention GAN (Yu et al., 2018) as used in Chang et al.1, DFNet (Hong et al., 2019), and a Gaussian blur infiller as used in Fong & Vedaldi (2018). Both neural infillers are pretrained and frozen.\nFor each infilling model, we also train a model variant that outputs a discrete binary mask by means of a Gumbel-Softmax layer (Jang et al., 2017; Maddison et al., 2017). We experiment with both soft and hard (Straight-Through) Gumbel estimators, and temperatures of {0.01, 0.05, 0.1, 0.5}. We show a relevant subset of the results in the fourth and fifth blocks of Table 3, and examples in Figure 6 and Figure 7. We do not find major improvements from incorporating infilling or discrete masking based on WSOL metrics, although we do find improvements from using the DFN infiller for SM. Particularly for CA, because the classifier is continually trained to classify masked images, it is able to learn to both classify unnaturally masked images as well as to perform classification based on masked-out evidence. As a result, the benefits of incorporating the infiller may be diminished." }, { "heading": "5.1 EVALUATION ON VALIDATION SET", "text": "Based on the above, we identify a simple recipe for a good saliency map generation model: (1) use both masked-in classification maximization and masked-out entropy maximization objectives, (2) use only the later layers of the classifier as input to the masker, and (3) continually train the classifier. 
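A minimal sketch of a masker following this recipe — reading only the later classifier blocks — is given below. The channel widths (1024 and 2048, matching the last two ResNet-50 blocks), the intermediate width, and the output resolution are illustrative assumptions rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Masker(nn.Module):
    """Lightweight masker over the later classifier blocks: per-layer 1x1 conv,
    nearest-neighbour upsampling to a common resolution, concatenation, a final
    conv, upsampling to the input size, and a sigmoid to produce a [0, 1] mask."""

    def __init__(self, in_channels=(1024, 2048), mid_channels=64):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in in_channels])
        self.head = nn.Conv2d(mid_channels * len(in_channels), 1,
                              kernel_size=3, padding=1)

    def forward(self, feats, out_size=(224, 224)):
        # feats: list of hidden activations from the observed classifier layers,
        # ordered from earlier (higher resolution) to later (lower resolution).
        common = feats[0].shape[-2:]
        merged = [F.interpolate(conv(f), size=common, mode="nearest")
                  for conv, f in zip(self.reduce, feats)]
        m = self.head(torch.cat(merged, dim=1))
        m = F.interpolate(m, size=out_size, mode="nearest")
        return torch.sigmoid(m)
```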
To validate the effectiveness of this simple setup, we train a new pair of FIX and CA models based on this configuration on the full training set and evaluate on the actual ILSVRC ’14 validation set. We compare the results to other models in the literature in the bottom block of Table 3. Consistent with above, the CA model outperforms the FIX model. It also outperforms other saliency map extraction methods on WSOL metrics and PxAP. We highlight that some models we compare to (Rows E, G) are provided with the ground-truth target class, whereas our models are not–this may explain the underperformance on certain metrics such as SM, which is partially based on classification accuracy." }, { "heading": "5.2 SANITY CHECKS", "text": "Adebayo et al. (2018) and Hooker et al. (2018) propose “sanity checks” to verify whether saliency maps actually reflect what information classifiers use to make predictions and show that many proposed interpretability methods fail these simple tests. We apply these tests to our saliency map models to verify their efficacy. On the left of Figure 4, we show the RemOve-and-Retrain (ROaR) test proposed by Hooker et al., where we remove the top t% of pixels from training images based on our generated saliency maps and use them to train entirely new classifiers. If our saliency maps truly identify salient portions of the image, we should see large drops in classifier performance as t increases. Both FIX and CA methods pass this test, with classifier accuracy falling precipitously as we mask out more pixels. On the right of Figure 4, we perform the the Model Parameter Randomization Test (MPRT) proposed by Adebayo et al.. We randomize parameters of successive layers of the\n1The publicly released CA-GAN is only trained on rectangular masks, but Chang et al. nevertheless found positive results from applying it, so we follow their practice. DFNet is trained on irregularly shaped masks.\nclassifier, starting from upper-most logits layer to the lowest convolutional layers, and generate saliency maps using the partially randomized classifiers. We then compute the similarity of the saliency maps generated from using the partially randomized classifier, and those using the original classifier. Our saliency maps become less similar as more layers are randomized, passing the test. The results for the Data Randomization Test (DRT) can be found in Table 5." }, { "heading": "6 FEW-SHOT EXPERIMENTS", "text": "Given the relative simplicity of our best-performing saliency map models and the fact that the masker uses only the top layers of activations from the classifier, we hypothesize that learning to generate saliency maps given strong classifier is a relatively simple process.\nTo test this hypothesis, we run a set of experiments severely limiting the number of training steps and unique examples that the masker is trained on. The ImageNet dataset consists of 1,000 object classes, with up to 1,250 examples per class. We run a set of experiments restricting both the number of unique classes seen as well as the number of examples seen per class while training the masker. We also limit the number of training steps be equivalent to one epoch through the full training set. Given the randomness associated with subsampling the examples, we randomly subsample classes and/or examples 5 times for each configuration and compute the median score over the 5 runs. We report results on the actual validation set for ILSVRC ’14 (LE) and test set for OpenImages (PxAP). 
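The subsampling protocol used for these few-shot runs can be summarised by the following sketch; the data structure and function names are placeholders, and in the experiments each configuration is drawn five times with the median score reported.

```python
import random
from collections import defaultdict


def subsample_train_set(samples, num_classes, per_class, seed=0):
    """Restrict the masker's training data to `num_classes` random classes and
    `per_class` random examples per class. `samples` is assumed to be a list of
    (image_path, class_id) pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, cls in samples:
        by_class[cls].append((path, cls))
    chosen_classes = rng.sample(sorted(by_class), k=num_classes)
    subset = []
    for cls in chosen_classes:
        k = min(per_class, len(by_class[cls]))
        subset.extend(rng.sample(by_class[cls], k=k))
    return subset


# e.g. 10 examples for each of the 1,000 classes:
# subset = subsample_train_set(samples, num_classes=1000, per_class=10, seed=s)
```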
Examples of saliency maps for these models can be found in Figure 9.\nWe show the results for our CA model in Figure 5 and for the FIX model in Figure 8. Strikingly, we find that very few examples are actually required to train a working saliency map model. In particular, training on just 10 examples per class produces a model that gets only 0.7 LE more than using all of the training data and only 2.7 more than the fully trained model.\nOn the other hand, we find that the diversity of examples across classes is a bigger contributor to performance than the number of examples per class. For instance, training on 10 examples across all 1,000 classes gets an LE of 38.5, which is lower than training on 125 examples across only 100 classes. A similar pattern can be observed in the PxAP results.\nAbove all, these few-shot results indicate that training an effective saliency map model can be significantly simpler and more economical than previously thought. Saliency methods that require training a separate model such as Dabkowski & Gal (2017) and Zolna et al. (2020) are cheap to run at inference, but require an expensive training procedure, compared to gradient-based saliency methods or methods involving a per-example optimization. However, if training a masking model can be a lightweight procedure as we have demonstrated, then using masking models to generate saliency maps can now be a cheap and effective model interpretability technique." }, { "heading": "7 DISCUSSION AND CONCLUSIONS", "text": "In this work, we systematically evaluated the additive contribution of many proposed improvements to masking-based saliency map methods. Among the methods we tested, we identified that only the following factors meaningfully contributed to improved saliency map generation: (1) using both masked-in and masked-out objectives, (2) using the later layers of the classifier as input and (3) continually training the classifier. This simple setup outperforms other methods on WSOL metrics and PxAP, and passes a suite of saliency map sanity checks.\nStrikingly, we also found that very few examples are actually required to train a saliency map model, and training with just 10 examples per class can achieve close to our best performing model. In addition, our masker model architecture is extremely simple: a two-layer ConvNet. Together, this suggests that learning masking-based saliency map extraction may be simpler than expected when given access to the internal representations of a classifier. This unexpected observation should make us reconsider both the methods for extracting saliency maps and the metrics used to evaluate them." }, { "heading": "A MODEL DETAILS", "text": "" }, { "heading": "A.1 MASK REGULARIZATION", "text": "Without regularization, the masking model may learn to simply mask in or mask out the entire image, depending on the masking objective. We consider two forms of regularization. The first is L1 regularization over the mask pixels m, which directly encourages the masks to be small in aggregate. The second is Total Variation (TV) over the mask, which encourages smoothness:\nTV(m) = ∑ i,j (mi,j −mi,j+1)2 + ∑ i,j (mi,j −mi+1,j)2,\nwhere i, j are pixel indices. TV regularization was found to be crucial by Fong & Vedaldi (2017) and Dabkowski & Gal (2017) to avoid adversarial artifacts. Hence, we have:\nR(m) = λM ‖m‖1 + λTVTV(m). (1)\nFollowing Zolna et al. 
(2020), we only apply L1 mask regularization if we are using masked-in objective and the masked-in image is correctly classified, or we have a masked-out objective and the masked-out image is incorrectly classified–otherwise, no L1 regularization is applied for that example. In cases where we have both masked-in and masked-out objective, we have separate λM,in and λM,out regularization coefficients." }, { "heading": "A.2 CONTINUAL TRAINING OF THE CLASSIFIER", "text": "We largely follow the setup for training classifier-agnostic (CA) models from Zolna et al. (2020). Notably, when training on the masker objectives, we update θM but note θF , to prevent the classifier from being optimized on the masker’s objective. We maintain a pool of 30 different classifier weights in our classifier pool. We point the reader to Zolna et al. (2020) for more details." }, { "heading": "A.3 HYPERPARAMETERS", "text": "We show in Table 2 the space of hyperparameter search and hyperparameters for the best results, as show in the Table 1 in the main paper. We performed a random search over λM,out, λM,in, and λTV.\nAside from the hyperparameters shown in Table 2, we used a learning rate of 0.001 and a batch size of 72. We trained for 3.4 epochs (17 epochs on 20% of data) for all Train-Validation experiments and for 12 epochs for the Validation set experiments. Likewise, we use a learning rate decay of 5 for Train-Val experiments and 20 for Validation set experiments. For dual objective models, we used λout = λin − 0.5. We use the Adam optimize with 0.9 momentum and 1e-4 weight decay." }, { "heading": "Model Hyperparameters", "text": "" }, { "heading": "Validation Set", "text": "" }, { "heading": "B SUPPLEMENTARY RESULTS", "text": "" }, { "heading": "B.1", "text": "B.2 VARYING OBSERVED CLASSIFIER LAYERS\nWe show the full set of per-layer and layer-combination results in Table 4." }, { "heading": "B.3 COUNTERFACTUAL INFILLING AND BINARY MASKS", "text": "We show examples of saliency maps generated with counterfactual infillers in Figure 6 and incorporation of Gumbel-Softmax to generate masks with binary pixel values in Figure 7." }, { "heading": "B.4 SANITY CHECKS", "text": "We show in Table 5 the results for the Data Randomization Test (DRT) proposed by Adebayo et al. (2018). Here, we train a new classifier on the same training data but with labels shuffled across all images. We find that the similarity of saliency maps generated given a regularly trained classifier compared to those given a classifier trained on shuffled labels is low, indicating that the saliency maps reflect information learned from a well-formed image classification task." }, { "heading": "B.5 FEW-SHOT EXPERIMENTS", "text": "We show in Figure 8 the results for few-shot experiments using the FIX model configuration. We similarly find that very few examples are needed to train a good saliency map model." }, { "heading": "Model Rank Correl(Abs) Rank Correl(No Abs) HOGS Similarity SSIM", "text": "" }, { "heading": "Train-Validation Set", "text": "" }, { "heading": "Model OM ↓ LE ↓ F1 ↑ SM ↓ PxAP ↑ Mask", "text": "Red-breasted merganser\nInput\nCardigan Welsh corgi\nInput\nChainlink fence\nInput\nGordon setter\nInput\nFI X\n1000 classes 1 examples 100 classes 10 examples 1000 classes 10 examples 100 classes 125 examples 1000 classes 125 examples 100 classes 1250 examples 1000 classes 1250 examples Full Training\nCA FI\nX CA\nFI X\nCA FI\nX CA\nFigure 9: Examples of saliency maps computing from models trained with less data. 
Columns correspond to the models shown in Figure 5 and Figure 8. Even models trained with very few examples per class produce saliency maps similar to the fully trained model." } ]
Columns correspond to the models shown in Figure 5 and Figure 8. Even models trained with very few examples per class produce saliency maps similar to the fully trained model." } ]
2020
null
SP:e4e5b4e2bee43c920ed719dc331a370129845268
[ "The authors propose a model to improve the output distribution of neural nets in image classification problems. Their model is a post hoc procedure and is based on the tree structure of WordNet. The model revises the classifier output based on the distance of the labels in the tree. Intuitively, their solution is to pick the candidate label that is located in the region of the tree with a higher accumulated probability mass value. They also experimentally show that the previous evaluation metrics are inconclusive. " ]
There has been increasing interest in building deep hierarchy-aware classifiers that aim to quantify and reduce the severity of mistakes, and not just reduce the number of errors. The idea is to exploit the label hierarchy (e.g., the WordNet ontology) and consider graph distances as a proxy for mistake severity. Surprisingly, on examining mistake-severity distributions of the top-1 prediction, we find that current state-of-the-art hierarchy-aware deep classifiers do not always show practical improvement over the standard cross-entropy baseline in making better mistakes. The reason for the reduction in average mistake-severity can be attributed to the increase in low-severity mistakes, which may also explain the noticeable drop in their accuracy. To this end, we use the classical Conditional Risk Minimization (CRM) framework for hierarchy aware classification. Given a cost matrix and a reliable estimate of likelihoods (obtained from a trained network), CRM simply amends mistakes at inference time; it needs no extra hyperparameters, and requires adding just a few lines of code to the standard cross-entropy baseline. It significantly outperforms the state-of-the-art and consistently obtains large reductions in the average hierarchical distance of top-k predictions across datasets, with very little loss in accuracy. CRM, because of its simplicity, can be used with any off-the-shelf trained model that provides reliable likelihood estimates.
[ { "affiliations": [], "name": "DEEP NETWORKS" }, { "affiliations": [], "name": "Shyamgopal Karthik" }, { "affiliations": [], "name": "Ameya Prabhu" }, { "affiliations": [], "name": "Puneet K. Dokania" } ]
[ { "authors": [ "Naoki Abe", "Bianca Zadrozny", "John Langford" ], "title": "An iterative method for multi-class cost-sensitive learning", "venue": "In KDD,", "year": 2004 }, { "authors": [ "Zeynep Akata", "Scott Reed", "Daniel Walter", "Honglak Lee", "Bernt Schiele" ], "title": "Evaluation of output embeddings for fine-grained image classification", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Björn Barz", "Joachim Denzler" ], "title": "Hierarchy-based image embeddings for semantic image retrieval", "venue": "In WACV,", "year": 2019 }, { "authors": [ "Luca Bertinetto", "Romain Mueller", "Konstantinos Tertikas", "Sina Samangooei", "Nicholas A Lord" ], "title": "Making better mistakes: Leveraging class hierarchies with deep networks", "venue": null, "year": 2020 }, { "authors": [ "Alsallakh Bilal", "Amin Jourabloo", "Mao Ye", "Xiaoming Liu", "Liu Ren" ], "title": "Do convolutional neural networks learn class hierarchy", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2017 }, { "authors": [ "Clemens-Alexander Brust", "Joachim Denzler" ], "title": "Integrating domain knowledge: using hierarchies to improve deep classifiers", "venue": "In ACPR,", "year": 2019 }, { "authors": [ "Jia Deng", "Alexander C Berg", "Kai Li", "Li Fei-Fei" ], "title": "What does classifying more than 10,000 image categories tell us", "venue": "In ECCV,", "year": 2010 }, { "authors": [ "Pedro Domingos" ], "title": "Metacost: A general method for making classifiers cost-sensitive", "venue": "In KDD,", "year": 1999 }, { "authors": [ "Charles Elkan" ], "title": "The foundations of cost-sensitive learning", "venue": "In IJCAI,", "year": 2001 }, { "authors": [ "Andrea Frome", "Greg S Corrado", "Jon Shlens", "Samy Bengio", "Jeff Dean", "Marc’Aurelio Ranzato", "Tomas Mikolov" ], "title": "Devise: A deep visual-semantic embedding model", "venue": "NeurIPS,", "year": 2013 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Te-Kang Jan", "Da-Wei Wang", "Chi-Hung Lin", "Hsuan-Tien Lin" ], "title": "A simple methodology for soft cost-sensitive classification", "venue": "In KDD,", "year": 2012 }, { "authors": [ "Susan Lomax", "Sunil Vadera" ], "title": "A survey of cost-sensitive decision tree induction algorithms", "venue": "ACM Computing Surveys (CSUR),", "year": 2013 }, { "authors": [ "Jishnu Mukhoti", "Viveka Kulharia", "Amartya Sanyal", "Stuart Golodetz", "Philip H S Torr", "Puneet K Dokania" ], "title": "Calibrating deep neural networks using focal loss", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory F Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In AAAI,", "year": 2015 }, { "authors": [ "Alexandru Niculescu-Mizil", "Rich Caruana" ], "title": "Predicting good probabilities with supervised learning", "venue": "In ICML,", "year": 2005 }, { "authors": [ "John Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "Advances in large margin classifiers,", "year": 1999 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolo9000: better, faster, stronger", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael 
Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Seonguk Seo", "Paul Hongsuck Seo", "Bohyung Han" ], "title": "Learning for single-shot confidence calibration in deep neural networks through stochastic inferences", "venue": null, "year": 2019 }, { "authors": [ "Carlos N Silla", "Alex A Freitas" ], "title": "A survey of hierarchical classification across different application domains", "venue": "Data Mining and Knowledge Discovery,", "year": 2011 }, { "authors": [ "Han-Hsing Tu", "Hsuan-Tien Lin" ], "title": "One-sided support vector regression for multiclass cost-sensitive classification", "venue": "In ICML,", "year": 2010 }, { "authors": [ "Nakul Verma", "Dhruv Mahajan", "Sundararajan Sellamanickam", "Vinod Nair" ], "title": "Learning hierarchical similarity metrics", "venue": "In CVPR,", "year": 2012 }, { "authors": [ "Hui Wu", "Michele Merler", "Rosario Uceda-Sosa", "John R Smith" ], "title": "Learning to make better mistakes: Semantics-aware visual food recognition", "venue": "In ACM MM,", "year": 2016 }, { "authors": [ "Yongqin Xian", "Zeynep Akata", "Gaurav Sharma", "Quynh Nguyen", "Matthias Hein", "Bernt Schiele" ], "title": "Latent embeddings for zero-shot classification", "venue": null, "year": 2016 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Learning and making decisions when costs and probabilities are both unknown", "venue": "In KDD,", "year": 2001 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers", "venue": "In ICML,", "year": 2001 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Transforming classifier scores into accurate multiclass probability estimates", "venue": "In KDD,", "year": 2002 }, { "authors": [ "Bin Zhao", "Fei Li", "Eric P Xing" ], "title": "Large-scale category structure aware image categorization", "venue": "In NeurIPS,", "year": 2011 }, { "authors": [ "Zhi-Hua Zhou", "Xu-Ying Liu" ], "title": "On multi-class cost-sensitive learning", "venue": "Computational Intelligence,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "The conventional performance measure of accuracy for image classification treats all classes other than ground truth as equally wrong. However, some mistakes may have a much higher impact than others in real-world applications. An intuitive example being an autonomous vehicle mistaking a car for a bus is a better mistake than mistaking a car for a lamppost. Consequently, it is essential to integrate the notion of mistake severity into classifiers and one convenient way to do so is to use a taxonomic hierarchy tree of class labels, where severity is defined by a distance on the graph (e.g., height of the Lowest Common Ancestor) between the ground truth and the predicted label (Deng et al., 2010; Zhao et al., 2011). This is similar to the problem of providing a good ranking of classes in a retrieval setting. Consider the case of an autonomous vehicle ranking classes for a thin, white, narrow band (a pole, in reality). A top-3 prediction of {pole, lamppost, tree} would be a better prediction than {pole, person, building}. Notice that the top-k class predictions would have at least k − 1 incorrect predictions here, and the aim is to reduce the severity of these mistakes, measured by the average hierarchical distance of each of the top k predictions from the ground truth. Silla & Freitas (2011) survey classical methods leveraging class hierarchy when designing classifiers across various application domains and illustrate clear advantages over the flat hierarchy classification, especially when the labels have a well-defined hierarchy.\nThere has been growing interest in the problem of deep hierarchy-aware image classification (Barz & Denzler, 2019; Bertinetto et al., 2020). These approaches seek to leverage the class hierarchy\n∗shyamgopal.karthik@research.iiit.ac.in\ninherent in the large scale datasets (e.g., the ImageNet dataset is derived from the WordNet semantic ontology). Hierarchy is incorporated using either label embedding methods, hierarchical loss functions, or hierarchical architectures. We empirically found that these models indeed improve the ranking of the top-k predicted classes – ensuring that the top alternative classes are closer in the class hierarchy. However, this improvement is observed only for k > 1.\nWhile inspecting closely the top-1 predictions of these models, we observe that instead of improving the mistake severity, they simply introduce additional low-severity mistakes which in turn favours the mistake-severity metric proposed in (Bertinetto et al., 2020). This metric involves division by the number of misclassified samples, therefore, in many situations (discussed in the paper), it can prefer a model making additional low-severity mistakes over the one that does not make such mistakes. This is at odds with the intuitive notion of making better mistakes. These additional low-severity mistakes can also explain the significant drop in their top-1 accuracy compared to the vanilla crossentropy model. We also find these models to be highly miscalibrated which further limits their practical usability.\nIn this work we explore a different direction for hierarchy-aware classification where we amend mistake severity at test time by making post-hoc corrections over the class likelihoods (e.g., softmax in the case of deep neural networks). Given a label hierarchy, we perform such amendments to the likelihood by applying the very well-known and classical approach called Conditional Risk Minimization (CRM). 
We found that CRM outperforms state-of-the-art deep hierarchy-aware classifiers by large margins at ranking classes with little loss in the classification accuracy. As opposed to other recent approaches, CRM does not hurt the calibration of a model as the cross-entropy likelihoods can still be used for the same. CRM is simple, requires addition of just a few lines of code to the standard cross-entropy model, does not require retraining of a network, and contains no hyperparameters whatsoever.\nWe would like to emphasize that we do not claim any algorithmic novelty as CRM has been well explored in the literature (Duda & Hart, 1973, Ch. 2). Almost a decade ago, Deng et al. (2010) had proposed a very similar solution using Support Vector Machine (SVM) classifier applied on handcrafted features. However, this did not result in practically useful performance because of the lack of modern machine learning tools at that time. We intend to bring this old, simple, and extremely effective approach back into the attention before we delve deeper into the sophisticated ones requiring expensive retraining of large neural networks and designing complex loss functions. Overall, our investigation into the hierarchy-aware classification makes the following contributions:\n• We highlight a shortcoming in one of the metrics proposed to evaluate hierarchy-aware classification and show that it can easily be fooled and give the wrong impression of making better mistakes.\n• We revisit an old post-hoc correction technique (CRM) which significantly outperforms prior art when the ranking of the predictions made by the model are considered.\n• We also investigate the reliability of prior art in terms of calibration and show that these methods are severely miscalibrated, limiting their practical usefulness." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 COST-SENSITIVE CLASSIFICATION", "text": "Cost-sensitive classification assigns varying costs to different types of misclassification errors. The work by Abe et al. (2004) groups cost-sensitive classifiers into three main categories. The first category specifically extends one particular classification model to be cost-sensitive, such as support vector machines (Tu & Lin, 2010) or decision trees (Lomax & Vadera, 2013). The second category makes the training procedure cost-sensitive, which is typically achieved by assigning the training examples of different classes with different weights (rescaling) (Zhou & Liu, 2010) or by changing the proportions of each class while training using sampling (rebalancing) (Elkan, 2001). The third category makes the prediction procedure cost-sensitive (Domingos, 1999; Zadrozny & Elkan, 2001a). Such direct cost-sensitive decision-making is the most generic: it considers the underlying classifier as a black box and extends to any number of classes and arbitrary cost matrices. Our work comes under the third category of post-hoc amendment. We study cost-sensitive classification in\na large scale setting (e.g., ImageNet) and explore the use of a taxonomic hierarchy to obtain the misclassification costs." }, { "heading": "2.2 HIERARCHY AWARE CLASSIFICATION", "text": "There is a rich literature around exploiting hierarchies to improve the task of image classification. Embedding-based methods define each class as a soft embedding vector, instead of the typical onehot. DeViSE (Frome et al., 2013) learn a transformation over image features to maximize the cosine similarity with their respective word2vec label embeddings. 
The transformation is learned using ranking loss and places the image embeddings in a semantically meaningful space. Akata et al. (2015); Xian et al. (2016) explore variations of text embeddings, and ranking loss frameworks. Barz & Denzler (2019) project classes on a hypersphere, such that the correlation of class embeddings equals the semantic similarity of the classes. The semantic similarity is derived from the height of the lowest common ancestor (LCA) in a given hierarchy tree.\nAnother line of work directly alters the loss functions or the algorithms/architectures. Zhao et al. (2011) propose a weighted (hierarchy-aware) multi-class logistic regression formulation. Verma et al. (2012) optimize a context-sensitive loss to learn a separate distance metric for each node in the class taxonomy tree. Wu et al. (2016) combine losses at different hierarchies of the tree by learning separate, fully connected layers for each level post a shared feature space.Bilal et al. (2017) add branches at different depths of AlexNet architecture to fuse losses at different levels of the hierarchy. Brust & Denzler (2019) use conditional probability chains to derive a novel label encoding and a corresponding loss function.\nMost deep learning-based methods overlook the severity of mistakes, and the evaluation revolves around counting the top-k errors. Bertinetto et al. (2020) has revived the interest in this direction by jointly analyzing the top-k accuracies with the severity of errors. They propose two modifications to cross-entropy to better capture the hierarchy: one based on label embeddings (Soft-labels) and the other, which factors the cross-entropy loss into the individual terms for each of the edges in the hierarchy tree and assigns different weights to them (Hierarchical cross-entropy or HXE).\nOur method uses models trained with vanilla cross-entropy loss and alters the decision rule to pick the class that minimizes the conditional risk where the condition is being imposed using the known class-hierarchy. On similar lines, Deng et al. (2010) study the effect of minimizing conditional risk on the mean hierarchical cost. They leverage the ImageNet hierarchy for cost and compute posteriors by fitting a sigmoid function to the SVM’s output or taking the percent of neighbours from a class for Nearest Neighbour classification. Our work investigates the relevance of CRM in the deep learning era and highlights the importance of looking beyond mean hierarchical costs and jointly analyzing the role of accuracy and calibration." }, { "heading": "2.3 CALIBRATION OF DEEP NEURAL NETWORKS", "text": "Networks are said to be well-calibrated if their predicted probability estimates are representative of the true correctness likelihood. Calibrated confidence estimates are important for model interpretability and its use in downstream applications. Platt scaling (Platt et al., 1999), Histogram binning (Zadrozny & Elkan, 2001b) and Isotonic regression (Zadrozny & Elkan, 2002) are three common calibration methods. Although originally proposed for the SVM classifier, their variations are used in improving the calibration of neural networks (Guo et al., 2017). 
Calibrated probability estimates are particularly important when cost-sensitive decisions are to be made (Zadrozny & Elkan, 2001b) and are often measured using Expected Calibration Error (ECE) and Maximum Calibration Error (MCE) (Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015; Mukhoti et al., 2020).\nWe desire models with high accuracy that have low calibration error and make less severe mistakes. However, there is often a compromise. Studies in cost-sensitive classification (Jan et al., 2012) reveal a trade-off between costs and error rates. Reliability literature aims to obtain better calibrated deep networks while retaining top-k accuracy (Seo et al., 2019). We further observe that methods like Soft-labels or Hierarchical cross-entropy successfully minimize the average top-k hierarchical cost, but result in poorly calibrated networks. In contrast, the proposed framework retains top-k accuracy and good calibration, while significantly reducing the hierarchical cost." }, { "heading": "3 APPROACH", "text": "The K-class classification problem comes with a training set S = {(xi,yi)}Ni=1, where label yi ∈ Y = {1, 2, ...,K}. The classifier is a deep neural network fθ : X → p(Y) parametrized by θ which maps the input samples to a probability distribution over the label space Y . The p(y|x) is typically derived using a softmax function on the logits obtained for an input x. Given p(y|x), the network minimizes cross-entropy with the ground truth class over samples from the training set, and uses SGD to optimize θ, forming the standard hierarchy-agnostic cross-entropy baseline. The decision rule is naturally given by argmax\nk p(y = k|x).\nThe classical CRM framework (Duda & Hart, 1973) can be adapted to image classification by taking the trained model with a given θ and incorporating the hierarchy information at deployment time. A symmetric class-relationship matrix C is created using the given hierarchy tree (which can either be drawn from the WordNet ontology or an application specific taxonomy), where Ci,j is the height of the lowest common ancestor LCA(yi,yj) between classes i and j. The height of a node is defined as the number of edges between the given node and the furthest leaf. Ci,j is zero when i = j and is bounded by the maximum height of the hierarchy tree.\nGiven an input x, the likelihood p(y|x) is obtained by passing the sample through the network fθ(x). The only modification we make is in the decision rule, which now selects the class that minimizes the conditional risk R(y = k|x), given by:\nargmin k R(y = k|x) = argmin k K∑ j=1 Ck,j · p(y = j|x) (1)\nFor the ease of the reader, we illustrate a four-class example in Figure 1a, comparing predictions obtained using the standard cross-entropy baseline (leaf nodes), and the prediction using CRM (Eq. (1)) for a given class-relationship matrix. Given the probability of each class p(y|x), argminR(y|x) is the Bayes optimal prediction. It is guaranteed to achieve the lowest possible overall cost, i.e. lowest expected cost over all possible examples weighted by their probabilities (Duda & Hart, 1973, Ch. 2).\nDepending on the cost-matrix and p(y|x), the top-1 prediction of the CRM applied on cross-entropy might differ from the top-1 prediction of the cross-entropy baseline. However, because of the overconfident nature of recent deep neural networks, we observe that the top-1 probability of p(y|x) is greater than 0.5 for significant number of test samples. 
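Before the formal argument that follows, the post-hoc correction itself can indeed be written in a few lines. The sketch below assumes a precomputed LCA-height oracle for the hierarchy and operates directly on the softmax output of any trained classifier; variable and function names are illustrative.

```python
import numpy as np


def build_cost_matrix(num_classes, lca_height):
    """C[i, j] = height of the lowest common ancestor of classes i and j in the
    label hierarchy (0 on the diagonal). `lca_height` is assumed to be provided
    by whatever tree representation is used (e.g. the WordNet ontology)."""
    C = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        for j in range(num_classes):
            if i != j:
                C[i, j] = lca_height(i, j)
    return C


def crm_rerank(probs, C):
    """Post-hoc CRM correction of Eq. (1): given softmax probabilities `probs`
    (shape [batch, K]) from a trained network, return the classes ordered by
    increasing conditional risk R(y=k|x) = sum_j C[k, j] * p(y=j|x)."""
    risk = probs @ C.T                  # [batch, K]; C is symmetric, so C.T == C
    return np.argsort(risk, axis=1)     # column 0 is the CRM top-1 prediction
```

Because CRM only re-ranks the likelihoods of an already trained model, the original softmax probabilities remain available unchanged for calibration or any downstream use.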
Below we prove that in such situations where maxp(y|x) is higher than the sum of other probabilities, the post-hoc correction (CRM) does not change the top-1 prediction irrespective of the structure of the tree. Since the second highest probability is guaranteed to be less than 0.5 by definition, our correction can effectively re-rank the classes. Experimentally we find it to significantly reduce the hierarchical distance@k.\nTheorem 1. If max(p(y|x)) > 0.5, then argmini ∑K j=1 Ci,j · p(y = j|x) and argmax p(y|x) are identical irrespective of the tree structure and both lead to the same top-1 prediction.\nProof. Consider the tree illustrated in Figure 1b; two leaf nodes (class labels) i,j and the subtree (Tij) rooted at their Lowest Common Ancestor. Assuming the height of the LCA(i, j) = h and argmax p(y|x) = i, the risk R(y = j|x) = R(j) is given as:\nR(j) = h · p(i) + ∑\nk∈Tij\\{i} Cj,k · p(k) + ∑ ∀k 6∈Tij Cj,k · p(k)\nIgnoring the cost of other nodes inside Tij , we get R(j) ≥ h ·p(i)+ ∑ ∀k 6∈Tij Cj,k ·p(k). Similarly, for the risk of class i:\nR(i) ≤ h · (1− p(i)) + ∑ ∀k 6∈Tij Ci,k · p(k)\nOutside the subtree rooted at Ti,j , Ci,k = Cj,k∀k and therefore without loss of generality we can say that R(i) < R(j), if p(i) > 0.5." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our method on two large-scale hierarchy-aware benchmarks: (i) tieredImageNet-H for a broad range of classes and (ii) iNaturalist-H for fine-grained classification, both of which are complex enough to cover a large number of visual concepts. We closely follow the experimental pipeline from Bertinetto et al. (2020) including the train/validation/test splits, hyperparameters for training models, and evaluation metrics.\nExperimental Details: All models are trained using a ResNet-18 architecture (pre-trained on ImageNet) using an Adam optimizer for 200K updates using a mini-batch of 256 samples, a learning rate of 10−5, and standard data augmentation of flips and randomly resized crops. We train all the hierarchy-aware models – Hierarchical cross-entropy (HXE) (Bertinetto et al., 2020), Soft-labels (Bertinetto et al., 2020), YOLO-v2 (Redmon & Farhadi, 2017), DeViSE (Frome et al., 2013), and Barz & Denzler (2019) – along with a cross-entropy baseline. We pick the epoch corresponding to the lowest loss on the validation set along with two epochs preceding and succeeding it and report the average of the results obtained from these five checkpoints on the test set. Unlike Bertinetto et al. (2020) we do not preprocess the dataset to downsample the images to 224×224 as it noticeably reduces the accuracy. Instead, we use the RandomResizedCrop() augmentation to crop the images to a 224×224 resolution. This accounts for a small, but significant improvement in performance across models, thus leading to stronger baselines.\nMetrics: We primarily focus on two major metrics: (i) top-1 error, and (ii) average hierarchical distance@k, which is the mean LCA height between the ground truth and each of the k most likely classes. These metrics capture different views of the problem: top-1 error treats all classifier mistakes the same, whereas average hierarchical distance@1 captures a notion of mistake severity, i.e., better or worse mistakes. Average hierarchical distance@k captures the notion of ranking/ordering the predicted classes closer to the ground truth class. This metric, also used in Bertinetto et al. (2020), is a natural extension of the hierarchical distance@1 proposed by Russakovsky et al. 
(2015) in the original ImageNet evaluation. We also investigate the average mistake-severity metric suggested in Bertinetto et al. (2020) which computes the hierarchical distance between the top-1 prediction and the ground truth for all the misclassified samples. Note that LCA is a log-scaled distance: an increment of 1.0 signifies an error of an entire level of the tree. In the simple case of a full binary tree, an increase by one level implies that the number of possible leaf nodes doubles." }, { "heading": "4.1 HIERARCHICAL DISTANCE OF TOP-1 PREDICTIONS", "text": "Hierarchy-aware classification methods typically seek to make better mistakes (less costly in terms of hierarchical distance). It is essential that the evaluation metric correctly measures this goal, i.e. a higher value of the evaluation metric should reflect that the model indeed makes better mistakes. Below we discuss a shortcoming of the average mistake-severity metric proposed in Bertinetto et al. (2020) which considers the mistake severity averaged only over the incorrectly classified samples,\nand show that it can be misleading in the sense that a model can show improved performance over this metric, while just making additional low-severity mistakes.\nIn Figure 2a, we evaluate different approaches only on the set of incorrectly classified samples (hence different test sets for different models as the mistakes will be different). It seems to indicate that recently proposed methods are able to achieve a good trade-off between top-1 error and mistake severity. We select models that show a marked trade-off in terms of the mistake-severity metric – Soft-labels with β = 4 and HXE with α = 0.6 – and analyze the frequency of mistakes at different levels of severity (illustrated in Figure 2b). Surprisingly, we observe that in these regimes, HXE and Soft-labels largely do not make better mistakes; they mostly make additional low-severity mistakes. This behaviour is better demonstrated in the histograms shown in Appendix A.1 (Figure 4). For example, in the case of Soft-labels on iNaturalist19 dataset, it is evident from Figure 4 that as β decreases, the number of less-severe mistakes increases, whereas, the high-severity mistakes remain more or less the same. Similar observations can be made for HXE. This behaviour is not captured in Figure 2a as the metric here involves division by the number of mistakes made by the model. More precisely, say the high severity mistakes made by two models are exactly the same (dh > 0) over the same number of mistakes (m > 0). Now, if the second model makes additional n > 0 mistakes with overall distance severity of dl > 0, then it is straightforward to observe that dhm ≥ dh+dl m+n if dh m ≥ dl n . This implies that the metric would prefer a model making additional low-severity mistakes as long as the impact of the severity due to these additional mistakes is less than the overall impact by the high-severity ones.\nWe avoided this shortcoming by using the hierarchical distance@1 computed over all the samples (Russakovsky et al., 2015). As shown in Figure 2c, the best-performing ones in Figure 2a now show the highest hierarchical distance@1 as we account for the additional number of low-severity mistakes made by them. Note, distance@1 was also used in Bertinetto et al. 
(2020), however, they also proposed the above mentioned mistake-severity metric and performed analyses over it, which, as discussed and showed empirically, can easily mislead us towards choosing a classifier that just makes additional low severity mistakes while not improving the overall mistake severity at all.\nIn this more reliable evaluation set-up, we observe CRM (ours) marginally reduces mistake severity compared to cross-entropy. We would like to emphasize that cross-entropy provides near best results. Overall, our experiments suggest that existing methods reduce the average mistake-severity metric by largely making additional low-severity mistakes. This also explains why such models provide lower test accuracy (top-1). Resolving this issue, we see that no prior art significantly out-\nperforms the cross-entropy baseline either in making better mistakes (distance@1) or in the top-1 accuracy." }, { "heading": "4.2 HIERARCHICAL DISTANCE OF TOP-K PREDICTIONS", "text": "We now compare the ordering of classes provided by each of these classifiers. Ranking predictions give us significant insight into how reliably the predictions align with the hierarchy. We measure the quality of ranking using the average hierarchical distance@k, for various values of k and present them in Figure 3a (left). We find that CRM significantly outperforms all the competing methods, giving the best hierarchically aligned models on the hierarchical distance@k. Note, for k > 1, recent approaches also provide improvement in distance@k compared to the cross-entropy model.\nA better ranking of classes often comes with a significant trade-off with top-1 accuracy. We plot the hierarchical distance@k with top-1 accuracy for k = 5 and k = 20 in Figure 3a (middle and right) to better understand this trade-off. Interestingly, we observe that CRM improves ranking with almost no loss in top-1 performance and outperforms other methods by a substantial margin.\nAn interesting extension is to analyze how dependent these approaches are on a given hierarchy, and how modifying the hierarchy might impact their behaviour. To test this, we randomly shuffle the classes at the leaf nodes of a given tree structure and compare ranking performance in Figure 3b. We observe that even though CRM does not explicitly use the hierarchy while training, it provides drastic reduction in the hierarchical distance@k compared to all the previous methods. High accuracy of CRM in this case is because of the fact that it is post-hoc and for highly confident models such as deep networks, its top-1 accuracy remains largely unchanged (refer Theorem 1). On the other hand, models depending on the tree-structure during training (directly or indirectly) will try to fit to the structure, which can be harmful in situations where the tree structure is not very reliable. For example, if the tree structure implies that ‘cat’ is closer to ‘person’ than a ‘dog’, then the models incorporating such information while learning the feature space might not be able to learn a robust classifier and might potentially end-up making more mistakes, as also validated in our experiments." }, { "heading": "4.3 IMPACT OF LABEL HIERARCHY ON THE RELIABILITY", "text": "In order for models to be useful in safety-critical scenarios, they should be calibrated so that they are not wrong with high confidence. 
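(For reference, before turning to calibration, the ranking quality reported in the preceding subsection — average hierarchical distance@k — can be computed directly from the cost matrix C and any class ordering, as in the sketch below; the array names are illustrative.)

```python
import numpy as np


def hierarchical_distance_at_k(ranked_classes, targets, C, k):
    """Average LCA height between the ground truth and each of the k highest
    ranked classes. `ranked_classes` ([num_samples, K]) can come either from an
    argsort of the softmax scores or from the CRM re-ranking sketched earlier;
    `targets` holds the ground-truth class indices."""
    topk = ranked_classes[:, :k]                  # [num_samples, k]
    costs = C[topk, targets[:, None]]             # LCA height to the ground truth
    return costs.mean()
```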
To this end, we analyse the reliability of the output probabilities of Softlabels, HXE, label smoothing, and CRM (which is the vanilla cross-entropy likelihood) using widely accepted metrics such as ECE (Expected Calibration Error) and MCE (Maximum Calibration Error) in Table 1. Softlabels and HXE, for example, show clear trends of increasing degradation in calibration on better class ranking (as measured by distance@k), i.e., the more they attempt to adhere to the hierarchy, the less reliable their probability estimates become. We additionally experiment with improving calibration in all the above models using temperature scaling. We observe that it reduces miscalibration as measured by the ECE and MCE scores, but most models still remain far worse than the cross-entropy baseline. Changes in ECE/MCE were unnoticeable when using the probability estimates corresponding to CRM predictions (taking p(y|x) corresponding to argminR(y|x)) instead of maximum cross-entropy prediction. These experiments clearly suggest that while the focus should turn into developing models that make better mistakes, we should also make sure that such models are reliable by understanding how incorporating the label hierarchy during training might impact the likelihood estimates." }, { "heading": "5 CONCLUSION", "text": "We proposed using Conditional Risk Minimization (CRM) as a tool to amend likelihood in a posthoc fashion to obtain hierarchy-aware classifiers, an approach that is different from the three dominant paradigms: hierarchy-aware losses, hierarchy-aware architectures, and label embedding methods. We illustrated an issue with the mistake-severity metric that, otherwise, could give a wrong impression of improvement while the model might just be making additional mistakes to fool the metric.\nIn terms of better ranking predictions, our proposed post-hoc correction consistently outperforms state-of-the-art methods in deep hierarchy-aware image classification by large margins in terms of decrease in hierarchical distance@k, with little to no loss in top-1 accuracy. We find the direction of post-hoc corrections promising as it can simultaneously deliver calibration, accuracy, and better class ranking efficiently with surprisingly little trade-offs in either.\nOverall, the literature on hierarchy-aware image classification has shown the WordNet hierarchy’s effectiveness in improving performance. However, previous works assumed that all the edges in the tree are equally important. A future avenue for exploration would be to learn the weights of these edges in order to compute a more effective measure of mistake severity." }, { "heading": "ACKNOWLEDGEMENTS", "text": "SGK and AP would like to thank Saujas Vaduguru, Aurobindo Munagala and Arjun P for detailed feedback on the manuscript. PKD would like to thank Sina Samangooei for constructive comments. This work was supported by the following grants/organisations: Early Career Research Award, ECR/2017/001242, from Science and Engineering Research Board (SERB), Department of Science & Technology, Government of India; EPSRC/MURI grant EP/N019474/1; Facebook (DeepFakes grant); and Five AI Ltd., UK." }, { "heading": "A APPENDIX", "text": "A.1 VARIATION OF MISTAKE SEVERITY WITH HIERARCHY ALIGNMENT\nWe first analyze the change of the distribution of mistakes across Softlabels (left column) and HXE (right column) models in Figure 4 with results in tieredImageNet-H (top row) and iNaturalist-H (bottom row). 
As we try to align the model better with the hierarchy by decreasing β and increasing α, we observe the same trends with better alignment to hierarchy – their mistakes are equally bad compared to cross-entropy near the right end (high severity), and they increasingly make more mistakes in the left end (lower severity), lowering the average over mistakes but not always making better mistakes. This gives evidence that the models have the tendency to decrease their mistake severity (shown in the legend) by largely making lots of additional mistakes and not making better mistakes in the large β and small α regimes.\nA.2 VARIATION OF CLASS RANKING WITH HYPERPARAMETERS\nWe similarly analyze the variation of ranking classes measured by average hierarchical distance@k for Softlabels and HXE with different hyperparameters. We present our results in Figure 5, by varying Softlabel with different β values on the left and HXE with different α values on the right. We observe that the previously chosen values β = 4 and α = 0.6 perform the best in ranking in all cases except β = 4 in iNaturalist where β = 10 performs the best, which we updated in Figure 3. We observe there that CRM still outperforms these methods by large margins.\nA.3 VARIATION OF CALIBRATION ACROSS HIERARCHY\nWe can additionally calculate ECE across levels in the hierarchy by sequentially shrinking leaf nodes from the maximum depth (corresponding to flat classification). The ECE at depth i (from root\nassigned depth 0) is defined as obtaining probabilities for nodes with most depth which are at most at level i. Their probabilities are obtained by summing up probabilities of their children nodes (level > i). We present the results in Figure 6, where we observe that overall the calibration increases as you go up the hierarchy. The cross-entropy baseline shows a consistent decreasing trend, but the other models have aberrations where the calibration error increases going up the hierarchy, especially the models with high calibration errors.\nA.4 INCREASING BETA FOR SOFT-LABELS ON INATURALIST19\nWe also experiment with increasing β for Soft-labels on the iNaturalist19 dataset by trying the values of 50, 75, 100 and 200 respectively. These results are shown in Table 2. We can see that increasing Beta does not significantly improve top-1 accuracy while worsening the hierarchical distance metrics. We additionally observe this in Figure 2, where as we increase β the graph shoots up with little leftward shift." } ]
2021
null
SP:7cc59c8f556d03597f7ab391ef14d1a96191a4db
[ "The design of a useful generalization of neural networks on quantum computers has been challenging because the gradient signal will decay exponentially with respect to the depth of the quantum circuit (saturating to exponentially small in system size after the depth is linear in system size). This work provides a detailed analysis of quantum neural networks with a tree structure that uses only a depth logarithmic in the system size. The authors show that the gradient signal will only be polynomially small with respect to the system size. The authors also provide empirical verification of the theoretical analysis showing a much larger gradient norm. However, the improvement in prediction accuracy (under early stopping) when using tree-structure quantum neural networks is not very significant. This is likely because the considered system size (8 qubits) is too small to fully demonstrate the exponential decay and the inability to train random quantum neural networks." ]
Quantum Neural Networks (QNNs) have recently been proposed as generalizations of classical neural networks to achieve a quantum speed-up. Despite the potential to outperform classical models, serious bottlenecks exist for training QNNs; namely, QNNs with random structures have poor trainability because their gradients vanish at a rate exponential in the number of input qubits. The vanishing gradient can seriously hinder applications of large-size QNNs. In this work, we provide a first viable solution with theoretical guarantees. Specifically, we prove that QNNs with tree tensor and step controlled architectures have gradients that vanish at most polynomially with the qubit number. Moreover, our result holds irrespective of which encoding method is employed. We numerically demonstrate QNNs with tree tensor and step controlled structures applied to binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
[ { "affiliations": [], "name": "TOWARD TRAINABILITY" } ]
[ { "authors": [ "Frank Arute", "Kunal Arya", "Ryan Babbush", "Dave Bacon", "Joseph C Bardin", "Rami Barends", "Rupak Biswas", "Sergio Boixo", "Fernando GSL Brandao", "David A Buell" ], "title": "Quantum supremacy using a programmable superconducting processor", "venue": null, "year": 2019 }, { "authors": [ "Kerstin Beer", "Dmytro Bondarenko", "Terry Farrelly", "Tobias J Osborne", "Robert Salzmann", "Daniel Scheiermann", "Ramona Wolf" ], "title": "Training deep quantum neural networks", "venue": "Nature communications,", "year": 2020 }, { "authors": [ "Marcello Benedetti", "Erika Lloyd", "Stefan Sack", "Mattia Fiorentini" ], "title": "Parameterized quantum circuits as machine learning models", "venue": "Quantum Science and Technology,", "year": 2019 }, { "authors": [ "Ville Bergholm", "Josh Izaac", "Maria Schuld", "Christian Gogolin", "M. Sohaib Alam", "Shahnawaz Ahmed", "Juan Miguel Arrazola", "Carsten Blank", "Alain Delgado", "Soran Jahangiri", "Keri McKiernan", "Johannes Jakob Meyer", "Zeyue Niu", "Antal Száva", "Nathan Killoran" ], "title": "Pennylane: Automatic differentiation of hybrid quantum-classical computations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Sergey Bravyi", "David Gosset", "Robert König" ], "title": "Quantum advantage with shallow circuits", "venue": null, "year": 2018 }, { "authors": [ "M Cerezo", "Akira Sone", "Tyler Volkoff", "Lukasz Cincio", "Patrick J Coles" ], "title": "Cost-function-dependent barren plateaus in shallow quantum neural networks", "venue": "arXiv preprint arXiv:2001.00550,", "year": 2020 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Gavin E Crooks" ], "title": "Gradients of parameterized quantum gates using the parameter-shift rule and gate decomposition", "venue": "arXiv preprint arXiv:1905.13311,", "year": 2019 }, { "authors": [ "Yuxuan Du", "Min-Hsiu Hsieh", "Tongliang Liu", "Dacheng Tao" ], "title": "Expressive power of parametrized quantum circuits", "venue": "Physical Review Research,", "year": 2020 }, { "authors": [ "Yuxuan Du", "Min-Hsiu Hsieh", "Tongliang Liu", "Shan You", "Dacheng Tao" ], "title": "On the learnability of quantum neural networks", "venue": "arXiv preprint arXiv:2007.12369,", "year": 2020 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Edward Farhi", "Hartmut Neven" ], "title": "Classification with quantum neural networks on near term processors", "venue": "arXiv preprint arXiv:1802.06002,", "year": 2018 }, { "authors": [ "Edward Grant", "Marcello Benedetti", "Shuxiang Cao", "Andrew Hallam", "Joshua Lockhart", "Vid Stojevic", "Andrew G. 
Green", "Simone Severini" ], "title": "Hierarchical quantum classifiers", "venue": "npj Quantum Information,", "year": 2018 }, { "authors": [ "Edward Grant", "Leonard Wossnig", "Mateusz Ostaszewski", "Marcello Benedetti" ], "title": "An initialization strategy for addressing barren plateaus in parametrized quantum circuits", "venue": null, "year": 2019 }, { "authors": [ "Aram W Harrow", "Richard A Low" ], "title": "Random quantum circuits are approximate 2-designs", "venue": "Communications in Mathematical Physics,", "year": 2009 }, { "authors": [ "Aram W Harrow", "Avinatan Hassidim", "Seth Lloyd" ], "title": "Quantum algorithm for linear systems of equations", "venue": "Physical review letters,", "year": 2009 }, { "authors": [ "Vojtěch Havlı́ček", "Antonio D Córcoles", "Kristan Temme" ], "title": "Supervised learning with quantumenhanced feature", "venue": "spaces. Nature,", "year": 2019 }, { "authors": [ "Kohei Hayashi", "Taiki Yamaguchi", "Yohei Sugawara", "Shin-ichi Maeda" ], "title": "Exploring unexplored tensor network decompositions for convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Robert Hecht-Nielsen" ], "title": "Theory of the backpropagation neural network. In Neural networks for perception, pp. 65–93", "venue": null, "year": 1992 }, { "authors": [ "William Huggins", "Piyush Patil", "Bradley Mitchell", "K Birgitta Whaley", "E Miles Stoudenmire" ], "title": "Towards quantum machine learning with tensor networks", "venue": "Quantum Science and technology,", "year": 2019 }, { "authors": [ "Abhinav Kandala", "Antonio Mezzacapo", "Kristan Temme", "Maika Takita", "Markus Brink", "Jerry M Chow", "Jay M Gambetta" ], "title": "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets", "venue": null, "year": 2017 }, { "authors": [ "Iordanis Kerenidis", "Jonas Landman", "Anupam Prakash" ], "title": "Quantum algorithms for deep convolutional neural networks", "venue": "arXiv preprint arXiv:1911.01117,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Ding Liu", "Shi-Ju Ran", "Peter Wittek", "Cheng Peng", "Raul Blázquez Garcı́a", "Gang Su", "Maciej Lewenstein" ], "title": "Machine learning by unitary tensor network of hierarchical tree structure", "venue": "New Journal of Physics,", "year": 2019 }, { "authors": [ "Jarrod R McClean", "Sergio Boixo", "Vadim N Smelyanskiy", "Ryan Babbush", "Hartmut Neven" ], "title": "Barren plateaus in quantum neural network training landscapes", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Daniel K Park", "Francesco Petruccione", "June-Koo Kevin Rhee" ], "title": "Circuit-based quantum random access memory for classical data", "venue": "Scientific reports,", "year": 2019 }, { "authors": [ "Zbigniew Puchała", "Jarosław Adam Miszczak" ], "title": "Symbolic integration with respect to the haar measure on the unitary group", "venue": "arXiv preprint arXiv:1109.4244,", "year": 2011 }, { "authors": [ "Patrick Rebentrost", "Masoud Mohseni", "Seth Lloyd" ], "title": "Quantum support vector machine for big data classification", "venue": "Physical review letters,", "year": 2014 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Maria Schuld", "Alex Bocharov", "Krysta M Svore", "Nathan Wiebe" ], "title": "Circuit-centric quantum classifiers", "venue": "Physical Review A,", "year": 2020 }, { "authors": [ "Kunal Sharma", "Marco Cerezo", "Lukasz Cincio", "Patrick J Coles" ], "title": "Trainability of dissipative perceptron-based quantum neural networks", "venue": "arXiv preprint arXiv:2005.12458,", "year": 2020 }, { "authors": [ "Andrea Skolik", "Jarrod R McClean", "Masoud Mohseni", "Patrick van der Smagt", "Martin Leib" ], "title": "Layerwise learning for quantum neural networks", "venue": "arXiv preprint arXiv:2006.14904,", "year": 2020 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever", "Oriol Vinyals" ], "title": "Recurrent neural network regularization", "venue": "arXiv preprint arXiv:1409.2329,", "year": 2014 }, { "authors": [ "Leo Zhou", "Sheng-Tao Wang", "Soonwon Choi", "Hannes Pichler", "Mikhail D Lukin" ], "title": "Quantum approximate optimization algorithm: performance, mechanism, and implementation on near-term devices", "venue": "arXiv preprint arXiv:1812.01041,", "year": 2018 }, { "authors": [ "Cerezo" ], "title": "The form in the right side of (9) can be viewed as the average or the expectation of the function Pt,t(G). We remark that only the parameterized gates RY = e−iθσ2 could not form a universal gate set even in the single-qubit space U(2), thus quantum circuits employing parameterized RY gates could not form the 2-design. This is only a simple introduction about the unitary 2-design, and we refer readers to Puchała", "venue": null, "year": 2011 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural Networks (Hecht-Nielsen, 1992) using gradient-based optimizations have dramatically advanced researches in discriminative models, generative models, and reinforcement learning. To efficiently utilize the parameters and practically improve the trainability, neural networks with specific architectures (LeCun et al., 2015) are introduced for different tasks, including convolutional neural networks (Krizhevsky et al., 2012) for image tasks, recurrent neural networks (Zaremba et al., 2014) for the time series analysis, and graph neural networks (Scarselli et al., 2008) for tasks related to graph-structured data. Recently, the neural architecture search (Elsken et al., 2019) is proposed to improve the performance of the networks by optimizing the neural structures.\nDespite the success in many fields, the development of the neural network algorithms could be limited by the large computation resources required for the model training. In recent years, quantum computing has emerged as one solution to this problem, and has evolved into a new interdisciplinary field known as the quantum machine learning (QML) (Biamonte et al., 2017; Havlı́ček et al., 2019). Specifically, variational quantum circuits (Benedetti et al., 2019) have been explored as efficient protocols for quantum chemistry (Kandala et al., 2017) and combinatorial optimizations (Zhou et al., 2018). Compared to the classical circuit models, quantum circuits have shown greater expressive power (Du et al., 2020a), and demonstrated quantum advantage for the low-depth case (Bravyi et al., 2018). Due to the robustness against noises, variational quantum circuits have attracted significant interest for the hope to achieve the quantum supremacy on near-term quantum computers (Arute et al., 2019).\nQuantum Neural Networks (QNNs) (Farhi & Neven, 2018; Schuld et al., 2020; Beer et al., 2020) are the special kind of quantum-classical hybrid algorithms that run on trainable quantum circuits. Recently, small-scale QNNs have been implemented on real quantum computers (Havlı́ček et al., 2019) for supervised learning tasks. The training of QNNs aims to minimize the objective function f with respect to parameters θ. Inspired by the classical optimizations of neural networks, a natural strategy to train QNNs is to exploit the gradient of the loss function (Crooks, 2019). However, the recent work (McClean et al., 2018) shows that n-qubit quantum circuits with random structures and large depth L = O(poly(n)) tend to be approximately unitary 2-design (Harrow & Low, 2009), and the partial derivative vanishes to zero exponentially with respect to n. The vanishing gradient problem is usually referred to as the Barren Plateaus (McClean et al., 2018), and could affect the\ntrainability of QNNs in two folds. Firstly, simply using the gradient-based method like Stochastic Gradient Descent (SGD) to train the QNN takes a large number of iterations. Secondly, the estimation of the derivatives needs an extremely large number of samples from the quantum output to guarantee a relatively accurate update direction (Chen et al., 2018). To avoid the Barren Plateaus phenomenon, we explore QNNs with special structures to gain fruitful results.\nIn this work, we introduce QNNs with special architectures, including the tree tensor (TT) structure (Huggins et al., 2019) referred to as TT-QNNs and the setp controlled structure referred to as SCQNNs. 
We prove that for TT-QNNs and SC-QNNs, the expectation of the gradient norm of the objective function is bounded.\nTheorem 1.1. (Informal) Consider the n-qubit TT-QNN and the n-qubit SC-QNN defined in Figure 1-2 and corresponding objective functions fTT and fSC defined in (3-4), then we have:\n1 + log n\n2n · α(ρin) ≤ Eθ‖∇θfTT‖2 ≤ 2n− 1,\n1 + nc 21+nc · α(ρin) ≤ Eθ‖∇θfSC‖2 ≤ 2n− 1,\nwhere nc is the number of CNOT operations that directly link to the first qubit channel in the SCQNN, the expectation is taken for all parameters in θ with uniform distributions in [0, 2π], and α(ρin) ≥ 0 is a constant that only depends on the input state ρin ∈ C2\nn×2n . Moreover, by preparing ρin using the L-layer encoding circuit in Figure 4, the expectation of α(ρin) could be further lower bounded as Eα(ρin) ≥ 2−2L.\nCompared to random QNNs with 2−O(poly(n)) derivatives, the gradient norm of TT-QNNs ad SCQNNs is greater than Ω(1/n) or Ω(2−nc) that could lead to better trainability. Our contributions are summarized as follows:\n• We prove Ω̃(1/n) and Ω̃(2−nc) lower bounds on the expectation of the gradient norm of TT-QNNs and SC-QNNs, respectively, that guarantees the trainability on related optimization problems. Our theorem does not require the unitary 2-design assumption in existing works and is more realistic to near-term quantum computers.\n• We prove that by employing the encoding circuit in Figure 4 to prepare ρin, the expectation of term α(ρin) is lower bounded by a constant 2−2L. Thus, we further lower bounded the expectation of the gradient norm to the term independent from the input state.\n• We simulate the performance of TT-QNNs, SC-QNNs, and random structure QNNs on the binary classification task. All results verify proposed theorems. Both TT-QNNs and SC-QNNs show better trainability and accuracy than random QNNs.\nOur proof strategy could be adopted for analyzing QNNs with other architectures as future works. With the proven assurance on the trainability of TT-QNNs and SC-QNNs, we eliminate one bottleneck in front of the application of large-size Quantum Neural Networks.\nThe rest parts of this paper are organized as follows. We address the preliminary including the definitions, the basic quantum computing knowledge and related works in Section 2. The QNNs with special structures and the corresponding results are presented in Section 3. We implement the binary classification using QNNs with the results shown in Section 4. We make conclusions in Section 5." }, { "heading": "2 PRELIMINARY", "text": "" }, { "heading": "2.1 NOTATIONS AND THE BASIC QUANTUM COMPUTING", "text": "We use [N ] to denote the set {1, 2, · · · , N}. The form ‖ · ‖ denotes the ‖ · ‖2 norm for vectors. We denote aj as the j-th component of the vector a. The tensor product operation is denoted as “⊗”. The conjugate transpose of a matrixA is denoted asA†. The trace of a matrixA is denoted as Tr[A]. We denote ∇θf as the gradient of the function f with respect to the vector θ. We employ notations O and Õ to describe the standard complexity and the complexity ignoring minor terms, respectively.\nNow we introduce the quantum computing. The pure state of a qubit could be written as |φ〉 = a|0〉+b|1〉, where a, b ∈ C satisfies |a|2+|b|2 = 1, and {|0〉 = (1, 0)T , |1〉 = (0, 1)T }, respectively. The n-qubit space is formed by the tensor product of n single-qubit spaces. For the vector x ∈ R2n , the amplitude encoded state |x〉 is defined as 1‖x‖ ∑2n j=1 xj |j〉. 
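As a concrete illustration of the amplitude encoding just defined, the short NumPy sketch below maps a classical vector to the corresponding normalized state vector; zero-padding to the next power of two is an assumed convention for inputs whose length is not already 2^n.

```python
import numpy as np

def amplitude_encode(x):
    """Return |x> = (1/||x||) * sum_j x_j |j> as a length-2^n state vector."""
    x = np.asarray(x, dtype=float)
    dim = 1 << int(np.ceil(np.log2(len(x))))   # next power of two (assumed padding convention)
    padded = np.zeros(dim)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

state = amplitude_encode([3.0, 4.0])           # a valid single-qubit state [0.6, 0.8]
```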
The dense matrix is defined as ρ = |x〉〈x| for the pure state, in which 〈x| = (|x〉)†. A single-qubit operation to the state behaves like the matrix-vector multiplication and can be referred to as the gate in the quantum circuit language. Specifically, single-qubit operations are often used including RX(θ) = e−iθX , RY (θ) = e−iθY , and RZ(θ) = e−iθZ :\nX = ( 0 1 1 0 ) , Y = ( 0 −i i 0 ) , Z = ( 1 0 0 −1 ) .\nPauli matrices {I,X, Y, Z} will be referred to as {σ0, σ1, σ2, σ3} for the convenience. Moreover, two-qubit operations, the CNOT gate and the CZ gate, are employed for generating quantum entanglement:\nCNOT = • = |0〉〈0| ⊗ σ0 + |1〉〈1| ⊗ σ1, CZ = • • = |0〉〈0| ⊗ σ0 + |1〉〈1| ⊗ σ3.\nWe could obtain information from the quantum system by performing measurements, for example, measuring the state |φ〉 = a|0〉+b|1〉 generates 0 and 1 with probability p(0) = |a|2 and p(1) = |b|2, respectively. Such a measurement operation could be mathematically referred to as calculating the average of the observable O = σ3 under the state |φ〉:\n〈σ3〉|φ〉 ≡ 〈φ|σ3|φ〉 ≡ Tr[|φ〉〈φ| · σ3] = |a|2 − |b|2 = p(0)− p(1) = 2p(0)− 1. The average of a unitary observable under arbitrary states is bounded by [−1, 1]." }, { "heading": "2.2 RELATED WORKS", "text": "The barren plateaus phenomenon in QNNs is first noticed by McClean et al. (2018). They prove that for n-qubit random quantum circuits with depth L = O(poly(n)), the expectation of the derivative to the objective function is zero, and the variance of the derivative vanishes to zero with rate exponential in the number of qubits n. Later, Cerezo et al. (2020) prove that for L-depth quantum circuits consisting of 2-design gates, the gradient with local observables vanishes with the rate O(2−O(L)). The result implies that in the low-depth L = O(log n) case, the vanishing rate could be O( 1polyn ), which is better than previous exponential results. Recently, some techniques have been proposed to address the barren plateaus problem, including the special initialization strategy (Grant et al., 2019) and the layerwise training method (Skolik et al., 2020). We remark that these techniques rely on the assumption of low-depth quantum circuits. Specifically, Grant et al. (2019) initialize parameters such that the initial quantum circuit is equivalent to an identity matrix (L = 0). Skolik et al. (2020) train parameters in subsets in each layer, so that a low-depth circuit is optimized during the training of each subset of parameters.\nSince random quantum circuits tend to be approximately unitary 2-design1 as the circuit depth increases (Harrow & Low, 2009), and 2-design circuits lead to exponentially vanishing gradients (McClean et al., 2018), the natural idea is to consider circuits with special structures. On the other hand, tensor networks with hierarchical structures have been shown an inherent relationship with classical neural networks (Liu et al., 2019; Hayashi et al., 2019). Recently, quantum classifiers using hierarchical structure QNNs have been explored (Grant et al., 2018), including the tree tensor network and the multi-scale entanglement renormalization ansatz. Besides, QNNs with dissipative layers have shown the ability to avoid the barren plateaus (Beer et al., 2020). However, theoretical analysis of the trainability of QNNs with certain layer structures has been little explored (Sharma et al., 2020). 
Also, the 2-design assumption in the existing theoretical analysis (McClean et al., 2018; Cerezo et al., 2020; Sharma et al., 2020) is hard to implement exactly on near-term quantum devices." }, { "heading": "3 QUANTUM NEURAL NETWORKS", "text": "In this section, we discuss the quantum neural networks in detail. Specifically, the optimizing of QNNs is presented in Section 3.1. We analyze QNNs with special structures in Section 3.2. We\n1We refer the readers to Appendix B for a short discussion about the unitary 2-design.\nintroduce an approximate quantum input model in Section 3.3 which helps for deriving further theoretical bounds." }, { "heading": "3.1 THE OPTIMIZING OF QUANTUM NEURAL NETWORKS", "text": "In this subsection, we introduce the gradient-based strategy for optimizing QNNs. Like the weight matrix in classical neural networks, the QNN involves a parameterized quantum circuit that mathematically equals to a parameterized unitary matrix V (θ). The training of QNNs aims to optimize the function f defined as:\nf(θ; ρin) = 1\n2 +\n1 2 Tr [ O · V (θ) · ρin · V (θ)† ] = 1 2 + 1 2 〈O〉V (θ),ρin , (1)\nwhere O denotes the quantum observable and ρin denotes the density matrix of the input quantum state. Generally, we could deploy the parameters θ in a tunable quantum circuit arbitrarily. A practical tactic is to encode parameters {θj} as the phases of the single-qubit gates {e−iθjσk , k ∈ {1, 2, 3}} while employing two-qubit gates {CNOT, CZ} among them to generate quantum entanglement. This strategy has been frequently used in existing quantum circuit algorithms (Schuld et al., 2020; Benedetti et al., 2019; Du et al., 2020b) since the model suits the noisy near-term quantum computers. Under the single-qubit phase encoding case, the partial derivative of the function f could be calculated using the parameter shifting rule (Crooks, 2019),\n∂f ∂θj = 1 2 〈O〉V (θ+),ρin − 1 2 〈O〉V (θ−),ρin = f(θ+; ρin)− f(θ−; ρin), (2)\nwhere θ+ and θ− are different from θ only at the j-th parameter: θj → θj ± π4 . Thus, the gradient of f could be obtained by estimating quantum observables, which allows employing quantum computers for fast optimizations using stochastic gradient descents." }, { "heading": "3.2 QUANTUM NEURAL NETWORKS WITH SPECIAL STRUCTURES", "text": "In this subsection, we introduce quantum neural networks with tree tensor (TT) (Grant et al., 2018) and step controlled (SC) architectures. We prove that the expectation of the square of gradient `2- norm for the TT-QNN and the SC-QNN are lower bounded by Ω̃(1/n) and Ω̃(2−nc), respectively, where nc is a parameter in the SC-QNN that is independent from the qubit number n. Moreover, the corresponding theoretical analysis does not rely on 2-design assumptions for quantum circuits.\nNow we discuss proposed quantum neural networks in detail. We consider the n-qubit QNN constructed by the single-qubit gate W (k)j = e\n−iθ(k)j σ2 and the CNOT gate σ1 ⊗ |1〉〈1| + σ0 ⊗ |0〉〈0|. We define the k-th parameter in the j-th layer as θ(k)j . We only employ RY rotations for singlequbit gates, due to the fact that the real world data lie in the real space, while applying RX and RZ rotations would introduce imaginary term to the quantum state.\nWe demonstrate the TT-QNN in Figure 1 for the n = 4 case, which employs CNOT gates in the binary tree form to achieve the quantum entanglement. 
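To make the construction concrete, the sketch below builds a 4-qubit TT-QNN in PennyLane, the package used for the simulations reported later. The control/target assignment of the tree CNOTs and the flat layout of the 2n − 1 = 7 parameters are assumptions made for illustration, since Figure 1 is not reproduced here.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, diff_method="parameter-shift")
def tt_qnn(theta, state):
    # Load an already amplitude-encoded (normalized) input state rho_in = |x><x|.
    qml.QubitStateVector(state, wires=range(n_qubits))
    # Layer 1: RY on every qubit, then CNOTs pairing neighbouring qubits (tree leaves).
    for j in range(n_qubits):
        qml.RY(theta[j], wires=j)
    qml.CNOT(wires=[1, 0])
    qml.CNOT(wires=[3, 2])
    # Layer 2: RY on the root of each pair, then a CNOT merging the two subtrees.
    qml.RY(theta[4], wires=0)
    qml.RY(theta[5], wires=2)
    qml.CNOT(wires=[2, 0])
    # Layer 3: final RY on the first qubit, which carries the observable sigma_3.
    qml.RY(theta[6], wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.random.uniform(0, 2 * np.pi, size=7, requires_grad=True)   # 2n - 1 parameters
state = np.ones(2 ** n_qubits, requires_grad=False) / 2.0 ** (n_qubits / 2)
f_tt = 0.5 + 0.5 * tt_qnn(theta, state)            # the objective f_TT of Eq. (3)
grad = qml.grad(tt_qnn, argnum=0)(theta, state)     # gradient via the parameter-shift rule, cf. Eq. (2)
```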
The circuit of the SC-QNN could be divided into two parts: in the first part, CNOT operations are performed between adjacent qubit channels; in the second part, CNOT operations are performed between different qubit channels and the first qubit channel. An illustration of the SC-QNN is shown in Figure 2 for the n = 4 and nc = 2 case, where nc denotes the number of CNOT operations that directly link to the first qubit channel. The number of parameters in both the TT-QNN and the SC-QNN are 2n− 1. We consider correspond objective functions that are defined as\nfTT(θ) = 1\n2 +\n1 2 Tr[σ3 ⊗ I⊗(n−1)VTT(θ)ρinVTT(θ)†], (3)\nfSC(θ) = 1\n2 +\n1 2 Tr[σ3 ⊗ I⊗(n−1)VSC(θ)ρinVSC(θ)†], (4)\nwhere VTT(θ) and VSC(θ) denotes the parameterized quantum circuit operations for the TT-QNN and the SC-QNN, respectively. We employ the observable σ3 ⊗ I⊗(n−1) in Eq. (3-4) such that objective functions could be easily estimated by measuring the first qubit in corresponding quantum circuits.\nMain results of this section are stated in Theorem 3.1, in which we prove Ω̃(1/n) and Ω̃(2−nc) lower bounds on the expectation of the square of the gradient norm for the TT-QNN and the SCQNN, respectively. By setting nc = O(log n), we could obtain the linear inverse bound for the SC-QNN as well. We provide the proof of Theorem 3.1 in Appendix D and Appendix G.\nTheorem 3.1. Consider the n-qubit TT-QNN and the n-qubit SC-QNN defined in Figure 1-2 and corresponding objective functions fTT and fSC defined in Eq. (3-4), then we have:\n1 + log n\n2n · α(ρin) ≤ Eθ‖∇θfTT‖2 ≤ 2n− 1, (5)\n1 + nc 21+nc · α(ρin) ≤ Eθ‖∇θfSC‖2 ≤ 2n− 1, (6)\nwhere nc is the number of CNOT operations that directly link to the first qubit channel in the SC-QNN, the expectation is taken for all parameters in θ with uniform distributions in [0, 2π], ρin ∈ C2 n×2n denotes the input state, α(ρin) = Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2 , and σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin .\nFrom the geographic view, the value Eθ‖∇θf‖2 characterizes the global steepness of the function surface in the parameter space. Optimizing the objective function f using gradient-based methods could be hard if the norm of the gradient vanishes to zero. Thus, lower bounds in Eq. (5-6) provide a theoretical guarantee on optimizing corresponding functions, which then ensures the trainability of QNNs on related machine learning tasks.\nFrom the technical view, we provide a new theoretical framework during proving Eq. (5-6). Different from existing works (McClean et al., 2018; Grant et al., 2019; Cerezo et al., 2020) that define the expectation as the average of the finite unitary 2-design group, we consider the uniform distribution in which each parameter in θ varies continuously in [0, 2π]. Our assumption suits the quantum circuits that encode the parameters in the phase of single-qubit rotations. Moreover, the result in Eq. (6) gives the first proven guarantee on the trainability of QNN with linear depth. Our framework could be extensively employed for analyzing QNNs with other different structures as future works." }, { "heading": "3.3 PREPARE THE QUANTUM INPUT MODEL: A VARIATIONAL CIRCUIT APPROACH", "text": "State preparation is an essential part of most quantum algorithms, which encodes the classical information into quantum states. Specifically, the amplitude encoding |x〉 = ∑2n i=1 xi/‖x‖|i〉 allows storing the 2n-dimensional vector in n qubits. 
Due to the dense encoding nature and the similarity to the original vector, the amplitude encoding is preferred as the state preparation by many QML algorithms (Harrow et al., 2009; Rebentrost et al., 2014; Kerenidis et al., 2019). Despite the wide application in quantum algorithms, efficient amplitude encoding remains little explored. Existing work (Park et al., 2019) could prepare the amplitude encoding state in time O(2n) using a quantum circuit with O(2n) depth, which is prohibitive for large-size data on near-term quantum computers. In fact, arbitrary quantum amplitude encoding with polynomial gate complexity remains an open problem.\nAlgorithm 1 Quantum Input Model\nRequire: The input vector xin ∈ R2 n\n, the number of alternating layers L, the iteration time T , and the learning rate {η(t)}T−1t=0 .\nEnsure: A parameter vector β∗ which tunes the approximate encoding circuit U(β∗). 1: Initialize {β(k)j } n,2L+1 j,k=1 randomly in [0, 2π]. Denote the parameter vector as β\n(0). 2: for t ∈ {0, 1, · · · , T − 1} do 3: Run the circuit in Figure 3 classically to calculate the gradient ∇βfinput|β=β(t) using the parameter shifting rule (2), where the function finput is defined in (7). 4: Update the parameter β(t+1) = β(t) − η(t) · ∇βfinput|β=β(t) . 5: end for 6: Output the trained parameter β∗.\nIn this subsection, we introduce a quantum input model for approximately encoding the arbitrary vector xin ∈ R2 n\nin the amplitude of the quantum state |xin〉. The main idea is to classically train an alternating layered circuit as summarized in Algorithm 1 and Figures 3-4. Now we explain the detail of the input model. Firstly, we randomly initialize the parameter β(0) in the circuit 3. Then, we train the parameter to minimize the objective function defined in (7) through the gradient descent,\nfinput(β) = 1\nn n∑ i=1 〈Oi〉W (β)|xin〉 = 1 n n∑ i=1 1 ‖xin‖2 Tr[Oi ·W (β) · xinxTin ·W (β)†], (7)\nwhere Oi = σ ⊗(i−1) 0 ⊗ σ3 ⊗ σ ⊗(n−i) 0 ,∀i ∈ [n], and W (β) denotes the tunable alternating layered circuit in Figure 3. Note that although the framework is given in the quantum circuit language, we actually calculate and update the gradient on classical computers by considering each quantum gate operation as the matrix multiplication. The output of Algorithm 1 is the trained parameter vector β∗ which tunes the unitary W (β∗). The corresponding encoding circuit could then be implemented as U(β∗) = W (β∗)† · X⊗n, which is low-depth and appropriate for near-term quantum computers. Structures of circuits W and U are illustrated in Figure 3 and 4, respectively. Suppose we could minimize the objective function (7) to −1. Then 〈Oi〉 = −1,∀i ∈ [n], which means the final output state in Figure 3 equals to W (β∗)|xin〉 = |1〉⊗n.2 Thus the state |xin〉 could be prepared exactly by applying the circuit U(β∗) = W (β∗)†X⊗n on the state |0〉⊗n. However we could not always optimize the loss in Eq. (7) to −1, which means the framework could only prepare the amplitude encoding state approximately.\n2We ignore the global phase on the quantum state. Based on the assumptions of this paper, all quantum states that encode input data lie in the real space, which then limit the global phase as 1 or -1. 
For the former case, nothing needs to be done; for the letter case, the global phase could be introduced by adding a single qubit gate e−iπσ0 = −σ0 = −I on anyone among qubit channels.\nAlgorithm 2 QNNs for the Binary Classification Training\nRequire: Quantum input states {ρtraini }Si=1 for the dataset {(xtraini , yi)}Si=1, the quantum observable O, the parameterized quantum circuit V (θ), the iteration time T , the batch size s, and learning rates {ηθ(t)}T−1t=0 and {ηb(t)} T−1 t=0 . Ensure: The trained parameters θ∗ and b∗. 1: Initialize each parameter in θ(0) randomly in [0, 2π] and initialize b(0) = 0. 2: for t ∈ {0, 1, · · · , T − 1} do 3: Randomly sample an index subset It ⊂ [S] with size s. Calculate the gradient\n∇θ`It(θ, b)|θ,b=θ(t),b(t) and ∇b`It(θ, b)|θ,b=θ(t),b(t) using the chain rule and the parameter shifting rule (2), where the function `It(θ, b) is defined in (8).\n4: Update parameters θ(t+1)/b(t+1) = θ(t)/b(t) − ηθ/b(t) · ∇θ/b`It(θ, b)|θ,b=θ(t),b(t) . 5: end for 6: Output trained parameters θ∗ = θ(T ) and b∗ = b(T ).\nAn interesting result is that by employing the encoding circuit in Figure 4 for constructing the input state in Section 3.2 as ρin = U(β)(|0〉〈0|)⊗nU(β)†, we could bound the expectation of α(ρin) defined in Theorem 3.1 by a constant that only relies on the layer L (Theorem 3.2). Theorem 3.2. Suppose the state ρin is prepared by the L-layer encoding circuit in Figure 4, then we have, Eβα(ρin) ≥ 2−2L, where β denotes all variational parameters in the encoding circuit, and the expectation is taken for all parameters in β with uniform distributions in [0, 2π].\nWe provide the proof of Theorem 3.2 and more details about the input model in Appendix E. Theorem 3.2 can be employed with Theorem 3.1 to derive Theorem 1.1, in which the lower bound for the expectation of the gradient norm is independent from the input state." }, { "heading": "4 APPLICATION: QNNS FOR THE BINARY CLASSIFICATION", "text": "" }, { "heading": "4.1 QNNS: TRAINING AND PREDICTION", "text": "In this section, we show how to train QNNs for the binary classification in quantum computers. First of all, for the training and test data denoted as {(xtraini , yi)}Si=1 and {(xtestj , yj)} Q j=1, where yi ∈ {0, 1} denotes the label, we prepare corresponding quantum input states {ρtraini }Si=1 and {ρtestj } Q j=1 using the encoding circuit presented in Section 3.3. Then, we employ Algorithm 2 to train the parameter θ and the bia b via the stochastic gradient descent method for the given parameterized circuit V and the quantum observable O. The parameter updating in each iteration is presented in the Step 3-4, which aims to minimize the loss defined in (8) for each input batch It,∀t ∈ [T ]:\n`It(θ) = 1\n|It| |It|∑ i=1 ( f(θ; ρtraini )− yi + b )2 . (8)\nBased on the chain rule and the Eq. (2), derivatives ∂`It∂θj and ∂`It ∂b could be decomposed into products of objective functions f with different variables, which could be estimated efficiently by counting quantum outputs. In practice, we calculate the value of the objective function by measuring the output state of the QNN for several times and averaging quantum outputs. After the training iteration we obtain the trained parameters θ∗ and b∗. Denote the quantum circuit V ∗ = V (θ∗). We do test for an input state ρtest by calculating the objective function f(ρtest) = 12 + 1 2Tr[O · V\n∗ ρtest V ∗)†]. 
We classify the input ρtest as in the class 0 if f(ρtest) + b∗ < 12 , or the class 1 if f(ρ test) + b∗ > 12 .\nThe time complexity of the QNN training and test could be easily derived by counting resources for estimating all quantum observables. Denote the number of gates and parameters in the quantum circuit V (θ) as ngate and npara, respectively. Denote the number of measurements for estimating each quantum observable as ntrain and ntest for the training and test stages, respectively. Then, the time complexity to train QNNs is O(ngatenparantrainT ), and the time complexity of the test using QNNs\nis O(ngatentest). We emphasize that directly comparing the time complexity of QNNs with classical NNs is unfair due to different parameter strategies. However, the complexity of QNNs indeed shows a polylogarithmic dependence on the dimension of the input data if the number of gates and parameters are polynomial to the number of qubits. Specifically, both the TT-QNN and the SC-QNN equipped with the L-layer encoding circuit in this work have O(nL) gates and O(n) parameters, which lead to the training complexity O(ntrainn2LT ) and the test complexity O(ntestnL)." }, { "heading": "4.2 NUMERICAL SIMULATIONS", "text": "To analyze the practical performance of QNNs for binary classification tasks, we simulate the training and test of QNNs on the MNIST handwritten dataset. The 28 × 28 size image is sampled into 16× 16, 32× 32, and 64× 64 to fit QNNs for qubit number n ∈ {8, 10, 12}. We set the parameter in SC-QNNs as nc = 4 for all qubit number settings. Note that the based on the tree structure, the qubit number of original TT-QNNs is limited to the power of 2. To analysis the behavior of TTQNNs for general qubit numbers, we modify TT-QNNs into Deformed Tree Tensor (DTT) QNNs. The gradient norm for DTT-QNNs is lower bounded by O(1/n), which has a similar form of that for TT-QNNs. We provide more details of DTT-QNNs in Appendix F and denote DTT-QNNs as TT-QNNs in simulation parts (Section 4.2 and Appendix A) for the convenience.\nWe construct the encoding circuits in Section 3.3 with the number of alternating layers L = 1 for 400 training samples and 400 test samples in each class. The TT-QNN and the SC-QNN is compared to the QNN with the random structure. To make a fair comparison, we set the numbers of RY and CNOT gates in the random QNN to be the same with the TT-QNN and the SC-QNN. The objective function of the random QNN is defined as the average of the expectation of the observable σ3 for all qubits in the circuit. The number of the training iteration is 100, the batch size is 20, and the decayed learning rate is adopted as {1.00, 0.75, 0.50, 0.25}. We set ntrain = 200 and ntest = 1000 as numbers of measurements for estimating quantum observables during the training and test stages, respectively. All experiments are simulated through the PennyLane Python package (Bergholm et al., 2020).\nFirstly, we explain our results about QNNs with different qubit numbers in Figure 5. We train TTQNNs, SC-QNNs, and Random-QNNs with the stochastic gradient descent method described in Algorithm 2 for images in the class (0, 2) and the qubit number n ∈ {8, 10, 12}. The total loss is defined as the average of the single-input loss. The training loss and the test error during the training iteration are illustrated in Figures 5(a), 5(e) for the n=8 case, Figures 5(b), 5(f) for the\nn=10 case, and Figures 5(c), 5(g) for the n=12 case. 
The test error of the TT-QNN, the SC-QNN and the Random-QNN converge to around 0.2 for the n=8 case. As the qubit number increases, the converged test error of both TT-QNNs and SC-QNNs remains lower than 0.2, while that of Random-QNNs increases to 0.26 and 0.50 for n=10 and n=12 case, respectively. The training loss of both TT-QNNs and SC-QNNs converges to around 0.15 for all qubit number settings, while that of Random-QNNs remains higher than 0.22. Both the training loss and the test error results show that TT-QNNs and SC-QNNs have better trainability and accuracy on the binary classification compared with Random-QNNs. We record the l2-norm of the gradient during the training for the n=8 case in Figure 5(d). The gradient norm for the TT-QNN and the SC-QNN is mostly distributed in [0.4, 1.4], which is significantly larger than the gradient norm for the Random-QNN that is mostly distributed in [0.1, 0.2]. As shown in Figure 5(d), the gradient norm verifies the lower bounds in the Theorem 3.1. Moreover, we calculate the term α(ρin) defined in Theorem 3.1 and show the result in Figure 5(h). The average of α(ρin) is around 0.6, which is lower bounded by the theoretical result 14 in Theorem 3.2 (L = 1).\nSecondly, we explain our results about QNNs on the binary classification with different class pairs. We conduct the binary classification with the same super parameters mentioned before for 10-qubit QNNs, and the test accuracy and F1-scores for all class pairs {i, j} ⊂ {0, 1, 2, 3, 4} are provided in Table 1. The F1-0 denotes the F1-score when treats the former class to be positive, and the F1-1 denotes the F1-score for the other case. As shown in Table 1, TT-QNNs and SC-QNNs have higher test accuracy and F1-score than Random-QNNs for all class pairs in Table 1. Specifically, test accuracy of TT-QNNs and SC-QNNs exceed that of Random-QNNs by more than 10% for all class pairs except the (0, 1) which is relatively easy to classify.\nIn conclusion, both TT-QNNs and SC-QNNs show better trainability and accuracy on binary classification tasks compared with the random structure QNN, and all theorems are verified by experiments. We provide more experimental details and results about the input model and other classification tasks in Appendix A." }, { "heading": "5 CONCLUSIONS", "text": "In this work, we analyze the vanishing gradient problem in quantum neural networks. We prove that the gradient norm of n-qubit quantum neural networks with the tree tensor structure and the step controlled structure are lower bounded by Ω( 1n ) and Ω(( 1 2 ) nc), respectively. The bound guarantees the trainability of TT-QNNs and SC-QNNs on related machine learning tasks. Our theoretical framework requires fewer assumptions than previous works and meets constraints on quantum neural networks for near-term quantum computers. Compared with the random structure QNN which is known to be suffered from the barren plateaus problem, both TT-QNNs and SC-QNNs show better trainability and accuracy on the binary classification task. We hope the paper could inspire future works on the trainability of QNNs with different architectures and other quantum machine learning algorithms." }, { "heading": "A NUMERICAL SIMULATIONS", "text": "In this section, we provide more experimential details about the input model and other binary classification tasks.\nA.1 QUANTUM INPUT MODEL FOR THE MNIST DATASET\nIn this section, we discuss the training of the input model (Algorithm 1) in detail. 
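As a complement to the hyperparameters listed below, here is a minimal PennyLane sketch of the Algorithm 1 training loop for one input vector; it is a classical simulation, and the specific RY/CZ layer pattern inside W(β) is an assumption, since the circuit of Figure 3 is only described qualitatively in this excerpt.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def f_input(beta, x_state):
    qml.QubitStateVector(x_state, wires=range(n_qubits))      # the target state |x_in>
    for i in range(n_layers):                                  # one alternating block of W(beta)
        for j in range(n_qubits):
            qml.RY(beta[i, 0, j], wires=j)
        for j in range(0, n_qubits - 1, 2):                    # assumed even-pair CZ layer
            qml.CZ(wires=[j, j + 1])
        for j in range(n_qubits):
            qml.RY(beta[i, 1, j], wires=j)
        for j in range(1, n_qubits - 1, 2):                    # assumed shifted-pair CZ layer
            qml.CZ(wires=[j, j + 1])
    # Eq. (7): the average of <Z_i> over all qubits; training drives it toward -1.
    obs = qml.Hamiltonian([1.0 / n_qubits] * n_qubits,
                          [qml.PauliZ(j) for j in range(n_qubits)])
    return qml.expval(obs)

x = np.random.randn(2 ** n_qubits)
x_state = np.array(x / np.linalg.norm(x), requires_grad=False)
beta = np.random.uniform(0, 2 * np.pi, (n_layers, 2, n_qubits), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(100):                                           # 100 iterations, as in the text
    beta = opt.step(lambda b: f_input(b, x_state), beta)
# The encoding circuit is then U(beta*) = W(beta*)^dagger X^{\otimes n} acting on |0...0>.
```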
We construct the encoding circuits in Section 3.3 for each training or test data with the number of alternating layers L = 1. The number of the training iteration is 100. We adopt the decayed learning rate as {0.100, 0.075, 0.050, 0.025}. We illustrate the loss function defined in Eq. 7 during the training of Algorithm 1 for label in {0, 1, 2, 3} for the n=8 case in Figure 6, in which we show the training of the input model for one image per sub-figure. All shown loss functions converge to around -0.6 after 60 iterations.\nFor a better understanding to the input model, we provide the visualization of the encoding circuit in Figure 7. We notice that the encoding circuit could only catch a few features from the input data (except the image 1 which shows good results). Despite this, we obtain relatively good results on binary classification tasks which employ the mentioned encoding circuit.\nApart from the binary classification tasks using QNNs equipped with the encoding circuit provided in Section 3.3, we perform some experiments such that the encoding circuit is replaced by the exact amplitude encoding, which is commonly used in existing quantum machine learning algorithms. Figure 8 demonstrates the simulation on MNIST classifications between images (0, 2), which shows the convergence of training loss (Figure 8(a)) and the test error (Figure 8(b)). The norm of the gradient is counted in Figure 8(c), in which both the TT-QNN and the SC-QNN show larger gradient norm than the Random-QNN. Thus, the trainability of TT-QNNs and SC-QNNs remains when replace the encoding circuit with the exact amplitude encoding.\nThe training and test accuracy for other class pairs are summarized in Table 2. We notice that compared with QNNs using the encoding circuit, QNNs with the exact encoding tend to have better\nperformance on the accuracy, which is reasonable since the exact amplitude encoding remains all information of the input data.\nA.2 QNNS WITH DIFFERENT QUBIT SIZES\nWe summarize the results of training and test accuracy for different qubit numbers in Table 3, which corresponds to the main results presented in Figure 5 in Section 4.2. As shown in Table 3, both the training and test accuracy of TT-QNNs and SC-QNNs remain at a high level for all qubit number settings, while the training and test accuracy of Random-QNNs decrease to around 0.5 for the 12- qubit case, which means the Random-QNN cannot classify better than a random guessing.\nA.3 QNNS WITH DIFFERENT LABEL PAIRS\nWe summarize the results of training and test accuracy, along with the F1-score, for QNNs on the classification between different label pairs in Table 4 and Table 5, for qubit number 8 and 10, respectively. For all label pairs, TT-QNNs and SC-QNNs show higher performance of the training accuracy, the test accuracy, and the F1-scores than that of Random-QNNs. Moreover, most of test accuracy of Random-QNNs drop for the same class pair when the qubit number is increased from 8 to 10, which suggest the trainability of Random-QNNs get worse as the qubit number increases.\nA.4 QNNS WITH DIFFERENT ROTATION GATES\nIn this section, we simulate variants of TT-QNNs and SC-QNNs such that single-qubit gate operations are extended from {RY} to {RX, RY, RZ}. Results on the binary classification between MNIST image (0, 2) using 8-qubit QNNs are provided in Figure 9. The training loss converges to around 0.23 and 0.175 for the TT-QNN and the SC-QNN, respectively, and the test error converges to around 0.3 for both the TT-QNN and the SC-QNN. 
We remark that based on results in Figures 5(a) and Figure 5(e), the training loss of original TT-QNN and SC-QNN converge at around 0.15, and the test error converge at around 0.20 and 0.15, respectively. Thus, both the TT-QNN and the SC-QNN show the worse performance than original QNNs when employing the extended gate set {RX, RY, RZ}. Another result is provided in Table 6 which shows the difference on the training and test accuracy.\nAs a conclusion, employing gate set {RX, RY, RZ} could worse the performance of the QNNs on real-world problems, which may due to the fact that real-world data lie in the real space, while operations {RX, RZ} introduce the imaginary term to the state.\nB NOTES ABOUT THE UNITARY 2-DESIGN\nIn this section, we introduce the notion of the unitary 2-design. Consider the finite gate set S = {Gi}|S|i=1 in the d-dimensional Hilbert space. We denote U(d) as the unitary gate group with the dimension d. We denote Pt,t(G) as the polynomial function which has the degree at most t on the matrix elements of G and at most t on the matrix elements of G†. Then, we could say the set S to be the unitary t-design if and only if for every function Pt,t(·), Eq. (9) holds:\n1 |S| ∑ G∈S Pt,t(G) = ∫ U(d) dµ(G)Pt,t(G), (9)\nwhere dµ(·) denotes the Haar distribution. The Haar distribution dµ(·) is defined that for any function f and any matrix K ∈ U(d),∫\nU(d)\ndµ(G)f(G) = ∫ U(d) dµ(G)f(KG) = ∫ U(d) dµ(G)f(GK).\nThe form in the right side of (9) can be viewed as the average or the expectation of the function Pt,t(G). We remark that only the parameterized gates RY = e−iθσ2 could not form a universal gate set even in the single-qubit space U(2), thus quantum circuits employing parameterized RY gates could not form the 2-design. This is only a simple introduction about the unitary 2-design, and we refer readers to Puchała & Miszczak (2011) and Cerezo et al. (2020) for more detail." }, { "heading": "C TECHNICAL LEMMAS", "text": "In this section we provide some technical lemmas. Lemma C.1. Let CNOT = σ0 ⊗ |0〉〈0|+ σ1 ⊗ |1〉〈1|. Then\nCNOT(σj ⊗ σk)CNOT† =(δj0 + δj1)(δk0 + δk3)σj ⊗ σk + (δj0 + δj1)(δk1 + δk2)σjσ1 ⊗ σk + (δj2 + δj3)(δk0 + δk3)σj ⊗ σkσ3 − (δj2 + δj3)(δk1 + δk2)σjσ1 ⊗ σkσ3.\nFurther for the case σk = σ0,\nCNOT(σj ⊗ σ0)CNOT† = (δj0 + δj1)σj ⊗ σ0 + (δj2 + δj3)σj ⊗ σ3.\nProof.\nCNOT(σj ⊗ σk)CNOT†\n= (σ0 ⊗ |0〉〈0|+ σ1 ⊗ |1〉〈1|) (σj ⊗ σk) (σ0 ⊗ |0〉〈0|+ σ1 ⊗ |1〉〈1|)\n= ( σ0 ⊗\nσ0 + σ3 2 + σ1 ⊗ σ0 − σ3 2\n) (σj ⊗ σk) ( σ0 ⊗\nσ0 + σ3 2 + σ1 ⊗ σ0 − σ3 2 ) = 1\n4 (σj ⊗ σk + σ1σjσ1 ⊗ σk + σj ⊗ σ3σkσ3 + σ1σjσ1 ⊗ σ3σkσ3) + 1\n4 (σjσ1 ⊗ σk + σ1σj ⊗ σk − σjσ1 ⊗ σ3σkσ3 − σ1σj ⊗ σ3σkσ3)\n+ 1\n4 (σj ⊗ σkσ3 + σj ⊗ σ3σk − σ1σjσ1 ⊗ σkσ3 − σ1σjσ1 ⊗ σ3σk)\n+ 1\n4 (σjσ1 ⊗ σ3σk − σjσ1 ⊗ σkσ3 + σ1σj ⊗ σkσ3 − σ1σj ⊗ σ3σk)\n=(δj0 + δj1)(δk0 + δk3)σj ⊗ σk + (δj0 + δj1)(δk1 + δk2)σjσ1 ⊗ σk + (δj2 + δj3)(δk0 + δk3)σj ⊗ σkσ3 − (δj2 + δj3)(δk1 + δk2)σjσ1 ⊗ σkσ3.\nFor the case σk = σ0, we have, CNOT(σj ⊗ σ0)CNOT† = (δj0 + δj1)σj ⊗ σ0 + (δj2 + δj3)σj ⊗ σ3.\nLemma C.2. Let CZ = σ0 ⊗ |0〉〈0|+ σ3 ⊗ |1〉〈1|. Then CZ(σj ⊗ σk)CZ† =(δj0 + δj3)(δk0 + δk3)σj ⊗ σk + (δj0 + δj3)(δk1 + δk2)σjσ3 ⊗ σk\n+ (δj1 + δj2)(δk0 + δk3)σj ⊗ σkσ3 − (δj1 + δj2)(δk1 + δk2)σjσ3 ⊗ σkσ3. 
Further for the case σk = σ0,\nCZ(σj ⊗ σ0)CZ† = (δj0 + δj3)σj ⊗ σ0 + (δj1 + δj2)σj ⊗ σ3.\nProof.\nCZ(σj ⊗ σk)CZ†\n= (σ0 ⊗ |0〉〈0|+ σ3 ⊗ |1〉〈1|) (σj ⊗ σk) (σ0 ⊗ |0〉〈0|+ σ3 ⊗ |1〉〈1|)\n= ( σ0 ⊗\nσ0 + σ3 2 + σ3 ⊗ σ0 − σ3 2\n) (σj ⊗ σk) ( σ0 ⊗\nσ0 + σ3 2 + σ3 ⊗ σ0 − σ3 2 ) = 1\n4 (σj ⊗ σk + σ3σjσ3 ⊗ σk + σj ⊗ σ3σkσ3 + σ3σjσ3 ⊗ σ3σkσ3) + 1\n4 (σjσ3 ⊗ σk + σ3σj ⊗ σk − σjσ3 ⊗ σ3σkσ3 − σ3σj ⊗ σ3σkσ3)\n+ 1\n4 (σj ⊗ σkσ3 + σj ⊗ σ3σk − σ3σjσ3 ⊗ σkσ3 − σ3σjσ3 ⊗ σ3σk)\n+ 1\n4 (σjσ3 ⊗ σ3σk − σjσ3 ⊗ σkσ3 + σ3σj ⊗ σkσ3 − σ3σj ⊗ σ3σk)\n=(δj0 + δj3)(δk0 + δk3)σj ⊗ σk + (δj0 + δj3)(δk1 + δk2)σjσ3 ⊗ σk + (δj1 + δj2)(δk0 + δk3)σj ⊗ σkσ3 − (δj1 + δj2)(δk1 + δk2)σjσ3 ⊗ σkσ3.\nFor the case σk = σ0, we have, CZ(σj ⊗ σ0)CZ† = (δj0 + δj3)σj ⊗ σ0 + (δj1 + δj2)σj ⊗ σ3.\nLemma C.3. Let θ be a variable with uniform distribution in [0, 2π]. Let A,C : H2 → H2 be arbitrary linear operators and let B = D = σj be arbitrary Pauli matrices, where j ∈ {0, 1, 2, 3}. Then\nEθTr[WAW †B]Tr[WCW †D] = 1\n2π ∫ 2π 0 Tr[WAW †B]Tr[WCW †D]dθ (10)\n=\n[ 1\n2 + δj0 + δjk 2\n] Tr[AB]Tr[CD] + [ −1\n2 + δj0 + δjk 2\n] Tr[ABσk]Tr[CDσk], (11)\nwhere W = e−iθσk and k ∈ {1, 2, 3}.\nProof. First we simply replace the term W = e−iθσk = I cos θ − iσk sin θ. 1\n2π ∫ 2π 0 dθTr[WAW †B]Tr[WCW †D]\n= 1\n2π ∫ 2π 0 dθTr[(I cos θ − iσk sin θ)A(I cos θ + iσk sin θ)B] · Tr[(I cos θ − iσk sin θ)C(I cos θ + iσk sin θ)D]\n= 1\n2π ∫ 2π 0 dθ { cos2 θTr[AB]− i sin θ cos θTr[σkAB] + i sin θ cos θTr[AσkB] + sin2 θTr[σkAσkB] }\n· { cos2 θTr[CD]− i sin θ cos θTr[σkCD] + i sin θ cos θTr[CσkD] + sin2 θTr[σkCσkD] } .\nWe remark that: 1\n2π ∫ 2π 0 dθ cos4 θ = 3 8 , (12)\n1\n2π ∫ 2π 0 dθ sin4 θ = 3 8 , (13)\n1\n2π ∫ 2π 0 dθ cos2 θ sin2 θ = 1 8 , (14)\n1\n2π ∫ 2π 0 dθ cos3 θ sin θ = 0, (15)\n1\n2π ∫ 2π 0 dθ cos θ sin3 θ = 0. (16)\nThen\nThe integration term = 3\n8 Tr[AB]Tr[CD] +\n3 8 Tr[σkAσkB]Tr[σkCσkD]\n+ 1\n8 Tr[AB]Tr[σkCσkD] +\n1 8 Tr[σkAσkB]Tr[CD]\n−1 8 Tr[σkAB]Tr[σkCD]− 1 8 Tr[AσkB]Tr[CσkD] + 1\n8 Tr[σkAB]Tr[CσkD] +\n1 8 Tr[AσkB]Tr[σkCD]\n=Tr[AB]Tr[CD] [ 1\n2 + δj0 + δjk 2 ] +Tr[ABσk]Tr[CDσk] [ −1\n2 + δj0 + δjk 2\n] .\nThe last equation is derived by noticing that for B = σj , Tr[σkAσkB] = Tr[Aσkσjσk]\n= [2(δj0 + δjk)− 1] · Tr[Aσj ] = [2(δj0 + δjk)− 1] · Tr[AB],\nTr[σkAB]− Tr[AσkB] = Tr[Aσjσk]− Tr[Aσkσj ] = 2(1− δj0 − δjk) · Tr[Aσjσk] = 2(1− δj0 − δjk) · Tr[ABσk],\nwhile similar forms hold for D = σj , Tr[σkCσkD] = Tr[Cσkσjσk]\n= [2(δj0 + δjk)− 1] · Tr[Cσj ] = [2(δj0 + δjk)− 1] · Tr[CD],\nTr[σkCD]− Tr[CσkD] = Tr[Cσjσk]− Tr[Cσkσj ] = 2(1− δj0 − δjk) · Tr[Cσjσk] = 2(1− δj0 − δjk) · Tr[CDσk].\nLemma C.4. Let θ be a variable with uniform distribution in [0, 2π]. Let A,C : H2 → H2 be arbitrary linear operators and let B = D = σj be arbitrary Pauli matrices, where j ∈ {0, 1, 2, 3}. Then\nEθTr[GAW †B]Tr[GCW †D] = 1\n2π ∫ 2π 0 Tr[GAW †B]Tr[GCW †D]dθ\n=\n[ 1\n2 − δj0 + δjk 2\n] Tr[AB]Tr[CD] + [ −1\n2 − δj0 + δjk 2\n] Tr[ABσk]Tr[CDσk],\nwhere W = e−iθσk , G = ∂W∂θ and k ∈ {1, 2, 3}.\nProof. 
First we simply replace the term W = e−iθσk = I cos θ − iσk sin θ and G = ∂W∂θ = −I sin θ − iσk cos θ.\n1\n2π ∫ 2π 0 dθTr[GAW †B]Tr[GCW †D]\n= 1\n2π ∫ 2π 0 dθTr[(−I sin θ − iσk cos θ)A(I cos θ + iσk sin θ)B]Tr[(−I sin θ − iσk cos θ)C(I cos θ + iσk sin θ)D]\n= 1\n2π ∫ 2π 0 dθ { − sin θ cos θTr[AB]− i cos2 θTr[σkAB]− i sin2 θTr[AσkB] + cos θ sin θTr[σkAσkB] } · { − sin θ cos θTr[CD]− i cos2 θTr[σkCD]− i sin2 θTr[CσkD] + cos θ sin θTr[σkCσkD] } .\nThe integration above could be simplified using equations 12-16,\nThe integration term = 1\n8 Tr[AB]Tr[CD] +\n1 8 Tr[σkAσkB]Tr[σkCσkD]\n−1 8 Tr[AB]Tr[σkCσkD]− 1 8 Tr[σkAσkB]Tr[CD] −3 8 Tr[σkAB]Tr[σkCD]− 3 8 Tr[AσkB]Tr[CσkD] −1 8 Tr[σkAB]Tr[CσkD]− 1 8 Tr[AσkB]Tr[σkCD]\n=Tr[AB]Tr[CD] [ 1\n2 − δj0 + δjk 2 ] +Tr[ABσk]Tr[CDσk] [ −1\n2 − δj0 + δjk 2\n] .\nThe last equation is derived by noticing that for B = σj ,\nTr[σkAσkB] = Tr[Aσkσjσk] = [2(δj0 + δjk)− 1] · Tr[Aσj ] = [2(δj0 + δjk)− 1] · Tr[AB],\nTr[σkAB] + Tr[AσkB] = Tr[Aσjσk] + Tr[Aσkσj ] = 2(δj0 + δjk) · Tr[Aσjσk] = 2(δj0 + δjk) · Tr[ABσk],\nwhile similar forms hold for D = σj ,\nTr[σkCσkD] = Tr[Cσkσjσk] = [2(δj0 + δjk)− 1] · Tr[Cσj ] = [2(δj0 + δjk)− 1] · Tr[CD],\nTr[σkCD] + Tr[CσkD] = Tr[Cσjσk]− Tr[Cσkσj ] = 2(δj0 + δjk) · Tr[Cσjσk] = 2(δj0 + δjk) · Tr[CDσk]." }, { "heading": "D THE PROOF OF THEOREM 3.1: THE TT PART", "text": "Now we begin the proof of Theorem 3.1.\nProof. Firstly we remark that by Lemma D.1, each partial derivative is calculated as\n∂fTT\n∂θ (k) j\n= 1\n2\n( Tr[O · VTT(θ+)ρinVTT(θ+)†]− Tr[O · VTT(θ−)ρinVTT(θ−)†] ) ,\nsince the expectation of the quantum observable is bounded by [−1, 1], the square of the partial derivative could be easily bounded as: (\n∂fTT\n∂θ (k) j\n)2 ≤ 1.\nBy summing up 2n− 1 parameters, we obtain\n‖∇θfTT‖2 = ∑ j,k\n( ∂fTT\n∂θ (k) j\n)2 ≤ 2n− 1.\nOn the other side, the lower bound could be derived as follows,\nEθ‖∇θfTT‖2 ≥ 1+logn∑ j=1 Eθ\n( ∂fTT\n∂θ (1) j\n)2 (17)\n= 1+logn∑ j=1 4Eθ ( fTT − 1 2 )2 (18)\n≥ 1 + log n 2n\n· ( Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2) , (19)\nwhere Eq. (18) is derived using Lemma D.2, and Eq. (19) is derived using Lemma D.3.\nNow we provide the detail and the proof of Lemmas D.1, D.2, D.3.\nLemma D.1. Consider the objective function of the QNN defined as\nf(θ) = 1 + 〈O〉\n2 = 1 + Tr[O · V (θ)ρinV (θ)†] 2 ,\nwhere θ encodes all parameters which participate the circuit as e−iθjσk , k ∈ 1, 2, 3, ρin denotes the input state and O is an arbitrary quantum observable. Then, the partial derivative of the function respect to the parameter θj could be calculated by\n∂f ∂θj = 1 2\n( Tr[O · V (θ+)ρinV (θ+)†]− Tr[O · V (θ−)ρinV (θ−)†] ) ,\nwhere θ+ ≡ θ + π4 ej and θ− ≡ θ − π 4 ej .\nProof. First we assume that the circuit V (θ) consists of p parameters, and could be written in the sequence: V (θ) = Vp(θp) · Vp−1(θp−1) · · ·V1(θ1), such that each block Vj contains only one parameter.\nConsider the observable defined as O′ = V †j+1 · · ·V †pOVp · · ·Vj+1 and the input state defined as ρ′in = Vj−1 · · ·V1ρinV † 1 · · ·V † j−1. The parameter shifting rule (Crooks, 2019) provides a gradient\ncalculation method for the single parameter case. For fj(θj) = Tr[O′ ·U(θj)ρ′inU(θj)†], the gradient could be calculated as\ndfj dθj = fj(θj + π 4 )− fj(θj − π 4 ).\nThus, by inserting the form of O′ and ρ′in, we could obtain\n∂f ∂θj = dfj dθj = fj(θj+ π 4 )−fj(θj− π 4 ) = 1 2\n( Tr[O · V (θ+)ρinV (θ+)†]− Tr[O · V (θ−)ρinV (θ−)†] ) .\nLemma D.2. For the objective function fTT defined in Eq. 
(3), the following formula holds for every j ∈ {1, 2, · · · , 1 + log n}:\nEθ\n( ∂fTT\n∂θ (1) j\n)2 = 4 · Eθ(fTT − 1\n2 )2, (20)\nwhere the expectation is taken for all parameters in θ with uniform distribution in [0, 2π].\nProof. First we rewrite the formulation of fTT in detail:\nfTT = 1\n2 +\n1 2 Tr [ σ(3,0,··· ,0) · Vm+1CXm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX†mV † m+1 ] , (21)\nwhere m = log n and we denote\nσ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin .\nThe operation V` consists of n·21−` single qubit rotationsW (j)` = e−iσ2θ (j) ` on the (j−1)·2`−1+1th qubit, where j = 1, 2, · · · , n · 21−`. The operation CX` consists of n · 2−` CNOT gates, where each of them acts on the (j − 1) · 2` + 1-th and (j − 0.5) · 2` + 1-th qubit, for j = 1, 2, · · · , n · 2−`.\nNow we focus on the partial derivative of the function f to the parameter θ(1)j . We have:\n∂fTT\n∂θ (1) j\n= 1\n2 Tr\n[ σ(3,0,··· ,0) · Vm+1CXm · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † mV † m+1 ] (22)\n+ 1\n2 Tr\n[ σ(3,0,··· ,0) · Vm+1CXm · · ·Vj · · ·CX1V1 · ρin · V †1 CX † 1 · · · ∂V †j\nθ (1) j\n· · ·CX†mV † m+1 ] (23)\n= Tr [ σ(3,0,··· ,0) · Vm+1CXm · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † mV † m+1\n] .\n(24)\nThe Eq. (24) holds because both terms in (22) and (23) except ρin are real matrices, and ρin = ρ † in. The key idea to derive Eθ ( ∂fTT\n∂θ (1) j\n)2 = 4 · Eθ(fTT − 12 ) 2 is that for cases B = D = σj ∈ {σ1, σ3},\nthe term δj0+δj22 = 0, which means Lemma C.3 and Lemma C.4 collapse to the same formulation:\nEθTr[WAW †B]Tr[WCW †D] = EθTr[GAW †B]Tr[GCW †D]\n= 1\n2 Tr[Aσ1]Tr[Cσ1] +\n1 2 Tr[Aσ3]Tr[Cσ3].\nNow we write the analysis in detail.\nEθ ( ∂fTT ∂θ\n(1) j\n)2 − 4(fTT − 1\n2 )2\n (25)\n=Eθ1 · · ·EθmEθ(1)m+1Tr [ σ(3,0,··· ,0) · Vm+1CXmAmCXmV †m+1 ]2 (26)\n−Eθ1 · · ·EθmEθ(1)m+1Tr [ σ(3,0,··· ,0) · Vm+1CXmBmCXmV †m+1 ]2 (27)\n=Eθ1 · · ·Eθm { 1 2 Tr [ σ(3,0,··· ,0,3,0,··· ,0) ·Am ]2 + 1 2 Tr [ σ(1,0,··· ,0,0,0,··· ,0) ·Am ]2} (28)\n−Eθ1 · · ·Eθm { 1 2 Tr [ σ(3,0,··· ,0,3,0,··· ,0) ·Bm ]2 + 1 2 Tr [ σ(1,0,··· ,0,0,0,··· ,0) ·Bm ]2} , (29)\nwhere\nAm = VmCXm−1 · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † m−1V † m,\nBm = VmCXm−1 · · ·Vj · · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † m−1V † m,\nand Eq. (28,29) are derived using the collapsed form of Lemma C.1:\nCNOT(σ1 ⊗ σ0)CNOT† = σ1 ⊗ σ0, CNOT(σ3 ⊗ σ0)CNOT† = σ3 ⊗ σ3, and θ` denotes the vector consisted with all parameters in the `-th layer. The integration could be performed for parameters {θm,θm−1, · · · ,θj+1}. It is not hard to find that after the integration of the parameters θj+1, the term Tr[σi · Aj ]2 and Tr[σi · Bj ]2 have the opposite coefficients. Besides, the first index of each Pauli tensor product σ(i1,i2,··· ,in) could only be i1 ∈ {1, 3} because of the Lemma C.3. So we could write\nEθ ( ∂fTT ∂θ\n(1) j\n)2 − 4(fTT − 1\n2 )2 (30) =Eθ1 · · ·Eθj\n ∑ i1∈{1,3} 3∑ i2=0 · · · 3∑ in=0 aiTr [σi ·Aj ]2 − aiTr [σi ·Bj ]2 (31)\nwhere\nAj = ∂Vj\n∂θ (1) j\nCXj−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CXj−1V † j\n= (G (1) j ⊗ I ⊗(n−1))A /θ\n(1) j\nj (W (1)† j ⊗ I ⊗(n−1)),\nBj = VjCXj−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CXj−1V † j\n= (W (1) j ⊗ I ⊗(n−1))A /θ\n(1) j\nj (W (1)† j ⊗ I ⊗(n−1)),\nand ai is the coefficient of the term Tr [σi ·Aj ]2. We denote G(1)j = ∂W\n(1) j\n∂θ (1) j\nand use A /θ\n(1) j\nj to denote\nthe rest part of Aj and Bj . 
By Lemma C.3 and Lemma C.4, we have\nE θ (1) j\n[ Tr [σi ·Aj ]2 − Tr [σi ·Bj ]2 ] = 0,\nsince for the case i1 ∈ {1, 3}, the term δi10+δi12\n2 = 0, which means Lemma C.3 and Lemma C.4 have the same formulation. Then, we derive the Eq. (20).\nLemma D.3. For the loss function f defined in (3), we have:\nEθ(fTT − 1\n2 )2 ≥\n{ Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 8n , (32)\nwhere we denote σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin ,\nand the expectation is taken for all parameters in θ with uniform distributions in [0, 2π].\nProof. First we expand the function fTT in detail,\nfTT = 1\n2 +\n1 2 Tr [ σ(3,0,··· ,0) · Vm+1CXm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX†mV † m+1 ] , (33)\nwhere m = log n. Now we consider the expectation of (fTT − 12 ) 2 under the uniform distribution for θ(1)m+1:\nE θ (1) m+1\n(fTT − 1\n2 )2 =\n1 4 E θ (1) m+1\n{ Tr [ σ(3,0,··· ,0) · Vm+1CXm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX†mV † m+1 ]}2 (34)\n= 1\n8\n{ Tr [ σ(3,0,··· ,0) ·A′ ]}2 + 1\n8\n{ Tr [ σ(1,0,··· ,0) ·A′ ]}2 (35)\n= 1\n8\n{ Tr [ σ(3,0,··· ,0,3,0,··· ,0) ·A ]}2 + 1\n8\n{ Tr [ σ(1,0,··· ,0,0,··· ,0) ·A ]}2 (36)\n≥ 1 8\n{ Tr [ σ(1,0,··· ,0,0,··· ,0) ·A ]}2 , (37)\nwhere\nA′ = CXmVm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·VmCX†m, (38)\nA = Vm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·Vm, (39)\nand Eq. (35) is derived using Lemma C.3, Eq. (36) is derived using the collapsed form of Lemma C.1:\nCNOT(σ1 ⊗ σ0)CNOT† = σ1 ⊗ σ0, CNOT(σ3 ⊗ σ0)CNOT† = σ3 ⊗ σ3,\nWe remark that during the integration of the parameters {θ(j)` } in each layer ` ∈ {1, 2, · · · ,m+ 1}, the coefficient of the term {Tr[σ(1,0,··· ,0) · A]}2 only times a factor 1/2 for the case j = 1, and the coefficient remains for the cases j > 1 (check Lemma C.3 for detail). Since the formulation {Tr[σ(1,0,··· ,0) · A]}2 remains the same when merging the operation CX` with σ(1,0,··· ,0), for ` ∈ {1, 2, · · · ,m}, we could generate the following equation,\nEθ2 · · ·Eθm+1(fTT − 1 2 )2 ≥ (1 2 )m+2\n{ Tr [ σ(1,0,··· ,0) · V1 · ρin · V †1 ]}2 , (40)\nwhere θ` denotes the vector consisted with all parameters in the `-th layer.\nFinally by using Lemma C.3, we could integrate the parameters {θ(j)1 }nj=1 in (40):\nEθ(fTT − 1\n2 )2 = Eθ1 · · ·Eθm+1(fTT −\n1 2 )2 (41)\n≥ { Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 2m+3\n(42)\n=\n{ Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 8n . (43)" }, { "heading": "E THE QUANTUM INPUT MODEL", "text": "For the convenience of the analysis, we consider the encoding model that the number of alternating layers is L. The model begins with the inital state |0〉⊗n, where n is the number of the qubit. Then we employ the X gate to each qubit which transform the state into |1〉⊗n. Next we employ L alternating layer operations, each of which contains four parts: a single qubit rotation layer denoted as V2i−1, a CZ gate layer denoted as CZ2, again a single qubit rotation layer denoted as V2i, and another CZ gate layer with alternating structures denoted as CZ1, for i ∈ {1, 2, · · · , L}. 
Each single qubit gate contains a parameter encoded in the phase: W (k)j = e\n−iσ2β(k)j , and each single qubit rotation layer could be written as\nVj = Vj(βj) = W (1) j ⊗W (2) j ⊗ · · · ⊗W (n) j .\nFinally, we could mathematically define the encoding model:\nU(ρin) = V2L+1ULUL−1 · · ·U1X⊗n,\nwhere Uj = CZ1V2jCZ2V2j−1 is the j-th alternating layer.\nBy employing the encoding model illustrated in Figure 10 for the state preparation, we find that the expectation of the term α(ρin) defined in Theorem 1.1 has the lower bound independent from the qubit number.\nTheorem E.1. Suppose the input state ρin(θ) is prepared by the L-layer encoding model illustrated in Figure 10. Then,\nEβα(ρin) = Eβ ( Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2) ≥ 2−2L, where β denotes all variational parameters in the encoding circuit, and the expectation is taken for all parameters in β with uniform distribution in [0, 2π].\nProof. Define ρj = UjUj−1 · · ·U1X⊗n|0〉⊗n〈0|⊗nX⊗nU†1 · · ·U † j−1U † j , for j ∈ {0, 1, · · · , L}. We have:\nEβ ( Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2) (44)\n= Eβ1 · · ·Eβ2L+1 ( Tr [ σ(1,0,··· ,0) · V2L+1ρLV †2L+1 ]2 + Tr [ σ(3,0,··· ,0) · V2L+1ρLV †2L+1 ]2) (45)\n= Eβ1 · · ·Eβ2L ( Tr [ σ(1,0,··· ,0) · ρL ]2 + Tr [ σ(3,0,··· ,0) · ρL ]2) . (46)\n≥ 2−2L ( Tr [ σ(1,0,··· ,0) · ρ0 ]2 + Tr [ σ(3,0,··· ,0) · ρ0 ]2) (47)\n= 2−2L · (02 + (−1)2) = 2−2L, (48)\nwhere Eq. (45) is derived from the definition of ρL. Eq. (46) is derived using Lemma C.3. Eq. (47) is derived by noticing that for each j ∈ {0, 1, · · · , L− 1}, the following equations holds,\nEβ1 · · ·Eβ2j+2 ( Tr [ σ(1,0,··· ,0) · ρj+1 ]2 + Tr [ σ(3,0,··· ,0) · ρj+1 ]2) (49)\n= Eβ1 · · ·Eβ2j+2 ( Tr [ σ(1,0,··· ,0) · Uj+1ρjU†j+1 ]2 + Tr [ σ(3,0,··· ,0) · Uj+1ρjU†j+1 ]2) (50)\n= Eβ1 · · ·Eβ2j+2Tr [ σ(1,0,··· ,0) · CZ1V2j+2CZ2V2j+1ρjV †2j+1CZ2V † 2j+2CZ1 ]2 (51)\n+ Eβ1 · · ·Eβ2j+2Tr [ σ(3,0,··· ,0) · CZ1V2j+2CZ2V2j+1ρjV †2j+1CZ2V † 2j+2CZ1 ]2 (52)\n= Eβ1 · · ·Eβ2j+2Tr [ σ(1,3,0,··· ,0) · V2j+2CZ2V2j+1ρjV †2j+1CZ2V † 2j+2 ]2 (53)\n+ Eβ1 · · ·Eβ2j+2Tr [ σ(3,0,··· ,0) · V2j+2CZ2V2j+1ρjV †2j+1CZ2V † 2j+2 ]2 (54)\n≥ Eβ1 · · ·Eβ2j+2Tr [ σ(3,0,··· ,0) · V2j+2CZ2V2j+1ρjV †2j+1CZ2V † 2j+2 ]2 (55)\n= Eβ1 · · ·Eβ2j+1 ( 1 2 Tr [ σ(1,0,··· ,0) · CZ2V2j+1ρjV †2j+1CZ2 ]2 + 1 2 Tr [ σ(3,0,··· ,0) · CZ2V2j+1ρjV †2j+1CZ2 ]2) (56)\n= Eβ1 · · ·Eβ2j+1 ( 1 2 Tr [ σ(1,0,··· ,0,3) · V2j+1ρjV †2j+1 ]2 + 1 2 Tr [ σ(3,0,··· ,0) · V2j+1ρjV †2j+1 ]2) (57)\n≥ Eβ1 · · ·Eβ2j+1 1 2 Tr [ σ(3,0,··· ,0) · V2j+1ρjV †2j+1 ]2 (58) = Eβ1 · · ·Eβ2j 1\n4\n( Tr [ σ(1,0,··· ,0) · ρj ]2 + Tr [ σ(3,0,··· ,0) · ρj ]2) , (59)\nwhere Eq. (50) is derived from the definition of ρj+1. Eq. (51-52) are derived from the definition of Uj+1. Eq. (53-54) and Eq. (57) are derived using Lemma C.1. Eq. (56) and Eq. (59) are derived using Lemma C.3." }, { "heading": "F THE DEFORMED TREE TENSOR QNN", "text": "Similar to the TT-QNN case, the objective function for the Deformed Tree Tensor (DTT) QNN is given in the form:\nfDTT(θ) = 1\n2 +\n1 2 Tr[σ3 ⊗ I⊗(n−1)VDTT(θ)ρinVDTT(θ)†], (60)\nwhere VDTT denotes the circuit operation of DTT-QNN which is illustrated in Figure 11. The lower bound result for the gradient norm of DTT-QNNs is provided in Theorem F.1.\nTheorem F.1. 
Consider the n-qubit DTT-QNN defined in Figure 11 and the corresponding objective function fDTT defined in (60), then we have:\n1 + log n\n4n · α(ρin) ≤ Eθ‖∇θfDTT‖2 ≤ 2n− 1, (61)\nwhere the expectation is taken for all parameters in θ with uniform distributions in [0, 2π], ρin ∈ C2 n×2n denotes the input state, α(ρin) = Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2 , and σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin .\nProof. Firstly we remark that by Lemma D.1, each partial derivative is calculated as\n∂fDTT\n∂θ (k) j\n= 1\n2\n( Tr[O · VDTT(θ+)ρinVDTT(θ+)†]− Tr[O · VDTT(θ−)ρinVDTT(θ−)†] ) ,\nsince the expectation of the quantum observable is bounded by [−1, 1], the square of the partial derivative could be easily bounded as: (\n∂fDTT\n∂θ (k) j\n)2 ≤ 1.\nBy summing up 2n− 1 parameters, we obtain\n‖∇θfDTT‖2 = ∑ j,k\n( ∂fDTT\n∂θ (k) j\n)2 ≤ 2n− 1.\nOn the other side, the lower bound could be derived as follows,\nEθ‖∇θfDTT‖2 ≥ 1+dlogne∑ j=1 Eθ\n( ∂fDTT\n∂θ (1) j\n)2 (62)\n= 1+dlogne∑ j=1 4Eθ ( fDTT − 1 2 )2 (63)\n≥ 1 + log n 4n\n· ( Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2) , (64)\nwhere Eq. (63) is derived using Lemma F.1, and Eq. (64) is derived using Lemma F.2.\nLemma F.1. For the objective function fDTT defined in Eq. (60), the following formula holds for every j ∈ {1, 2, · · · , 1 + dlog ne}:\nEθ\n( ∂fDTT\n∂θ (1) j\n)2 = 4 · Eθ(fDTT − 1\n2 )2, (65)\nwhere the expectation is taken for all parameters in θ with uniform distribution in [0, 2π].\nProof. The proof has a very similar formulation compare to the original tree tensor case. First we rewrite the formulation of fDTT in detail:\nfDTT = 1\n2 +\n1 2 Tr [ σ(3,0,··· ,0) · Vm+1CXm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX†mV † m+1 ] , (66)\nwhere m = dlog ne and we denote\nσ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin .\nThe operation V` consists of bn · 21−`c single qubit rotations W (j)` = e−iσ2θ (j) ` on the (j − 1) · 2`−1 + 1-th qubit, where j = 1, 2, · · · , bn · 21−`c. The operation CX` consists of bn · 2−`c CNOT gates, where each of them acts on the (j − 1) · 2` + 1-th and (j − 0.5) · 2` + 1-th qubit, for j = 1, 2, · · · , bn · 2−`c.\nNow we focus on the partial derivative of the function f to the parameter θ(1)j . We have:\n∂fDTT\n∂θ (1) j\n= 1\n2 Tr\n[ σ(3,0,··· ,0) · Vm+1CXm · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † mV † m+1 ] (67)\n+ 1\n2 Tr\n[ σ(3,0,··· ,0) · Vm+1CXm · · ·Vj · · ·CX1V1 · ρin · V †1 CX † 1 · · · ∂V †j\nθ (1) j\n· · ·CX†mV † m+1 ] (68)\n= Tr [ σ(3,0,··· ,0) · Vm+1CXm · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † mV † m+1\n] .\n(69)\nThe Eq. (69) holds because both terms in (67) and (68) except ρin are real matrices, and ρin = ρ † in. 
Similar to the tree tensor case, the key idea to derive Eθ ( ∂fDTT\n∂θ (1) j\n)2 = 4 · Eθ(fDTT − 12 ) 2 is that for\ncases B = D = σj ∈ {σ1, σ3}, the term δj0+δj22 = 0, which means Lemma C.3 and Lemma C.4 collapse to the same formulation:\nEθTr[WAW †B]Tr[WCW †D] = EθTr[GAW †B]Tr[GCW †D]\n= 1\n2 Tr[Aσ1]Tr[Cσ1] +\n1 2 Tr[Aσ3]Tr[Cσ3].\nNow we write the analysis in detail.\nEθ (∂fDTT ∂θ\n(1) j\n)2 − 4(fDTT − 1\n2 )2 (70) =Eθ1 · · ·EθmEθ(1)m+1Tr [ σ(3,0,··· ,0) · Vm+1CXmAmCXmV †m+1 ]2 (71)\n−Eθ1 · · ·EθmEθ(1)m+1Tr [ σ(3,0,··· ,0) · Vm+1CXmBmCXmV †m+1 ]2 (72)\n=Eθ1 · · ·Eθm { 1 2 Tr [ σ(3,0,··· ,0,3,0,··· ,0) ·Am ]2 + 1 2 Tr [ σ(1,0,··· ,0,0,0,··· ,0) ·Am ]2} (73)\n−Eθ1 · · ·Eθm { 1 2 Tr [ σ(3,0,··· ,0,3,0,··· ,0) ·Bm ]2 + 1 2 Tr [ σ(1,0,··· ,0,0,0,··· ,0) ·Bm ]2} , (74)\nwhere\nAm = VmCXm−1 · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † m−1V † m,\nBm = VmCXm−1 · · ·Vj · · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † m−1V † m.\nEq. (73) and Eq. (74) are derived using the collapsed form of Lemma C.1:\nCNOT(σ1 ⊗ σ0)CNOT† = σ1 ⊗ σ0, CNOT(σ3 ⊗ σ0)CNOT† = σ3 ⊗ σ3, and θ` denotes the vector consisted with all parameters in the `-th layer. The integration could be performed for parameters {θm,θm−1, · · · ,θj+1}. It is not hard to find that after the integration of the parameters θj+1, the term Tr[σi · Aj ]2 and Tr[σi · Bj ]2 have the opposite coefficients. Besides, the first index of each Pauli tensor product σ(i1,i2,··· ,in) could only be i1 ∈ {1, 3} because of the Lemma C.3. So we could write\nEθ (∂fDTT ∂θ\n(1) j\n)2 − 4(fDTT − 1\n2 )2 (75) =Eθ1 · · ·Eθj\n ∑ i1∈{1,3} 3∑ i2=0 · · · 3∑ in=0 aiTr [σi ·Aj ]2 − aiTr [σi ·Bj ]2 (76)\nwhere\nAj = ∂Vj\n∂θ (1) j\nCXj−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CXj−1V † j\n= (G (1) j ⊗ I ⊗(n−1))A /θ\n(1) j\nj (W (1)† j ⊗ I ⊗(n−1)),\nBj = VjCXj−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CXj−1V † j\n= (W (1) j ⊗ I ⊗(n−1))A /θ\n(1) j\nj (W (1)† j ⊗ I ⊗(n−1)),\nand ai is the coefficient of the term Tr [σi ·Aj ]2. We denote G(1)j = ∂W\n(1) j\n∂θ (1) j\nand use A /θ\n(1) j\nj to denote\nthe rest part of Aj and Bj . By Lemma C.3 and Lemma C.4, we have\nE θ (1) j\n[ Tr [σi ·Aj ]2 − Tr [σi ·Bj ]2 ] = 0,\nsince for the case i1 ∈ {1, 3}, the term δi10+δi12\n2 = 0, which means Lemma C.3 and Lemma C.4 have the same formulation. Then, we derive the Eq. (65).\nLemma F.2. For the loss function fDTT defined in (60), we have:\nEθ(fDTT − 1\n2 )2 ≥\n{ Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 16n , (77)\nwhere we denote σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin ,\nand the expectation is taken for all parameters in θ with uniform distributions in [0, 2π].\nProof. First we expand the function fDTT in detail,\nfTT = 1\n2 +\n1 2 Tr [ σ(3,0,··· ,0) · Vm+1CXm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX†mV † m+1 ] , (78)\nwherem = dlog ne. Now we consider the expectation of (fDTT− 12 ) 2 under the uniform distribution for θ(1)m+1:\nE θ (1) m+1\n(fDTT − 1\n2 )2 =\n1 4 E θ (1) m+1\n{ Tr [ σ(3,0,··· ,0) · Vm+1CXm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX†mV † m+1 ]}2 (79)\n= 1\n8\n{ Tr [ σ(3,0,··· ,0) ·A′ ]}2 + 1\n8\n{ Tr [ σ(1,0,··· ,0) ·A′ ]}2 (80)\n= 1\n8\n{ Tr [ σ(3,0,··· ,0,3,0,··· ,0) ·A ]}2 + 1\n8\n{ Tr [ σ(1,0,··· ,0,0,··· ,0) ·A ]}2 (81)\n≥ 1 8\n{ Tr [ σ(1,0,··· ,0,0,··· ,0) ·A ]}2 , (82)\nwhere\nA′ = CXmVm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·VmCX†m, (83)\nA = Vm · · ·CX1V1 · ρin · V †1 CX † 1 · · ·Vm. (84)\nEq. (80) is derived using Lemma C.3, and Eq. 
(81) is derived using the collapsed form of Lemma C.1:\nCNOT(σ1 ⊗ σ0)CNOT† = σ1 ⊗ σ0, CNOT(σ3 ⊗ σ0)CNOT† = σ3 ⊗ σ3,\nWe remark that during the integration of the parameters {θ(j)` } in each layer ` ∈ {1, 2, · · · ,m+ 1}, the coefficient of the term {Tr[σ(1,0,··· ,0) · A]}2 only times a factor 1/2 for the case j = 1, and the coefficient remains for the cases j > 1 (check Lemma C.3 for detail). Since the formulation {Tr[σ(1,0,··· ,0) · A]}2 remains the same when merging the operation CX` with σ(1,0,··· ,0), for ` ∈ {1, 2, · · · ,m}, we could generate the following equation,\nEθ2 · · ·Eθm+1(fDTT − 1 2 )2 ≥ (1 2 )m+2\n{ Tr [ σ(1,0,··· ,0) · V1 · ρin · V †1 ]}2 , (85)\nwhere θ` denotes the vector consisted with all parameters in the `-th layer.\nFinally by using Lemma C.3, we could integrate the parameters {θ(j)1 }nj=1 in (85):\nEθ(fDTT − 1\n2 )2 = Eθ1 · · ·Eθm+1(fDTT −\n1 2 )2 (86)\n≥ { Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 2m+3\n(87) ≥ { Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 16n . (88)" }, { "heading": "G THE PROOF OF THEOREM 3.1: THE SC PART", "text": "Theorem G.1. Consider the n-qubit SC-QNN defined in Figure 2 and the corresponding objective function fSC defined in (4), then we have:\n1 + nc 21+nc · α(ρin) ≤ Eθ‖∇θfSC‖2 ≤ 2n− 1, (89)\nwhere nc is the number of the control operation CNOT that directly links to the first qubit channel, and the expectation is taken for all parameters in θ with uniform distributions in [0, 2π], ρin ∈ C2 n×2n denotes the input state, α(ρin) = Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2 , and σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin .\nProof. Firstly we remark that by Lemma D.1, each partial derivative is calculated as\n∂fSC\n∂θ (k) j\n= 1\n2\n( Tr[O · VSC(θ+)ρinVSC(θ+)†]− Tr[O · VSC(θ−)ρinVSC(θ−)†] ) ,\nsince the expectation of the quantum observable is bounded by [−1, 1], the square of the partial derivative could be easily bounded as: (\n∂fSC\n∂θ (k) j\n)2 ≤ 1.\nBy summing up 2n− 1 parameters, we obtain\n‖∇θfSC‖2 = ∑ j,k\n( ∂fSC\n∂θ (k) j\n)2 ≤ 2n− 1.\nOn the other side, the lower bound could be derived as follows,\nEθ‖∇θfSC‖2 ≥ Eθ\n( ∂fSC\n∂θ (1) 1\n)2 +\nn∑ j=n−nc+1 Eθ\n( ∂fSC\n∂θ (1) j\n)2 (90)\n= (1 + nc) · 4 · Eθ ( fSC − 1\n2\n)2 (91)\n≥ 1 + nc 21+nc\n· ( Tr [ σ(1,0,··· ,0) · ρin ]2 + Tr [ σ(3,0,··· ,0) · ρin ]2) , (92)\nwhere Eq. (91) is derived using Lemma G.1, and Eq. (92) is derived using Lemma G.2.\nLemma G.1. For the objective function fSC defined in Eq. (60), the following formula holds for every j such that θ(1)j tunes the single qubit gate on the first qubit channel:\nEθ\n( ∂fSC\n∂θ (1) j\n)2 = 4 · Eθ ( fSC − 1\n2\n)2 , (93)\nwhere the expectation is taken for all parameters in θ with uniform distribution in [0, 2π].\nProof. First we write the formulation of fSC in detail:\nfSC = 1\n2 +\n1 2 Tr [ σ(3,0,··· ,0) · VnCXn−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX † n−1V † n ] , (94)\nwhere we denote σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin . The operation CX` is defined as,\nCX` = { CNOT operation on qubits pair (n+ 1− `, n− `) (1 ≤ ` ≤ n− 1− nc), CNOT operation on qubits pair (n+ 1− `, 1) (n− nc ≤ ` ≤ n− 1).\nThe operation V` is defined as,\nV` = W (1) 1 ⊗W (2) 1 ⊗ · · · ⊗W (n) 1 (` = 1),\nI⊗(n−`) ⊗W (1)` ⊗ I⊗(`−1) (1 ≤ ` ≤ n− 1− nc), W\n(1) ` ⊗ I ⊗ I ⊗ · · · ⊗ I (n− nc ≤ ` ≤ n).\nNow we focus on the partial derivative of the function fSC to the parameter θ (1) j . 
We have:\n∂fSC\n∂θ (1) j\n= 1\n2 Tr\n[ σ(3,0,··· ,0) · VnCXn−1 · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † n−1V † n ] (95)\n+ 1\n2 Tr\n[ σ(3,0,··· ,0) · VnCXn−1 · · ·Vj · · ·CX1V1 · ρin · V †1 CX † 1 · · · ∂V †j\nθ (1) j\n· · ·CX†n−1V †n ] (96)\n= Tr [ σ(3,0,··· ,0) · VnCXn−1 · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † n−1V † n\n] .\n(97)\nThe Eq. (97) holds because both terms in (95) and (96) except ρin are real matrices, and ρin = ρ † in. The key idea to derive Eθ ( ∂fSC\n∂θ (1) j\n)2 = 4 ·Eθ(fSC− 12 ) 2 is that for cases B = D = σj ∈ {σ1, σ3} in\nLemma C.3 and Lemma C.4, the term δj0+δj22 = 0, which means both lemma collapse to the same formulation:\nEθTr[WAW †B]Tr[WCW †D] = EθTr[GAW †B]Tr[GCW †D]\n= 1\n2 Tr[Aσ1]Tr[Cσ1] +\n1 2 Tr[Aσ3]Tr[Cσ3].\nNow we write the analysis in detail.\nEθ ( ∂fSC ∂θ\n(1) j\n)2 − 4 ( fSC − 1\n2 )2 (98) =Eθ1 · · ·Eθn−1EθnTr [ σ(3,0,··· ,0) · VnCXn−1An−1CXn−1V †n ]2 (99)\n−Eθ1 · · ·Eθn−1EθnTr [ σ(3,0,··· ,0) · VnCXn−1Bn−1CXn−1V †n ]2 (100)\n=Eθ1 · · ·Eθn−1 { 1 2 Tr [ σ(3,3,0,··· ,0) ·An−1 ]2 + 1 2 Tr [ σ(1,0,0,··· ,0) ·An−1 ]2} (101)\n−Eθ1 · · ·Eθn−1 { 1 2 Tr [ σ(3,3,0,··· ,0) ·Bn−1 ]2 + 1 2 Tr [ σ(1,0,0,··· ,0) ·Bn−1 ]2} , (102)\nwhere\nAn−1 = Vn−1CXn−2 · · · ∂Vj\nθ (1) j\n· · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † n−2V † n−1,\nBn−1 = Vn−1CXn−2 · · ·Vj · · ·CX1V1 · ρin · V †1 CX † 1 · · ·V † j · · ·CX † n−2V † n−1.\nEq. (101) and Eq. (102) are derived using Lemma C.3 and the collapsed form of Lemma C.1:\nCNOT(σ1 ⊗ σ0)CNOT† = σ1 ⊗ σ0, CNOT(σ3 ⊗ σ0)CNOT† = σ3 ⊗ σ3,\nand θ` denotes the vector consisted with all parameters in the `-th layer. The integration (99)- (102) could be performed similarly for parameters {θn−1,θn−2, · · · ,θj+1}. It is not hard to find that after the integration of the parameters θj+1, the term Tr[σi · Aj ]2 and Tr[σi · Bj ]2 have the opposite coefficients. Besides, the first index of each Pauli tensor product σ(i1,i2,··· ,in) could only be i1 ∈ {1, 3} because of the Lemma C.3. So we could write\nEθ ( ∂fSC ∂θ\n(1) j\n)2 − 4 ( fSC − 1\n2 )2 (103) =Eθ1 · · ·Eθj\n ∑ i1∈{1,3} 3∑ i2=0 · · · 3∑ in=0 aiTr [σi ·Aj ]2 − aiTr [σi ·Bj ]2 , (104)\nwhere\nAj = ∂Vj\n∂θ (1) j\nCXj−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CXj−1V † j\n= (G (1) j ⊗ I ⊗(n−1))A /θ\n(1) j\nj (W (1)† j ⊗ I ⊗(n−1)),\nBj = VjCXj−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CXj−1V † j\n= (W (1) j ⊗ I ⊗(n−1))A /θ\n(1) j\nj (W (1)† j ⊗ I ⊗(n−1)),\nand ±ai are coefficients of the term Tr [σi ·Aj ]2, Tr [σi ·Bj ]2, respectively. We denote G(1)j = ∂W\n(1) j\n∂θ (1) j\nand useA /θ\n(1) j\nj to denote the rest part ofAj andBj . By Lemma C.3 and Lemma C.4, we have\nE θ (1) j\n[ Tr [σi ·Aj ]2 − Tr [σi ·Bj ]2 ] = 0,\nsince for the case i1 ∈ {1, 3}, the term δi10+δi12\n2 = 0 in Lemma C.3 and Lemma C.4, which means both lemmas have the same formulation. Then, we derive the Eq. (93).\nLemma G.2. For the loss function fSC defined in (4), we have:\nEθ ( fSC − 1\n2\n)2 ≥ { Tr [ σ(1,0,··· ,0) · ρin ]}2 + { Tr [ σ(3,0,··· ,0) · ρin ]}2 23+nc , (105)\nwhere we denote σ(i1,i2,··· ,in) ≡ σi1 ⊗ σi2 ⊗ · · · ⊗ σin ,\nand the expectation is taken for all parameters in θ with uniform distributions in [0, 2π].\nProof. We expand the function fSC in detail,\nfSC = 1\n2 +\n1 2 Tr [ σ(3,0,··· ,0) · VnCXn−1 · · ·CX1V1 · ρin · V †1 CX † 1 · · ·CX † n−1V † n ] = 1\n2 +\n1 2 Tr [ σ3,0,··· ,0 · ρ(n) ] ,\nwhere ρ(j) = VjCXj−1 · · ·CX1V1ρin · V †1 CX † 1 · · ·CX † j−1V † j , ∀j ∈ [n]. 
(106)\nNow we focus on the expectation of (fSC − 12 ) 2:\nEθ ( fSC − 1\n2\n)2 = Eθ1Eθ2 · · ·Eθn 1 4 Tr [ σ3,0,··· ,0 · ρ(n) ]2 (107)\n≥ Eθ1Eθ2 · · ·Eθn−1 1 8 Tr [ σ1,0,··· ,0 · ρ(n−1) ]2 (108)\n≥ Eθ1Eθ2 · · ·Eθn−nc 1 22+nc Tr [ σ1,0,··· ,0 · ρ(n−nc) ]2 (109)\n= Eθ1Eθ2 · · ·Eθn−nc−1 1 22+nc Tr [ σ1,0,··· ,0 · ρ(n−nc−1) ]2 (110)\n= Eθ1 1 22+nc Tr [ σ1,0,··· ,0 · ρ(1) ]2 (111) = 1\n23+nc\n( Tr [σ1,0,··· ,0 · ρin]2 + Tr [σ3,0,··· ,0 · ρin]2 ) , (112)\nwhere Eq. (112) is derived from Lemma C.3. We derive Eqs. (108-109) by noticing that following equations hold for n− nc + 1 ≤ j ≤ n and i ∈ {1, 3},\nEθj Tr [ σi,0,··· ,0 · ρ(j) ]2 = Eθj Tr [ σi,0,··· ,0 · VjCXj−1ρ(j−1)CX†j−1V † j ]2 (113)\n= 1 2 Tr [ σ1,0,··· ,0 · CXj−1ρ(j−1)CX†j−1 ]2 (114)\n+ 1 2 Tr [ σ3,0,··· ,0 · CXj−1ρ(j−1)CX†j−1 ]2 (115)\n≥ 1 2\nTr [ σ1,0,··· ,0 · CXj−1ρ(j−1)CX†j−1 ]2 (116)\n= 1 2 Tr [ σ1,0,··· ,0 · ρ(j−1) ]2 , (117)\nwhere Eq. (113) is derived based on the definition of ρ(j) in Eq. (106), Eqs. (114-115) are derived based on Lemma C.3, and Eq. (117) is derived based on Lemma C.1.\nWe derive Eq. (110-111) by noticing that following equations hold for 2 ≤ j ≤ n− nc, Eθj Tr [ σ1,0,··· ,0 · ρ(j) ]2 = Eθj Tr [ σ1,0,··· ,0 · VjCXj−1ρ(j−1)CX†j−1V † j ]2 (118)\n= Tr [ σ1,0,··· ,0 · CXj−1ρ(j−1)CX†j−1 ]2 (119)\n= Tr [ σ1,0,··· ,0 · ρ(j−1) ]2 , (120)\nwhere Eq. (118) is based on the definition of ρ(j) in Eq. (106), Eq. (119) is based on Lemma C.3, and Eq. (120) is based on Lemma C.1." } ]
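The uniform-angle averaging identities (Eqs. (12)–(16)) and the single-qubit conjugation averages of Lemma C.3, which are reused throughout Appendices C–G, can be spot-checked numerically. Below is a minimal sketch in Python/NumPy (our own illustrative check, not part of the paper): it verifies the trigonometric averages and Lemma C.3 for the particular case B = D = σ3, k = 2 with random complex matrices A, C. A uniform 1000-point grid gives the exact average here because the integrands are low-degree trigonometric polynomials.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # Pauli sigma_2
s3 = np.array([[1, 0], [0, -1]], dtype=complex)     # Pauli sigma_3

thetas = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# Averaging identities (Eqs. 12-16): E[cos^4] = E[sin^4] = 3/8, E[cos^2 sin^2] = 1/8, odd moments vanish.
assert np.isclose(np.mean(np.cos(thetas) ** 4), 3 / 8)
assert np.isclose(np.mean(np.sin(thetas) ** 4), 3 / 8)
assert np.isclose(np.mean(np.cos(thetas) ** 2 * np.sin(thetas) ** 2), 1 / 8)
assert np.isclose(np.mean(np.cos(thetas) ** 3 * np.sin(thetas)), 0)
assert np.isclose(np.mean(np.cos(thetas) * np.sin(thetas) ** 3), 0)

# Lemma C.3 with B = D = sigma_3, k = 2 (so delta_{j0} = delta_{jk} = 0):
# E_theta Tr[W A W^dag s3] Tr[W C W^dag s3] = (1/2) Tr[A s3] Tr[C s3] - (1/2) Tr[A s3 s2] Tr[C s3 s2].
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

vals = []
for th in thetas:
    W = np.cos(th) * I2 - 1j * np.sin(th) * s2      # W = exp(-i theta sigma_2)
    vals.append(np.trace(W @ A @ W.conj().T @ s3) * np.trace(W @ C @ W.conj().T @ s3))
lhs = np.mean(vals)
rhs = 0.5 * np.trace(A @ s3) * np.trace(C @ s3) - 0.5 * np.trace(A @ s3 @ s2) * np.trace(C @ s3 @ s2)
assert np.allclose(lhs, rhs)
```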
2020
null
SP:8a8aa5f245c2fb82beddb19c82dddb8d67f66f8a
[ "In this paper, the authors introduce a class of games called Hidden Convex-Concave where a (stricly) convex-concave potential is composed with smooth maps. On this class of problems, they show that the continuous gradient dynamics converge to (a neighbordhood of) the minimax solutions of the problem. This is an exploratory theoretical paper which aims at better capturing the behaviors that can be observed e.g. in the training of GANs.", "This paper extends \"hidden bilinear\" games [1] to \"hidden convex-concave (HCC)\" games, and studies a special case of HCC games (denoted as HCC* from below) where output of each dimension of a nonlinear function depends on a seperate group of parameters . For HCC* games, the authors utilize Lyapunov-type arguments and show that GDA dynamics stabilizes around the output-space Nash equilibrium (Theorem 3), whose parameter counterparts are named as \"von Neumann solutions\". With an additional assumption that the game has one-sided strictness property, the authors further show that GDA converges to a \"von Neumann solution\" (Theorem 6), by combining Lasalle's principle and the fact that \"safe initializations\" reside in ROA of such solutions. Additionally, the authors show that adding a regularization term can accelerate the convergence of GDA towards the perturbed equilibrium in the output space (Theorem 8). Finally, the authors exemplify GANs and evolutionary games, and argue that such games can be nicely formulated as HCC games.", "This paper studies a special problem structure in min-max optimization. Specifically, the paper considers the setting when the objective function can be reparametrized into a (strict) convex-concave game, or referred to as a hidden convex-concave game in the paper. More formally, the considered function is of the form L(F(\\theta), G(\\phi)), where L is convex-concave. F and G are of separable structure, namely both the max and min player are from a cartesian product of scalar functions from disjoint sets of parameters. With the separable structure, the authors can thoroughly investigate the conditions for stable GDA dynamics and the good initializations for GDA to find the equilibrium of L. Finally, the authors give some discussion on connecting the structures to GANs and evolutionary game theory/biology. ", "In this paper, the authors study a two-player zero-sum convex-concave game whose inputs are parameterized by nonlinear functions. This results in the overall game being nonconvex-nonconcave in the parameter space. This setting is similar to that observed in GANs where the game is convex-concave in function space of discriminator and generator but is nonconvex-nonconcave in the space of parameters. The authors call such games as Hidden Convex-Concave (HCC) games." ]
Many recent AI architectures are inspired by zero-sum games; however, the behavior of their dynamics is still not well understood. Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games that we call hidden zero-sum games. In this class, players control the inputs of smooth but possibly non-linear functions whose outputs are applied as inputs to a convex-concave game. Unlike general zero-sum games, these games have a well-defined notion of solution: outcomes that implement the von Neumann equilibrium of the “hidden” convex-concave game. We provide conditions under which vanilla GDA provably converges not merely to local Nash equilibria, but to the actual von Neumann solution. If the hidden game lacks strict convexity properties, GDA may fail to converge to any equilibrium; however, by applying standard regularization techniques we can prove convergence to a von Neumann solution of a slightly perturbed zero-sum game. Our convergence results are non-local despite working in the setting of non-convex non-concave games. Critically, under proper assumptions we combine the Center-Stable Manifold Theorem with a novel type of initialization-dependent Lyapunov function to prove that almost all initial conditions converge to the solution. Finally, we discuss diverse applications of our framework ranging from generative adversarial networks to evolutionary biology.
[ { "affiliations": [], "name": "Lampros Flokas" } ]
[ { "authors": [ "Jacob Abernethy", "Kevin A Lai", "Kfir Y Levy", "Jun-Kun Wang" ], "title": "Faster rates for convex-concave games", "venue": "In COLT,", "year": 2018 }, { "authors": [ "Jacob Abernethy", "Kevin A Lai", "Andre Wibisono" ], "title": "Last-iterate convergence rates for min-max optimization", "venue": null, "year": 1906 }, { "authors": [ "Leonard Adolphs", "Hadi Daneshmand", "Aurélien Lucchi", "Thomas Hofmann" ], "title": "Local saddle point optimization: A curvature exploitation approach", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "James Bailey", "Georgios Piliouras" ], "title": "Fast and furious learning in zero-sum games: vanishing regret with non-vanishing step sizes", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "James P Bailey", "Gauthier Gidel", "Georgios Piliouras" ], "title": "Finite regret and cycles with fixed step-size via alternating gradient descent-ascent", "venue": "In COLT,", "year": 2020 }, { "authors": [ "James P. Bailey", "Georgios Piliouras" ], "title": "Multiplicative weights update in zero-sum games", "venue": "In EC,", "year": 2018 }, { "authors": [ "David Balduzzi", "Sébastien Racanière", "James Martens", "Jakob N. Foerster", "Karl Tuyls", "Thore Graepel" ], "title": "The mechanics of n-player differentiable games", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Michel Benaïm" ], "title": "Dynamics of stochastic approximation algorithms", "venue": "In Seminaire de probabilites XXXIII,", "year": 1999 }, { "authors": [ "Sanjay P. Bhat", "Dennis S. Bernstein" ], "title": "Nontangency-based lyapunov tests for convergence and stability in systems having a continuum of equilibria", "venue": "SIAM J. Control Optim,", "year": 2003 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Yang Cai", "Costantinos Daskalakis" ], "title": "On minmax theorems for multiplayer games", "venue": "In SODA, SODA,", "year": 2011 }, { "authors": [ "Yun Kuen Cheung", "Georgios Piliouras" ], "title": "Vortices instead of equilibria in minmax optimization: Chaos and butterfly effects of online learning in zero-sum games", "venue": null, "year": 2019 }, { "authors": [ "Yun Kuen Cheung", "Georgios Piliouras" ], "title": "Chaos, extremism and optimism: Volume analysis of learning in games", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Constantinos Daskalakis", "Andrew Ilyas", "Vasilis Syrgkanis", "Haoyang Zeng" ], "title": "Training GANs with optimism", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Constantinos Daskalakis", "Ioannis Panageas" ], "title": "The limit points of (optimistic) gradient descent in min-max optimization", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Constantinos Daskalakis", "Ioannis Panageas" ], "title": "Last-iterate convergence: Zero-sum games and constrained min-max optimization", "venue": "In ITCS,", "year": 2019 }, { "authors": [ "Constantinos Daskalakis", "Stratis Skoulakis", "Manolis Zampetakis" ], "title": "The complexity of constrained min-max optimization", "venue": null, "year": 2009 }, { "authors": [ "Lawrence C Evans" ], "title": "Partial differential equations and monge-kantorovich mass transfer", "venue": "Current developments in mathematics,", "year": 1997 }, { "authors": [ "Farzan Farnia", "Asuman E. 
Ozdaglar" ], "title": "Do gans always have nash equilibria", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Tanner Fiez", "Benjamin Chasnov", "Lillian Ratliff" ], "title": "Implicit learning dynamics in stackelberg games: Equilibria characterization, convergence analysis, and empirical study", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Gauthier Gidel", "Hugo Berard", "Gaëtan Vignoud", "Pascal Vincent", "Simon Lacoste-Julien" ], "title": "A variational inequality perspective on generative adversarial networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Ian J. Goodfellow" ], "title": "NIPS 2016 tutorial", "venue": "Generative adversarial networks", "year": 2017 }, { "authors": [ "J. Hofbauer", "K. Sigmund" ], "title": "Evolutionary Games and Population Dynamics", "venue": null, "year": 1998 }, { "authors": [ "Yu-Guan Hsieh", "Franck Iutzeler", "Jérôme Malick", "Panayotis Mertikopoulos" ], "title": "On the convergence of single-call stochastic extra-gradient methods", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Yu-Guan Hsieh", "Franck Iutzeler", "Jérôme Malick", "Panayotis Mertikopoulos" ], "title": "Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Chi Jin", "Praneeth Netrapalli", "Michael I. Jordan" ], "title": "Minmax optimization: Stable limit points of gradient descent ascent are locally optimal", "venue": null, "year": 1902 }, { "authors": [ "Hassan K Khalil" ], "title": "Nonlinear systems; 3rd ed", "venue": null, "year": 2002 }, { "authors": [ "Karol Kurach", "Mario Lucic", "Xiaohua Zhai", "Marcin Michalski", "Sylvain Gelly" ], "title": "A large-scale study on regularization and normalization in gans", "venue": null, "year": 2019 }, { "authors": [ "Jason D. Lee", "Ioannis Panageas", "Georgios Piliouras", "Max Simchowitz", "Michael I. Jordan", "Benjamin Recht" ], "title": "First-order methods almost always avoid saddle points", "venue": null, "year": 2017 }, { "authors": [ "Jason D. Lee", "Max Simchowitz", "Michael I. Jordan", "Benjamin Recht" ], "title": "Gradient descent only converges to minimizers", "venue": "In COLT,", "year": 2016 }, { "authors": [ "Qi Lei", "Jason Lee", "Alex Dimakis", "Constantinos Daskalakis" ], "title": "SGD learns one-layer networks in wgans", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Adi Livnat", "Christos Papadimitriou", "Jonathan Dushoff", "Marcus W. Feldman" ], "title": "A mixability theory for the role of sex in evolution", "venue": null, "year": 2008 }, { "authors": [ "Adi Livnat", "Christos Papadimitriou", "Aviad Rubinstein", "Andrew Wan", "Gregory Valiant" ], "title": "Satisfiability and evolution", "venue": "In FOCS,", "year": 2014 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Eric Mazumdar", "Lillian J Ratliff" ], "title": "Local nash equilibria are isolated, strict local nash equilibria in ‘almost all’ zero-sum continuous games", "venue": "In Proc. IEEE Conf. Decis. 
Control,", "year": 2019 }, { "authors": [ "Ruta Mehta", "Ioannis Panageas", "Georgios Piliouras" ], "title": "Natural selection as an inhibitor of genetic diversity: Multiplicative weights updates algorithm and a conjecture of haploid genetics", "venue": "ITCS,", "year": 2015 }, { "authors": [ "Ruta Mehta", "Ioannis Panageas", "Georgios Piliouras", "Sadra Yazdanbod" ], "title": "The Computational Complexity of Genetic Diversity", "venue": "European Symposium on Algorithms (ESA),", "year": 2016 }, { "authors": [ "Reshef Meir", "David Parke" ], "title": "A note on sex, evolution, and the multiplicative updates algorithm", "venue": "In AAMAS,", "year": 2015 }, { "authors": [ "Panayotis Mertikopoulos", "Bruno Lecouat", "Houssam Zenati", "Chuan-Sheng Foo", "Vijay Chandrasekhar", "Georgios Piliouras" ], "title": "Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Panayotis Mertikopoulos", "Christos H. Papadimitriou", "Georgios Piliouras" ], "title": "Cycles in adversarial regularized learning", "venue": "In SODA,", "year": 2018 }, { "authors": [ "Lars M. Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Aryan Mokhtari", "Asuman Ozdaglar", "Sarath Pattathil" ], "title": "A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "Yurii E. Nesterov" ], "title": "Introductory Lectures on Convex Optimization - A Basic Course, volume 87 of Applied Optimization", "venue": null, "year": 2004 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Brendan O’Donoghue", "Chris J. Maddison" ], "title": "Hamiltonian descent for composite objectives", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Ioannis Panageas", "Georgios Piliouras" ], "title": "Gradient descent only converges to minimizers: Non-isolated critical points and invariant regions", "venue": "In ITCS,", "year": 2017 }, { "authors": [ "Ioannis Panageas", "Georgios Piliouras" ], "title": "Gradient descent only converges to minimizers: Non-isolated critical points and invariant regions", "venue": "In ITCS,", "year": 2017 }, { "authors": [ "Ioannis Panageas", "Georgios Piliouras", "Xiao Wang" ], "title": "First-order methods almost always avoid saddle points: The case of vanishing step-sizes", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Lawrence Perko" ], "title": "Differential Equations and Dynamical Systems. Springer, 3nd", "venue": null, "year": 1991 }, { "authors": [ "Julien Pérolat", "Rémi Munos", "Jean-Baptiste Lespiau", "Shayegan Omidshafiei", "Mark Rowland", "Pedro A. Ortega", "Neil Burch", "Thomas W. Anthony", "David Balduzzi", "Bart De Vylder", "Georgios Piliouras", "Marc Lanctot", "Karl Tuyls" ], "title": "From poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization", "venue": null, "year": 2002 }, { "authors": [ "Georgios Piliouras", "Leonard J Schulman" ], "title": "Learning dynamics and the co-evolution of competing sexual species", "venue": "In ITCS,", "year": 2018 }, { "authors": [ "Georgios Piliouras", "Jeff S. 
Shamma" ], "title": "Optimization despite chaos: Convex relaxations to complex limit sets via poincaré recurrence", "venue": "In SODA,", "year": 2014 }, { "authors": [ "Kevin Roth", "Aurélien Lucchi", "Sebastian Nowozin", "Thomas Hofmann" ], "title": "Stabilizing training of generative adversarial networks through regularization", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Maziar Sanjabi", "Jimmy Ba", "Meisam Razaviyayn", "Jason D. Lee" ], "title": "On the convergence and robustness of training gans with regularized optimal transport", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Yuzuru Sato", "Eizo Akiyama", "J. Doyne Farmer" ], "title": "Chaos in learning a simple two-person game", "venue": null, "year": 2002 }, { "authors": [ "Leonard Schulman", "Umesh V Vazirani" ], "title": "The duality gap for two-team zero-sum games", "venue": "In ITCS,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Lampros Flokas", "Georgios Piliouras" ], "title": "Poincaré recurrence, cycles and spurious equilibria in gradient-descent-ascent for non-convex non-concave zero-sum games", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Jun-Kun Wang", "Jacob D Abernethy" ], "title": "Acceleration through optimistic no-regret dynamics", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Jörgen Weibull" ], "title": "Evolutionary Game Theory", "venue": null, "year": 1995 }, { "authors": [ "Junchi Yang", "Negar Kiyavash", "Niao He" ], "title": "Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Guojun Zhang", "Pascal Poupart", "Yaoliang Yu" ], "title": "Optimality and Stability in Non-Convex-Non- Concave Min-Max Optimization", "venue": null, "year": 2002 } ]
[ { "heading": "1 Introduction", "text": "Traditionally, our understanding of convex-concave games revolves around von Neumann’s celebrated minimax theorem, which implies the existence of saddle point solutions with a uniquely defined value. These solutions are called von Nemann solutions and guarantee to each agent their corresponding value regardless of opponent play. Although many learning algorithms are known to be able to compute such saddle points [13], recently there has there has been a fervor of activity in proving stronger results such as faster regret minimization rates or analysis of the day-to-day behavior [46, 17, 7, 1, 66, 19, 2, 45, 5, 25, 70, 29, 6, 48, 30, 56].\nThis interest has been largely triggered by the impressive successes of AI architectures inspired by min-max games such as Generative Adversarial Networks (GANS) [26], adversarial training [40] and reinforcement learning self-play in games [63]. Critically, however, all these applications are based upon non-convex non-concave games, our understanding of which is still nascent. Nevertheless,\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nsome important early work in the area has focused on identifying new solution concepts that are widely applicable in general min-max games, such as (local/differential) Nash equilibrium [3, 41], local minmax [18], local minimax [31], (local/differential) Stackleberg equilibrium [24], local robust point [69]. The plethora of solutions concepts is perhaps suggestive that “solving\" general min-max games unequivocally may be too ambitious a task. Attraction to spurious fixed points [18], cycles [65], robustly chaotic behavior [15, 16] and computational hardness issues [20] all suggest that general min-max games might inherently involve messy, unpredictable and complex behavior.\nAre there rich classes of non-convex non-concave games with an effectively unique game theoretic solution that is selected by standard optimization dynamics (e.g. gradient descent)?\nOur class of games. We will define a general class of min-max optimization problems, where each agent selects its own vectors of parameters which are then processed separately by smooth functions. Each agent receives their respective payoff after entering the outputs of the processed decision vectors as inputs to a standard convex-concave game. Formally, there exist functions F : RN → X ⊂ Rn and G : RM → Y ⊂ Rm and a continuous convex-concave function L : X × Y → R, such that the min-max game is\nmin θ∈RN max φ∈RM L(F(θ),G(φ)). (Hidden Convex-Concave (HCC))\nWe call this class of min-max problems Hidden Convex-Concave Games. It generalizes the recently defined hidden bilinear games of [65].\nOur solution concept. Out of all the local Nash equilibria of HCC games, there exists a special subclass, the vectors (θ∗,φ∗) that implement the von Neumann solution of the convex-concave game. This solution has a strong and intuitive game theoretic justification. Indeed, it is stable even if the agents could perform arbitrary deviations directly on the output spaces X,Y . These parameter combinations (θ∗,φ∗) “solve\" the “hidden” convex-concave L and thus we call them von Neumann solutions. Naturally, HCCs will typically have numerous local saddle/Nash equilibria/fixed points that do not satisfy this property. Instead, they correspond to stationary points of the F,G where their output is stuck, e.g., due to an unfortunate initialization. 
At these points the agents may be receiving payoffs which can be arbitrarily smaller/larger than the game theoretic value of game L. Fortunately, we show that Gradient Descent Ascent (GDA) strongly favors von Neumann solutions over generic fixed points.\n\nOur results. In this work, we study the behavior of continuous GDA dynamics for the class of HCC games where each coordinate of F,G is controlled by disjoint sets of variables. In a nutshell, we show that GDA trajectories stabilize around or converge to the corresponding von Neumann solutions of the hidden game. Despite restricting our attention to a subset of HCC games, our analysis has to overcome unique hurdles not shared by standard convex concave games.\n\nChallenges of HCC games. In convex-concave games, deriving the stability of the von Neumann solutions relies on the Euclidean distance from the equilibrium being a Lyapunov function. In contrast, in HCC games where optimization happens in the parameter space of θ,φ, the non-linear nature of F,G distorts the convex-concave landscape in the output space. Thus, the Euclidean distance will not in general be a Lyapunov function. Moreover, the existence of any Lyapunov function for the trajectories in the output space of F,G does not translate to a well-defined function in the parameter space (unless F,G are trivial, invertible maps). Worse yet, even if L has a unique solution in the output space, this solution could be implemented by multiple equilibria in the parameter space, and thus each of them cannot be individually globally attracting. Clearly, any transfer of stability or convergence properties from the output to the parameter space needs to be initialization dependent. It is worth mentioning that similar challenges, like transferring results from the output to the input space, were also faced in the simpler class of hidden bilinear games. However, to sidestep this issue, [65] assumes the restrictive requirement that F,G be invertible operators. Our results go beyond this simplified case, requiring new proof techniques. Specifically, we show how to combine the powerful technology of the Center-Stable Manifold Theorem, typically used to argue convergence to equilibria in non-convex optimization settings [34, 52, 54, 53, 35], along with a novel Lyapunov function argument to prove that almost all initial conditions converge to our game theoretic solution.\n\nLyapunov Stability. Our first step is to construct an initialization-dependent Lyapunov function that accounts for the curvature induced by the operators F and G (Lemma 2). Leveraging a potentially infinite number of initialization-dependent Lyapunov functions, in Theorem 5 we prove that under mild assumptions the outputs of F,G stabilize around the von Neumann solution of L.\n\nConvergence. Mirroring convex concave games, we require strict convexity or concavity of L to provide convergence guarantees to von Neumann solutions (Theorem 6). Barring initializations where von Neumann solutions are not reachable due to the limitations imposed by F and G, the set of von Neumann solutions is globally asymptotically stable (Corollary 1). Even in non-strict HCC games, we can add regularization terms to make L strictly convex concave. Small amounts of regularization allow for convergence without significantly perturbing the von Neumann solution (Theorem 7), while increasing regularization enables exponentially faster convergence rates (Theorem 8). 
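As a concrete illustration of these convergence statements, the continuous GDA dynamics (Equation (1) below) can be simulated by a forward-Euler discretization on a toy one-dimensional HCC instance. The choices below — tanh operators, a strictly convex-concave quadratic-plus-bilinear L, the step size and horizon — are our own illustrative assumptions, not part of the paper's formal setup.

```python
import numpy as np

# Toy HCC instance: f(theta) = tanh(theta), g(phi) = tanh(phi),
# L(x, y) = x*y + 0.5*x**2 - 0.5*y**2 (strictly convex in x, strictly concave in y).
# Its unique von Neumann solution in the output space is (x*, y*) = (0, 0).
f, df = np.tanh, lambda t: 1.0 - np.tanh(t) ** 2
g, dg = np.tanh, lambda t: 1.0 - np.tanh(t) ** 2
dL_dx = lambda x, y: y + x
dL_dy = lambda x, y: x - y

theta, phi = 1.5, -0.8        # safe initialization: tanh has no stationary points
eta, steps = 1e-2, 200_000    # forward-Euler step size and horizon

for _ in range(steps):
    x, y = f(theta), g(phi)
    grad_theta = df(theta) * dL_dx(x, y)   # chain rule: d/dtheta of L(f(theta), g(phi))
    grad_phi = dg(phi) * dL_dy(x, y)
    theta -= eta * grad_theta              # descent step for the min player
    phi += eta * grad_phi                  # ascent step for the max player

print(f(theta), g(phi))  # both outputs should approach the hidden solution (0, 0)
```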
Similar to the aforementioned theoretical work, our model of HCC games provides a formal and theoretically tractable testbed for evaluating the performance of different training methods in GAN-inspired architectures. As a concrete example, [36] recently proved the success of WGAN training for learning the parameters of non-linearly transformed Gaussian distributions, where for simplicity they replaced the typical Lipschitz constraint of the discriminator function with a quadratic regularizer. Interestingly, we can elucidate why regularized learning is actually necessary by establishing a formal connection to HCC games. On top of other such ML applications, our game theoretic framework can furthermore capture and generalize evolutionary game theoretic models. [57] analyze a model of evolutionary competition between two species (host-parasite). The outcome of this competition depends on their respective phenotypes (informally their properties, e.g., agility, camouflage, etc.). These phenotypes are encoded via functions that map input vectors (here genotype/DNA sequences) to phenotypes. While [57] proved that learning in these games does not converge to equilibria and typically cycles for almost all initial conditions, we can explicitly construct initial conditions that do not satisfy our definition of safety and end up converging to artificial fixed points. Safety conditions aside, we show that a slight variation of the evolutionary/learning algorithm suffices to resolve the cycling issues and for the dynamics to equilibrate to the von Neumann solution. Hence, we provide the first instance of team zero-sum games [62], a notoriously hard generalization of zero-sum games with a large duality gap, that is solvable by decentralized dynamics.\n\nOrganization. In Section 2 we provide some preliminary notation, the definition of our model and some useful technical lemmas. Section 3 is devoted to the presentation of our main results. Section 4 discusses applications of our framework to specific GAN formulations. Section 5 concludes our work with a discussion of future directions and challenges. We defer the full proofs of our results as well as further discussion on applications to the Appendix." }, { "heading": "2 Preliminaries", "text": "" }, { "heading": "2.1 Notation", "text": "Vectors are denoted in boldface x,y and, unless otherwise indicated, are considered column vectors. We use ‖·‖ to denote the ℓ2-norm. For a function f : Rd → R we use ∇f to denote its gradient. For functions of two vector arguments, f(x,y) : Rd1 × Rd2 → R, we use ∇xf,∇yf to denote its partial gradients. For the time derivative we will use the dot accent abbreviation, i.e., ẋ = d/dt [x(t)]. A function f belongs to Cr if it is r times continuously differentiable. Additionally, f ◦ g = f(g(·)) denotes the composition of f and g. Finally, the term “sigmoid” function refers to σ : R → R such that σ(x) = (1 + e^{−x})^{−1}." }, { "heading": "2.2 Hidden Convex Concave Games", "text": "[Figure: Hidden Separable Zero-Sum Game Model & Optimization Dynamics — each player's parameter blocks θi and φj are mapped through the operators fi(θi) and gj(φj) into F(θ) and G(φ), which feed the convex-concave utility L(F(θ),G(φ)); the GDA dynamics are θ̇i = −∇θiL(F(θ),G(φ)) and φ̇j = ∇φjL(F(θ),G(φ)).]\n\nWe will begin our discussion by defining the notion of convex concave functions as well as strictly convex concave functions. 
Note that our definition of strictly convex concave functions is a superset of strictly convex strictly concave functions that are usually studied in the literature. Definition 1. L : Rn × Rm → R is convex concave if for every y ∈ Rn L(·,y) is convex and for every x ∈ Rm L(x, ·) is concave. Function L will be called strictly convex concave if it is convex concave and for every x×y ∈ Rn×Rm either L(·,y) is strictly convex or L(x, ·) is strictly concave.\nAt the center of our definition of HCC games is a convex concave utility function L. Additionally, each player of the game is equipped with a set of operator functions. The minimization player is equipped with n functions fi : Rni → R while the maximization player is equipped with m functions gj : Rmj → R. We will assume in the rest of our discussion that fi, gj , L are all C2 functions. The inputs θi ∈ Rni and φj ∈ Rmj are grouped in two vectors\nθ = [ θ1 · · · θn ]> F(θ) = [ f1(θ1) · · · fn(θn) ]> φ = [ φ1 · · · φm ]> G(φ) = [ g1(φ1) · · · gm(φm)\n]> We are ready to define the hidden convex concave game\n(θ∗,φ∗) = arg min θ∈RN arg max φ∈RM L(F(θ),G(φ)).\nwhere N = ∑n i=1 ni and M = ∑m j=1mj . Given a convex concave function L, all stationary points of L are (global) Nash equilibria of the min-max game. We will call the set of all equilibria of L, von Neumann solutions of L and denote them by Solution(L). Unfortunately, Solution(L) can be empty for games defined over the entire Rn × Rm. For games defined over convex compact sets, the existence of at least one solution is guaranteed by von Neumann’s minimax theorem. Our definition of HCC games can capture games on restricted domains by choosing appropriately bounded functions fi and gj . In the following sections, we will just assume that Solution(L) is not empty. We note that our results hold for both bounded and unbounded fi and gj . We are now ready to write down the equations of the GDA dynamics for a HCC game:\nθ̇i = −∇θiL(F(θ),G(φ)) =−∇θifi(θi) ∂L\n∂fi (F(θ),G(φ))\nφ̇j = ∇φjL(F(θ),G(φ)) =∇φjgj(φj) ∂L\n∂gj (F(θ),G(φ))\n(1)" }, { "heading": "2.3 Reparametrization", "text": "The following lemma is useful in studying the dynamics of hidden games. Lemma 1. Let k : Rd → R be a C2 function. Let h : R→ R be a C1 function and x(t) denote the unique solution of the dynamical system Σ1. Then the unique solution for dynamical system Σ2 is z(t) = x(\n∫ t 0 h(s)ds){\nẋ = ∇k(x) x(0) = xinit\n} : Σ1 { ż = h(t)∇k(z)\nz(0) = xinit\n} : Σ2 (2)\nBy choosing h(t) = −∂L(F(t),G(t))/∂fi and h(t) = ∂L(F(t),G(t))/∂gj respectively, we can connect the dynamics of each θi and φj under Equation (1) to gradient ascent on fi and gj . Applying Lemma 1, we get that trajectories of θi and φj under Equation (1) are restricted to be subsets of the corresponding gradient ascent trajectories with the same initializations. For example, in Figure 1 θi(t) can not escape the purple section if it is initialized at (a) neither the orange section if it is initialiazed at (f). This limits the attainable values that fi(t) and gj(t) can take for a specific initialization. Let us thus define the following: Definition 2. For each initialization x(0) of Σ1, Imk(x(0)) is the image of k ◦ x : R→ R.\nApplying Definition 2 in the above example, Imfi(θi(0)) = (fi(−2), fi(−1)) if θi is initialized at (c). Additionally, observe that in each colored section fi(θi(t)) uniquely identifies θi(t). Generally, even in the case that θi are vectors, Lemma 1 implies that for a given θi(0), fi(θi(t)) uniquely identifies θi(t). 
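A small worked example of this reparametrization, using the sigmoid operator from Section 2.1 (this specific instance is ours, for illustration): gradient ascent on $f(\theta)=\sigma(\theta)$ has no stationary points, so the trajectory sweeps all attainable outputs.

\[
\dot\theta=\sigma'(\theta)=\sigma(\theta)\bigl(1-\sigma(\theta)\bigr)>0
\;\Longrightarrow\;
\theta(t)\ \text{is strictly increasing and ranges over all of }\mathbb{R}\ \text{as }t\in\mathbb{R},
\qquad
\mathrm{Im}_f(\theta(0))=(0,1)\ \text{for every }\theta(0).
\]

In particular $f(\theta(t))$ is strictly monotone in $t$, so it indeed uniquely identifies $\theta(t)$, and every output value in $(0,1)$ remains attainable regardless of the initialization; this is the situation exploited later for sigmoid operators (Theorem 4).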
As a result we get that a new dynamical system involving only fi and gj Theorem 1. For each initialization (θ(0),φ(0)) of Equation (1), there are C1 functions Xθi(0) , Xφj(0) such that θi(t) = Xθi(0)(fi(t)) and φj(t) = Xφj(0)(gj(t)). If (θ(t),φ(t)) satisfy Equation (1) then fi(t) = fi(θi(t)) and gj(t) = gj(φj(t)) satisfy\nḟi = −‖∇θifi(Xθi(0)(fi))‖ 2 ∂L\n∂fi (F,G)\nġj = ‖∇φjgj(Xφj(0)(gj))‖ 2 ∂L\n∂gj (F,G)\n(3)\nBy determining the ranges of fi and gj , an initialization clearly dictates if a von Neumann solution is attainable. In Figure 1 for example, any point of the pink, orange or blue colored section like (e), (f) or (g) can not converge to a von Neumann solution with fi(θi) = f∗i . The notion of safety captures which initializations can converge to a given element of Solution(L).\nDefinition 3. . We will call the initialization (θ(0),φ(0)) safe for a (p,q) ∈ Solution(L) ifφi(0) and θj(0) are not stationary points of fi and gj respectively and pi ∈ Imfi(θi(0)) and qj ∈ Imgj (φj(0)).\nLeveraging the Center-Stable Manifold Theorem [55], the following observation shows that under mild assumptions almost all initializations are safe:\nTheorem 2. If fi and gj have isolated stationary points, only strict saddle points, compact sublevel-sets, both equilibria pi ∈ (max LocalMin(fi),min LocalMax(fi)) and qj ∈ (max LocalMin(gj),min LocalMax(gj)), then almost all initializations are safe for a (p,q) ∈ Solution(L).\nFinally, in the following sections we use some fundamental notions of stability. We call an equilibrium x∗ of an autonomous dynamical system ẋ = D(x(t)) stable if for every neighborhood U of x∗ there is a neighborhood V of x∗ such that if x(0) ∈ V then x(t) ∈ U for all t ≥ 0. We call a set S asymptotically stable if there exists a neighborhoodR such that for any initialization x(0) ∈ R, x(t) approaches S as t→ +∞. IfR is the whole space the set globally asymptotically stable." }, { "heading": "3 Learning in Hidden Convex Concave Games", "text": "" }, { "heading": "3.1 General Case", "text": "Our main results are based on designing a Lyapunov function for the dynamics of Equation (3):\nLemma 2. If L is convex concave and (φ(0), θ(0)) is a safe for (p,q) ∈ Solution(L), then the following quantity is non-increasing under the dynamics of Equation (3):\nH(F,G) = N∑ i=1 ∫ fi pi z − pi ‖∇fi(Xθi(0)(z))‖2 dz + M∑ j=1 ∫ gj qj z − qj ‖∇gj(Xφj(0)(z))‖2 dz (4)\nF(θ)\nG (φ\n)\n(p,q)\nFigure 2: Level sets of Lyapunov function of Equation (4) for both F and G being one dimensional sigmoid functions.\nObserve that our Lyapunov function here is not the distance to (p,q) as in a classical convex concave game. The gradient terms account for the non constant multiplicative terms in Equation (3). Indeed if the game was not hidden and fi and gj were the identity functions then H would coincide with the Euclidean distance to (p,q). Our first theorem employs the above Lyapunov function to show that (p,q) is stable for Equation (3).\nTheorem 3. If L is convex concave and (φ(0), θ(0)) is a safe for (p,q) ∈ Solution(L), then (p,q) is stable for Equation (3).\nClearly, for the special case of globally invertible functions F,G we could come up with an equivalent Lyapunov function in the θ,φ-space. In this case it is straightforward to transfer the stability results from the induced dynamical system of F,G (Equation (3)) to the initial dynamical system of θ,φ (Equation (1)). For example we can prove the following result: Theorem 4. 
If fi and gj are sigmoid functions and L is convex concave and there is a (φ(0), θ(0)) that is safe for (p,q) ∈ Solution(L), then (F−1(p),G−1(q)) is stable for Equation (1). In the general case though, stability may not be guaranteed in the parameter space of Equation (1). We will instead prove a weaker notion of stability, which we call hidden stability. Hidden stability captures that if (F(θ(0)),G(φ(0))) is close to a von Neumann solution, then (F(θ(t)),G(φ(t))) will remain close to that solution. Even though hidden stability is weaker, it is essentially what we are interested in, as the output space determines the utility that each player gets. Here we provide sufficient conditions for hidden stability. Theorem 5 (Hidden Stability). Let (p,q) ∈ Solution(L). Let Rfi and Rgj be the set of regular values1 of fi and gj respectively. Assume that there is a ξ > 0 such that [pi − ξ, pi + ξ] ⊆ Rfi and [qj − ξ, qj + ξ] ⊆ Rgj . Define\nr(t) = ‖F(θ(t))− p‖2 + ‖G(φ(t))− q‖2. If fi and gj are proper functions2, then for every > 0, there is an δ > 0 such that\nr(0) < δ =⇒ ∀t ≥ 0 : r(t) < .\nUnfortunately hidden stability still does not imply convergence to von Neumann solutions. [65] studied hidden bilinear games and proved that Ḣ = 0 for this special class of HCC games. Hence, a trajectory is restricted to be a subset of a level set of H which is bounded away from the equilibrium as shown in Figure 2. To sidestep this, we will require in the next subsection the hidden game to be strictly convex concave." }, { "heading": "3.2 Hidden strictly convex concave games", "text": "In this subsection we focus on the case where L is a strictly convex concave function. Based on Definition 1, a strictly convex concave game is not necessarily strictly convex strictly concave and thus it may have a continum of von Neumann solutions. Despite this, LaSalle’s invariance principle, combined with the strict convexity concavity, allows us to prove that if (θ(0),φ(0)) is safe for Z ⊆ Solution(L) then Z is locally asymptotically stable for Equation (3). Lemma 3. Let L be strictly convex concave and Z ⊂ Solution(L) is the non empty set of equilbria of L for which (θ(0),φ(0)) is safe. Then Z is locally asymptotically stable for Equation (3).\nThe above lemma however does not suffice to prove that for an arbitrary initialization (θ(0),φ(0)), (F(t),G(t)) approaches Z as t→ +∞. In other words, a-priori it is unclear if (F(θ(0)),G(φ(0))) is necessarily inside the region of attraction (ROA) of Z. To get a refined estimate of the ROA of Z, we analyze the behavior of H as fi and gj approach the boundaries of Imfi(θi(0)) and Imgj (φj(0)) and more precisely we show that the level sets of H are bounded. Once again the corresponding analysis is trivial for convex concave games, since the level sets are spheres around the equilibria. Theorem 6. Let L be strictly convex concave and Z ⊂ Solution(L) is the non empty set of equilbria of L for which (θ(0),φ(0)) is safe. Under the dynamics of Equation (1) (F(θ(t)),G(θ(t))) converges to a point in Z as t→∞. The theorem above guarantees convergence to a von Neumann solution for all initializations that are safe for at least one element of Solution(L). However, this is not the same as global asymptotic stability. To get even stronger guarantees, we can assume that all initializations are safe. In this case it is straightforward to get a global asymptotic stability result: Corollary 1. 
Let L be strictly convex concave and assume that all intitializations are safe for at least one element of Solution(L). The following set is globally asymptotically stable for continuous GDA dynamics.\n{(θ∗,φ∗) ∈ Rn × Rm : (F (θ∗), G(φ∗)) ∈ Solution(L)} 1A value a ∈ Im f is called a regular value of f if ∀q ∈ dom f : f(q) = a, it holds∇f(q) 6= 0. 2A function is proper if inverse images of compact subsets are compact.\nNotice that the above approach on global asymptotic convergence using Lyapunov arguments can be extended to other popular alternative gradient-based heuristics like variations of Hamiltonian Gradient Descent. For concision, we defer the exact statements, proofs in the supplement." }, { "heading": "3.3 Convergence via regularization", "text": "Regularization is a key technique that works both in the practice of GANs [47, 33] and in the theory of convex concave games [56, 59, 60]. Our settings of hidden convex concave games allows for provable guarantees for regularization in a wide class of settings, bringing closer practical and theoretical guarantees. Let us have a utility L(x,y) that is convex concave but not strictly. Here we will propose a modified utility L′ that is strictly convex strictly concave. Specifically we will choose\nL′(x,y) = L(x,y) + λ 2 ‖x‖2 − λ 2 ‖y‖2\nThe choice of the parameter λ captures the trade-off between convergence to the original equilibrium of L and convergence speed. On the one hand, invoking the implicit function theorem, we get that for small λ the equilibria of L are not significantly perturbed. Theorem 7. If L is a convex concave function with invertible Hessians at all its equilibria, then for each > 0 there is a λ > 0 such that L′ has equilibria that are -close to the ones of L.\nNote that invertibility of the Hessian means that L must have a unique equilibrium. On the other hand increasing λ increases the rate of convergence of safe initializations to the perturbed equilibrium. Theorem 8. Let (θ(0),φ(0)) be a safe initialization for the unique equilibrium of L′ (p,q). If\nr(t) = ‖F(θ(t))− p‖2 + ‖G(φ(t))− q‖2\nthen there are initialization dependent constants c0, c1 > 0 such that r(t) ≤ c0 exp(−λc1t)." }, { "heading": "4 Applications", "text": "In this section, we discuss how HCC framework can be used to give new insights in a variety of application areas including min-max training for GANs and Evolutionary Game Theory. We also describe applications of regularization to normal form zero sum games in Appendix D.3.\nHidden strictly convex-concave games. We will start our discussion with the fundamental generative architecture of [26]’s GAN. In the vanilla GAN architecture, as it is commonly referred, our goal is to find a generator distribution pG that is close to an input data distribution pdata. To find such a generator function, we can use a discriminator D that “criticizes” the deviations of the generator from the input data distribution. For the case of a discrete pdata over a set N , the minimax problem of [26] is the following:\nmin pG(x)≥0,∑\nx∈N pG(x)=1\nmax D∈(0,1)|N| V (G,D)\nwhere V (G,D) = ∑ x∈N pdata(x) log(D(x)) + ∑ x∈N pG(x) log(1 −D(x)). The problem above can be formulated as a constrained strictly convex-concave hidden game. On the one hand, for a fixed discriminator D∗, the V (G,D∗) is linear over the pG(x). On the other hand, for a fixed generator G∗,\nV (G∗, D) is strongly-concave. We can implement the inequality constraints on both the generator probabilities and discriminator using sigmoid activations. 
For the equality constraint ∑ x∈N pG(x) = 1 we can introduce a Langrange multiplier. Having effectively removed the constraints, we can see in Figure 3, the dynamics of Equation (1) converge to the unique equilibrium of the game, an outcome consistent with our results in Corollary 1. While the Euclidean distance to the equilibrium is not monotonically decreasing, H(t) is.\nHidden convex-concave games & Regularizaiton. An even more interesting case is Wassertein GANs–WGANs [4]. One of the contributions of [36] is to show that WGANs trained with Stochastic GDA can learn the parameters of Gaussian distributions whose samples are transformed by non-linear activation functions. It is worth mentioning that the original WGAN formulation has a Lipschitz constraint in the discriminator function. For simplicity, [36] replaced this constraint with a quadratic regularizer. The min-max problem for the case of one-dimensional Gaussian N (0, α2∗) and linear discriminator Dv(x) = v>x with x2 activation is:\nmin α∈R max v∈R\nVWGAN(Gα, Dv) = EX∼pdata [D(X)]− EX∼pG [D(X)]− v2/2\n= Ex∼N (0,α2∗)2 [vx]− Ex∼N (0,α2)2 [vx]− v 2/2\n= (α2∗ − α2)v − v2/2 Observe that VWGAN is not convex-concave but it can posed as a hidden strictly convex-concave game with G(α) = (α2∗−α2) and F(v) = v. When computing expectations analytically without sampling, Theorem 6 guarantees convergence. In contrast, without the regularizer VWGAN can be modeled as a hidden bilinear game and thus GDA dynamics cycle. Empirically, these results are robust to discrete and stochastic updates using sampling as shown in Figure 4. Therefore regularization in the work of [36] was a vital ingredient in their proof strategy and not just an implementation detail.\nThe two applications of HCC games in GANs are not isolated findings but instances of a broader pattern that connects HCC games and standard GAN formulations. As noted by [27], if updates in GAN applications were directly performed in the “functional space”, i.e. the generator and discriminator outputs, then standard arguments from convex concave optimization would imply convergence to global Nash equilibria. Indeed, standard GAN formulations like the vanilla GAN [26], f-GAN [50] and WGAN [4] can all be thought of as convex concave games in the space of generator and discriminator outputs. Given that the connections between convex concave games and standard GAN objectives in the output space is missing from recent literature, in Appendix D.1 we show how one can apply Von Neumann’s minimax theorem to derive the optimal generators and discriminators even in the non-realizable case. In practice, the updates happen in the parameter space and thus convexity arguments no longer apply. Our study of HCC games is a stepping stone towards bridging the gap in convergence guarantees between the case of direct updates in the output space and the parameter space.\nEvolutionary Game Theory & Biology. The study of learning dynamics in games has always been strongly and inherently connected with mathematical models of biology and evolution. Typically, this line of research is studied under the name of Evolutionary Game Theory [28, 67]. Zero-sum games and variants thereof are of particular interest for this line of work as they encode settings of direct competition between species (e.g., prey-predator or host-parasite/virus). 
Even in the simplest such setting of matrix zero-sum games, used to capture competition between asexually reproducing species, it is well known that the emerging dynamics can be non-equilibrating and even chaotic [61, 58].\nStudying the effects of evolutionary competition between sexually evolving species results in significantly more intricate models, as it does not suffice to merely keep track of the fractions of the different types of individuals that self-replicate. Instead it is necessary to keep a much more detailed accounting of the evolution of the frequencies of different genes that get reshuffled and recombined to create new individuals, whilst giving evolutionary preference to the most fit individuals given the current environment. Recent work on intersection of learning theory and game theory has provided concrete such game theoretic models [37, 14, 44, 42]. Due to the intricate nature of their dynamics, deciding even the simplest questions in regards to them (e.g. does genetic diversity survive or not?) is typically computationally hard [43].\nA notable exception, where the dynamics of sexual evolution and, in fact, sexual competition have been relatively thoroughly understood, is the work of [57], on two species (host-parasite) antagonism. The outcome of this competition depends on their respective phenotypes (informally their properties, e.g., large wings versus small wings.) of the two species. The crucial assumption that makes this model theoretically tractable is that the phenotype for each species is a Boolean attribute (this assumption is also used [38]). Despite these simplifications, the dynamics are still not equilibrating and are, in fact, cyclic for almost all initial conditions. Two natural questions emerge: 1) Is the almost everywhere condition necessary? I.e. Do there exist initial conditions which are not cyclic? 2) More importantly, can a slightly perturbed dynamic stabilize these systems and converge to a meaningful equilibrium? Next, we will see how our framework addresses both of these questions.\nTo understand the connection these we will examine the model of [57] in more detail. Concretely, the phenotype of species A,B can be described as a Boolean function over the species genome which is encoded by a binary string (this acts as a simplified version of a DNA string). While the phenotype plays the dominant role for the survival of the species, sexual reproduction modifies only the genotype of an organism. As a result the species are actually involved in a hidden zero-sum game. More formally, each species is game-theoritically represented as a team of agents where each agent controls one bit of the genotype:\nGA = (g A 1 , · · · , gAn ),GB = (gB1 , · · · , gBm)\nuA = L[PhenotypeA(GA),PhenotypeB(GB)] uB = −uA\nWhere gAi , g B j ∈ {0, 1}, PhenotypeA,PhenotypeB is a Boolean function (e.g., AND,XOR) and L is a 2× 2 matrix encoding a zero-sum game (e.g., Matching Pennies). Naturally, one can allow agents to use randomized/mixed strategies in which case the expected utilities of all agents/genes are defined using the standard multi-linear extension of utilities. Thus, these models of evolutionary sexual competition share the same basic structure as hidden linear-linear games, which explains their recurrent, non-equilibrating nature [65].\nIn Figure 5, each gene/agent gAi tunes one real variable θi such that Pr[g A i = 1] = σ(θi) and gene/agent gBj tunes one real variable φj correspondingly. 
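To make this team structure concrete, the sketch below is our own toy instantiation: two genes per species, an XOR phenotype, matching-pennies payoffs for L, gradients taken by finite differences, and an arbitrary initialization. It evaluates the multilinearly extended utility and runs discrete GDA on the gene parameters θ, φ, optionally adding a small output-space regularizer in the spirit of Theorem 7.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def p_xor(t):
    """P[XOR of two independent Bernoulli(sigmoid(t_i)) genes = 1]."""
    p1, p2 = sigmoid(t[0]), sigmoid(t[1])
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def utility(theta, phi, lam=0.0):
    """Multilinearly extended matching-pennies payoff of species A, plus an
    optional regularizer on the hidden (phenotype-probability) variables."""
    pA, pB = p_xor(theta), p_xor(phi)
    # For the payoff matrix [[1, -1], [-1, 1]] the expected payoff is (1-2pA)(1-2pB).
    return (1 - 2 * pA) * (1 - 2 * pB) - 0.5 * lam * pA ** 2 + 0.5 * lam * pB ** 2

def grad(f, x, eps=1e-6):
    """Central finite-difference gradient (keeps the sketch dependency-free)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def run(lam, dt=5e-3, steps=40_000):
    theta, phi = np.array([0.9, -0.4]), np.array([-0.3, 0.8])
    for _ in range(steps):
        g_theta = grad(lambda t: utility(t, phi, lam), theta)
        g_phi = grad(lambda p: utility(theta, p, lam), phi)
        theta, phi = theta + dt * g_theta, phi - dt * g_phi  # A ascends, B descends
    return p_xor(theta), p_xor(phi)

print("lam = 0.0 :", run(0.0))
print("lam = 0.2 :", run(0.2))
```

With λ = 0 the game shares the hidden bilinear structure noted above, so the phenotype probabilities tend to orbit the mixed equilibrium (1/2, 1/2) rather than settle on it; a small λ > 0 damps the rotation and pulls them toward it, as discussed next.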
Choosing as Boolean phenotype to be the XOR of two genes, almost all initializations are safe for any bilinear game with a mixed equilibrium. Actually, only the case θ1(0) = θ2(0) or φ1(0) = φ2(0) can be problematic, since for XOR the expected phenotype is bounded in [0, 0.5] and a mixed equilibrium out of this range would be infeasible. Finally, leveraging Theorem 7, we can design a regularized version of the game such that the dynamics converge arbitrarily close to the true von Neumann solution of these games, which is encoded by the min-max strategies of the hidden bi-linear zero-sum game." }, { "heading": "5 Discussion & Future Work", "text": "While this work is a promising first step towards understanding GAN training, significant challenges remain. Neural network architectures do not use disjoint set of parameters for each of the outputs. Additionally, the hidden competition of GANs can take place in an output space of probability distributions and classifiers whose vector space dimension is typically infinite. On the bright side, we establish point-wise (day to day) convergence results which are, to the best of our knowledge, the first result of their kind for a wide class of non-convex non-concave games that do not necessarily satisfy the Polyak-Łojasiewicz conditions studied in [68]. Such conditions imply that the notions of saddle points, global min-max and stationary points coincide. Instead our work showcases how to make progress without leveraging such strong assumptions in zero-sum games. Beyond ML applications, we believe that our framework could provide even further insights for evolutionary game theory, mathematical biology as well as team-zero-sum games. For example an interesting hybrid class of games could be network generalizations of team-zero-sums games, e.g. by combining [12] and [57].\nAcknowledgments and Disclosure of Funding\nThis research/project is supported in part by the National Research Foundation, Singapore under its AI Singapore Program (AISG Award No: AISG2-RP-2020-016), NRF 2018 Fellowship NRF-NRFF2018-07, NRF2019-NRF-ANR095 ALIAS grant, grant PIE-SGP-AI-2018-01, AME Programmatic Fund (Grant No. A20H6b0151) from the Agency for Science, Technology and Research (A*STAR). Additionally, E.V. Vlatakis-Gkaragkounis is grateful to be supported by NSF grants CCF-1703925, CCF1763970, CCF-1814873, CCF-1563155, and by the Simons Collaboration on Algorithms and Geometry. He would like to acknowledge the support of Onassis Foundation under the Scholarship(ID: FZN 010-1/2017-2018.)" } ]
2022
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
SP:5e9b5c3ee27cf90eb73e2672a1bbf18a1b12e791
[ "This paper shows a correspondence between deep neural networks (DNN) trained with noisy gradients and NNGP. It provides a general analytical form for the finite width correction (FWC) for NNSP expanding around NNGP. Finally, it argues that this FWC can be used to explain why finite width CNNs can improve the performance relative to their GP counterparts on image classification tasks." ]
A recent line of work studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs). A DNN trained with gradient flow was shown to map to a GP governed by the Neural Tangent Kernel (NTK), whereas earlier works showed that a DNN with an i.i.d. prior over its weights maps to the so-called Neural Network Gaussian Process (NNGP). Here we consider a DNN training protocol, involving noise, weight decay and finite width, whose outcome corresponds to a certain non-Gaussian stochastic process. An analytical framework is then introduced to analyze this non-Gaussian process, whose deviation from a GP is controlled by the finite width. Our contribution is three-fold: (i) In the infinite width limit, we establish a correspondence between DNNs trained with noisy gradients and the NNGP, not the NTK. (ii) We provide a general analytical form for the finite width correction (FWC) for DNNs with arbitrary activation functions and depth, and use it to predict the outputs of empirical finite networks with high accuracy. Analyzing the FWC behavior as a function of n, the training set size, we find that it is negligible both in the very small n regime and, surprisingly, in the large n regime (where the GP error scales as O(1/n)). (iii) We flesh out algebraically how these FWCs can improve the performance of finite convolutional neural networks (CNNs) relative to their GP counterparts on image classification tasks.
[ { "affiliations": [], "name": "NOISY GRADIENTS" } ]
[ { "authors": [ "Sanjeev Arora", "Simon S. Du", "Wei Hu", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang" ], "title": "On Exact Computation with an Infinitely Wide Neural Net", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Ronen Basri", "David Jacobs", "Yoni Kasten", "Shira Kritchman" ], "title": "The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Zixiang Chen", "Yuan Cao", "Quanquan Gu", "Tong Zhang" ], "title": "Mean-field analysis of two-layer neural networks: Non-asymptotic rates and generalization bounds", "venue": "arXiv preprint arXiv:2002.04026,", "year": 2020 }, { "authors": [ "Youngmin Cho", "Lawrence K. Saul" ], "title": "Kernel methods for deep learning", "venue": "In Proceedings of the 22Nd International Conference on Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Omry Cohen", "Or Malka", "Zohar Ringel" ], "title": "Learning Curves for Deep Neural Networks: A Gaussian Field Theory Perspective", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "A. Daniely", "R. Frostig", "Y. Singer" ], "title": "Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity", "venue": null, "year": 2016 }, { "authors": [ "Ethan Dyer", "Guy Gur-Ari" ], "title": "Asymptotics of wide networks from feynman diagrams", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "E. Gardner", "B. Derrida" ], "title": "Optimal storage properties of neural network models", "venue": "Journal of Physics A Mathematical General,", "year": 1988 }, { "authors": [ "Boris Hanin", "Mihai Nica" ], "title": "Finite depth and width corrections to the neural tangent kernel", "venue": "arXiv preprint arXiv:1909.05989,", "year": 2019 }, { "authors": [ "Jiaoyang Huang", "Horng-Tzer Yau" ], "title": "Dynamics of deep neural networks and neural tangent hierarchy", "venue": "arXiv preprint arXiv:1909.08156,", "year": 2019 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural Tangent Kernel: Convergence and Generalization in Neural Networks", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "Jaehoon Lee", "Jascha Sohl-dickstein", "Jeffrey Pennington", "Roman Novak", "Sam Schoenholz", "Yasaman Bahri" ], "title": "Deep neural networks as gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jaehoon Lee", "Samuel S Schoenholz", "Jeffrey Pennington", "Ben Adlam", "Lechao Xiao", "Roman Novak", "Jascha Sohl-Dickstein" ], "title": "Finite versus infinite neural networks: an empirical study", "venue": null, "year": 2007 }, { "authors": [ "Wesley Maddox", "Timur Garipov", "Pavel Izmailov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "A Simple Baseline for Bayesian Uncertainty in Deep Learning", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Stephan Mandt", "Matthew D. Hoffman", "David M. 
Blei" ], "title": "Stochastic Gradient Descent as Approximate Bayesian Inference", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Alexander G de G Matthews", "Mark Rowland", "Jiri Hron", "Richard E Turner", "Zoubin Ghahramani" ], "title": "Gaussian process behaviour in wide deep neural networks", "venue": "arXiv preprint arXiv:1804.11271,", "year": 2018 }, { "authors": [ "Peter Mccullagh" ], "title": "Tensor Methods in Statistics", "venue": "Dover Books on Mathematics,", "year": 2017 }, { "authors": [ "Song Mei", "Andrea Montanari", "Phan-Minh Nguyen" ], "title": "A mean field view of the landscape of twolayer neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Radford M Neal" ], "title": "Mcmc using hamiltonian dynamics. Handbook of markov chain monte carlo", "venue": null, "year": 2011 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jaehoon Lee", "Yasaman Bahri", "Greg Yang", "Daniel A. Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Nasim Rahaman", "Aristide Baratin", "Devansh Arpit", "Felix Draxler", "Min Lin", "Fred A. Hamprecht", "Yoshua Bengio", "Aaron Courville" ], "title": "On the Spectral Bias of Neural Networks", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Carl Edward Rasmussen", "Christopher K.I. Williams" ], "title": "Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)", "venue": null, "year": 2005 }, { "authors": [ "H. Risken", "T. Frank" ], "title": "The Fokker-Planck Equation: Methods of Solution and Applications. Springer Series in Synergetics", "venue": "URL https://books.google.co.il/books?id=MG2V9vTgSgEC", "year": 1996 }, { "authors": [ "Lawrence S Schulman" ], "title": "Techniques and applications of path integration", "venue": "Courier Corporation,", "year": 2012 }, { "authors": [ "Elena Sellentin", "Andrew H Jaffe", "Alan F Heavens" ], "title": "On the use of the edgeworth expansion in cosmology i: how to foresee and evade its pitfalls", "venue": "arXiv preprint arXiv:1709.03452,", "year": 2017 }, { "authors": [ "Peter Sollich", "Christopher KI Williams" ], "title": "Understanding gaussian process regression using the equivalent kernel", "venue": "In International Workshop on Deterministic and Statistical Methods in Machine Learning,", "year": 2004 }, { "authors": [ "Yee Whye Teh", "Alexandre H. Thiery", "Sebastian J. Vollmer" ], "title": "Consistency and fluctuations for stochastic gradient langevin dynamics", "venue": "J. Mach. Learn. 
Res.,", "year": 2016 }, { "authors": [ "Belinda Tzen", "Maxim Raginsky" ], "title": "A mean-field theory of lazy training in two-layer neural nets: entropic regularization and controlled mckean-vlasov dynamics", "venue": "arXiv preprint arXiv:2002.01987,", "year": 2020 }, { "authors": [ "Max Welling", "Yee Whye Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Sho Yaida" ], "title": "Non-gaussian processes and neural networks at finite widths", "venue": "In Mathematical and Scientific Machine Learning,", "year": 2020 }, { "authors": [ "N. Ye", "Z. Zhu", "R.K. Mantiuk" ], "title": "Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks", "venue": null, "year": 2017 }, { "authors": [ "A Zee" ], "title": "Quantum Field Theory in a Nutshell. Nutshell handbook", "venue": null, "year": 2003 }, { "authors": [ "d I" ], "title": "Because a has zero mean and a variance that scales as 1/N , all odd cumulants are zero and the 2r’th cumulant scales as 1/Nr−1. This holds true for any DNN having a fully-connected last layer with variance scaling as 1/N . The derivation of the multivariate Edgeworth series can be found in e.g. Mccullagh (2017); Sellentin et al. (2017), and our case is similar where instead of a vector-valued RV we have the functional RV f(x), so the cumulants become \"functional tensors", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have been rapidly advancing the state-of-the-art in machine learning, yet a complete analytic theory remains elusive. Recently, several exact results were obtained in the highly over-parameterized regime (N →∞ where N denotes the width or number of channels for fully connected networks (FCNs) and convolutional neural networks (CNNs), respectively) (Daniely et al., 2016). This facilitated the derivation of an exact correspondence with Gaussian Processes (GPs) known as the Neural Tangent Kernel (NTK) (Jacot et al., 2018). The latter holds when highly over-parameterized DNNs are trained by gradient flow, namely with vanishing learning rate and involving no stochasticity.\nThe NTK result has provided the first example of a DNN to GP correspondence valid after end-to-end DNN training. This theoretical breakthrough allows one to think of DNNs as inference problems with underlying GPs (Rasmussen & Williams, 2005). For instance, it provides a quantitative description of the generalization properties (Cohen et al., 2019; Rahaman et al., 2018) and training dynamics (Jacot et al., 2018; Basri et al., 2019) of DNNs. Roughly speaking, highly over-parameterized DNNs generalize well because they have a strong implicit bias to simple functions, and train well because low-error solutions in weight space can be reached by making a small change to the random values of the weights at initialization.\nDespite its novelty and importance, the NTK correspondence suffers from a few shortcomings: (a) Its deterministic training is qualitatively different from the stochastic one used in practice, which may lead to poorer performance when combined with a small learning rate (Keskar et al., 2016). (b) It under-performs, often by a large margin, convolutional neural networks (CNNs) trained with SGD (Arora et al., 2019). (c) Deriving explicit finite width corrections (FWCs) is challenging, as it requires solving a set of coupled ODEs (Dyer & Gur-Ari, 2020; Huang & Yau, 2019). Thus, there is a need for an extended theory for end-to-end trained deep networks which is valid for finite width DNNs.\nOur contribution is three-fold. First, we prove a correspondence between a DNN trained with noisy gradients and a Stochastic Process (SP) which at N → ∞ tends to the Neural Network Gaussian Process (NNGP) (Lee et al., 2018; Matthews et al., 2018). In these works, the NNGP kernel is determined by the distribution of the DNN weights at initialization which are i.i.d. random variables, whereas in our correspondence the weights are sampled across the stochastic training dynamics, drifting far away from their initial values. We call ours the NNSP correspondence, and show that it holds when the training dynamics in output space exhibit ergodicity.\nSecond, we predict the outputs of trained finite-width DNNs, significantly improving upon the corresponding GP predictions. This is done by deriving leading FWCs which are found to scale with width as 1/N . The accuracy at which we can predict the empirical DNNs’ outputs serves as a strong verification for our aforementioned ergodicity assumption. In the regime where the GP RMSE error scales as 1/n, we find that the leading FWC are a decaying function of n, and thus overall negligible. In the small n regime we find that the FWC is small and grows with n. We thus conclude that finite-width corrections are important for intermediate values of n (Fig. 
1).\nThird, we propose an explanation for why finite CNNs trained on image classification tasks can outperform their infinite-width counterparts, as observed by Novak et al. (2018). The key difference is that in finite CNNs weight sharing is beneficial. Our theory, which accounts for the finite width, quantifies this difference (§4.2).\nOverall, the NNSP correspondence provides a rich analytical and numerical framework for exploring the theory of deep learning, unique in its ability to incorporate finite over-parameterization, stochasticity, and depth. We note that there are several factors that make finite SGD-trained DNNs used in practice different from their GP counterparts, e.g. large learning rates, early stopping etc. (Lee et al., 2020). Importantly, our framework quantifies the contribution of finite-width effects to this difference, distilling it from the contribution of these other factors." }, { "heading": "1.1 RELATED WORK", "text": "The idea of leveraging the dynamics of the gradient descent algorithm for approximating Bayesian inference has been considered in various works (Welling & Teh, 2011; Mandt et al., 2017; Teh et al., 2016; Maddox et al., 2019; Ye et al., 2017). However, to the best of our knowledge, a correspondence with a concrete SP or a non-parametric model was not established nor was a comparison made of the DNN’s outputs with analytical predictions.\nFinite width corrections were studied recently in the context of the NTK correspondence by several authors. Hanin & Nica (2019) study the NTK of finite DNNs, but where the depth scales together with width, whereas we keep the depth fixed. Dyer & Gur-Ari (2020) obtained a finite N correction to the linear integral equation governing the evolution of the predictions on the training set. Our work differs in several aspects: (a) We describe a different correspondence under different a training protocol with qualitatively different behavior. (b) We derive relatively simple formulae for the outputs which become entirely explicit at large n. (c) We account for all sources of finite N corrections whereas finite N NTK randomness remained an empirical source of corrections not accounted for by Dyer & Gur-Ari (2020). (d) Our formalism differs considerably: its statistical mechanical nature enables one to import various standard tools for treating randomness, ergodicity breaking, and taking into account non-perturbative effects. (e) We have no smoothness limitation on our activation functions and provide FWCs on a generic data point and not just on the training set.\nAnother recent paper (Yaida, 2020) studied Bayesian inference with weakly non-Gaussian priors induced by finite-N DNNs. Unlike here, there was no attempt to establish a correspondence with trained DNNs. The formulation presented here has the conceptual advantage of representing a distribution over function space for arbitrary training and test data, rather than over specific draws of data sets. This is useful for studying the large n behavior of learning curves, where analytical insights into generalization can be gained (Cohen et al., 2019).\nA somewhat related line of work studied the mean field regime of shallow NNs (Mei et al., 2018; Chen et al., 2020; Tzen & Raginsky, 2020). We point out the main differences from our work: (a) The NN output is scaled differently with width. (b) In the mean field regime one is interested in the dynamics (finite t) of the distribution over the NN parameters in the form of a PDE of the Fokker-Planck type. 
In contrast, in our framework we are interested in the distribution over function\nspace at equilibrium, i.e. for t→∞. (c) It seems that the mean field analysis is tailored for two-layer fully-connected NNs and is hard to generalize to deeper nets or to CNNs. In contrast, our formalism generalizes to deeper fully-connected NNs and to CNNs as well, as we showed in section 4.2." }, { "heading": "2 THE NNSP CORRESPONDENCE", "text": "In this section we show that finite-width DNNs, trained in a specific manner, correspond to Bayesian inference using a non-parametric model which tends to the NNGP as N → ∞. We first give a short review of Langevin dynamics in weight space as described by Neal et al. (2011), Welling & Teh (2011), which we use to generate samples from the posterior over weights. We then shift our perspective and consider the corresponding distribution over functions induced by the DNN, which characterizes the non-parametric model.\nRecap of Langevin-type dynamics - Consider a DNN trained with full-batch gradient descent while injecting white Gaussian noise and including a weight decay term, so that the discrete time dynamics of the weights read\n∆wt := wt+1 − wt = − (γwt +∇wL (zw)) dt+ √ 2Tdtξt (1)\nwhere wt is the vector of all network weights at time step t, γ is the strength of the weight decay, L(zw) is the loss as a function of the output zw, T is the temperature (the magnitude of noise), dt is the learning rate and ξt ∼ N (0, I). As dt → 0 these discrete-time dynamics converge to the continuous-time Langevin equation given by ẇ (t) = −∇w ( γ 2 ||w(t)|| 2 + L (zw) ) + √\n2Tξ (t) with 〈ξi(t)ξj(t′)〉 = δijδ (t− t′), so that as t → ∞ the weights will be sampled from the equilibrium distribution in weight space, given by (Risken & Frank, 1996)\nP (w) ∝ exp ( − 1 T (γ 2 ||w||2 + L (zw) )) = exp ( − ( 1 2σ2w ||w||2 + 1 2σ2 L (zw) )) (2)\nThe above equality holds since the equilibrium distribution of the Langevin dynamics is also the posterior distribution of a Bayesian neural network (BNN) with an i.i.d. Gaussian prior on the weights w ∼ N (0, σ2wI). Thus we can map the hyper-parameters of the training to those of the BNN: σ2w = T/γ and σ\n2 = T/2. Notice that a sensible scaling for the weight variance at layer ` is σ2w,` ∼ O(1/N`−1), thus the weight decay needs to scale as γ` ∼ O(N`−1).\nA transition from weight space to function space - We aim to move from a distribution over weight space Eq. 2 to a one over function space. Namely, we consider the distribution of zw(x) implied by the above P (w) where for concreteness we consider a DNN with a single scalar output zw(x) ∈ R on a regression task with data {(xα, yα)}nα=1 ⊂ Rd × R. Denoting by P [f ] the induced measure on function space we formally write\nP [f ] = ∫ dwδ[f − zw]P (w) ∝ e− 1 2σ2 L[f ] ∫ dwe − 1 2σ2w ||w||2 δ[f − zw] (3)\nwhere ∫ dw denotes an integral over all weights and we denote by δ[f − zw] a delta-function in function-space. As common in path-integrals or field-theory formalism (Schulman, 2012), such a delta function is understood as a limit procedure where one chooses a suitable basis for function space, trims it to a finite subset, treats δ[f − zw] as a product of regular delta-functions, and at the end of the computation takes the size of the subset to infinity.\nTo proceed we decompose the posterior over functions Eq. 3 as P [f ] ∝ e− 1 2σ2 L[f ]P0[f ] where the prior over functions is P0[f ] ∝ ∫ dwe − 1 2σ2w ||w||2\nδ[f − zw]. 
The integration over weights now obtains a clear meaning: it yields the distribution over functions induced by a DNN with i.i.d. random weights chosen according to the prior P0(w) ∝ e − 1 2σ2w ||w||2\n. Thus, we can relate any correlation function in function space and weight space, for instance (Df is the integration measure over function space)∫ DfP0[f ]f(x)f(x′) = ∫ Df ∫ dwP0(w)δ[f−zw]f(x)f(x′) = ∫ dwP0(w)zw(x)zw(x ′) (4)\nAs noted by Cho & Saul (2009), for highly over-parameterized DNNs the r.h.s. of 4 equals the kernel of the NNGP associated with this DNN, K(x, x′). Moreover P0[f ] tends to a Gaussian and can be\nwritten as\nP0[f ] ∝ exp ( −1\n2\n∫ dµ(x)dµ(x′)f(x)K−1(x, x′)f(x′) ) +O (1/N) (5)\nwhere µ(x) is the measure of the input space, and the O(1/N) scaling of the finite-N correction will be explained in §3. If we now plug 5 in 3, take the loss to be the total square error1 L[f ] =∑n\nα=1 (yα − f (xα)) 2, and take N →∞ we have that the posterior P [f ] is that of a GP. Assuming ergodicity, one finds that training-time averaged output of the DNN is given by the posterior mean of a GP, with measurement noise2 equal to σ2 = T/2 and a kernel given by the NNGP of that DNN.\nWe refer to the above expressions for P0[f ] and P [f ] describing the distribution of outputs of a DNN trained according to our protocol – the NNSP correspondence. Unlike the NTK correspondence, the kernel which appears here is different and no additional initialization dependent terms appear (as should be the case since we assumed ergodicity). Furthermore, given knowledge of P0[f ] at finite N , one can predict the DNN’s outputs at finite N . Henceforth, we refer to P0[f ] as the prior distribution, as it is the prior distribution of a DNN with random weights drawn from P0(w).\nEvidence supporting ergodicity - Our derivation relies on the ergodicity of the dynamics. Ergodicity is in general hard to prove rigorously in non-convex settings, and thus we must revert to heuristics. The most robust evidence of ergodicity in function space is the high level of accuracy of our analytical expressions w.r.t. to our numerical results. This is a self-consistency argument: we assume ergodicity in order to derive our analytical results and then indeed find that they agree very well with the experiment, thus validating our original assumption.\nAnother indicator of ergodicity is a small auto-correlation time (ACT) of the dynamics. Although short ACT does not logically imply ergodicity (in fact, the converse is true: exponentially long ACT implies non-ergodic dynamics). However, the empirical ACT gives a lower bound on the true correlation time of the dynamics. In our framework, it is sufficient that the dynamics of the outputs zw be ergodic, even if the dynamics of the weights converge much slower to an equilibrium distribution. Indeed, we have found that the ACTs of the outputs are considerably smaller than those of the weights (see Fig. 2b). Full ergodicity may be too strong of a condition and we don’t really need it for our purposes, since we are mainly interested in collecting statistics that will allow us to accurately compute the posterior mean of the distribution in function space. Thus, a weaker condition which is sufficient here is ergodicity in the mean (see App. 
F), and we believe our self-consistent argument above demonstrates that it holds.\nIn a related manner, optimizing the train loss can be seen as an attempt to find a solution to n constraints using far more variables (roughly M · N2 where M is the number of layers). From a different angle, in a statistical mechanical description of satisfiability problems, one typically expects ergodic behavior when the ratio of the number of variables to the number of constraints becomes much larger than one (Gardner & Derrida, 1988)." }, { "heading": "3 INFERENCE ON THE RESULTING NNSP", "text": "Having mapped the time-averaged outputs of a DNN to inference on the above NNSP, we turn to analyze the predictions of this NNSP in the case where N is large but finite, such that the NNSP is only weakly non-Gaussian (i.e. its deviation from a GP is O(1/N)). The main result of this section is a derivation of leading FWCs to the standard GP regression results for the posterior mean f̄GP(x∗) and variance ΣGP(x∗) on an unseen test point x∗, given a training set {(xα, yα)}nα=1 ⊂ Rd × R, namely (Rasmussen & Williams, 2005)\nf̄GP(x∗) = ∑ α,β yαK̃ −1 αβK ∗ β ; ΣGP(x∗) = K ∗∗ − ∑ α,β K∗αK̃ −1 αβK ∗ β (6)\nwhere K̃αβ := K(xα, xβ) + σ2δαβ ; K∗α := K(x∗, xα); K ∗∗ := K(x∗, x∗).\n1We take the total error, i.e. we don’t divide by n so that L[f ] becomes more dominant for larger n. 2Here σ2 is a property of the training protocol and not of the data itself, or our prior on it." }, { "heading": "3.1 EDGEWORTH EXPANSION AND PERTURBATION THEORY", "text": "Our first task is to find how P [f ] changes compared to the Gaussian (N → ∞) scenario. As the data-dependent part e−L[f ]/2σ 2\nis independent of the DNNs, this amounts to obtaining finite width corrections to the prior P0[f ]. One way to characterize this is to perform an Edgeworth expansion of P0[f ] (Mccullagh, 2017; Sellentin et al., 2017). We give a short recap of the Edgeworth expansion to elucidate our derivation, beginning with a scalar valued RV. Consider continuous iid RVs {Zi} and assume WLOG 〈Zi〉 = 0, 〈 Z2i 〉\n= 1, with higher cumulants κZr for r ≥ 3. Now consider their normalized sum YN = 1√N ∑N i=1 Zi. From additivity and homogeneity of cumulants we have κr≥2 := κYr≥2 = NκZr ( √ N)r = κZr Nr/2−1 . Now, let ϕ(y) := (2π)−1/2e−y 2/2. The charac-\nteristic function of Y is f̂(t) := F [f(y)] = exp (∑∞\nr=1 κr (it)r r!\n) = exp (∑∞ r=3 κr (it)r r! ) ϕ̂(t).\nTaking the inverse Fourier transform F−1 has the effect of mapping it 7→ −∂y thus we get f(y) = exp (∑∞ r=3 κr (−∂y)r r! ) ϕ(y) = ϕ(y) ( 1 + ∑∞ r=3 κr r!Hr(y) ) where Hr(y) is the rth Hermite polynomial, e.g. H4(y) = y4− 6y2 + 3. If we were to consider vector-valued RVs, then the r’th order cumulant becomes a tensor with r indices, and the Hermite polynomials become multi-variate polynomials. In our case, we are considering random functions defined by our stochastic process (the NNSP), thus the cumulants are functional tensors, i.e. are continuously indexed by the inputs xα.\nThis is especially convenient here since for all DNNs with the last layer being fully-connected, all odd cumulants vanish and the 2rth cumulant scales as 1/Nr−1. Consequently, at large N we can characterize P0[f ] up to O(N−2) by its second and fourth cumulants, K(x1, x2) and U(x1, x2, x3, x4), respectively. 
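The vanishing of the odd cumulants and the 1/N decay of the fourth cumulant are easy to check numerically. The following Monte Carlo sketch is our own toy check (not an experiment from this work): it samples the output of a random 2-layer ReLU network at a single input, using the fact that for an input on S^{d−1}(√d) with σ_w² = 1/d the pre-activations are i.i.d. standard normals, so they can be sampled directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_samples(N, n_samples=200_000):
    """Samples of f = sum_i a_i * relu(z_i) for random 2-layer nets of width N at a
    single input; z_i = w_i . x are i.i.d. N(0, 1) under the stated assumptions."""
    z = rng.standard_normal((n_samples, N))
    a = rng.standard_normal((n_samples, N)) / np.sqrt(N)   # sigma_a^2 = 1/N
    return np.sum(a * np.maximum(z, 0.0), axis=1)

for N in (8, 16, 32, 64):
    f = output_samples(N)
    k2 = np.mean(f ** 2)                  # 2nd cumulant = K(x, x): N-independent
    k3 = np.mean(f ** 3)                  # odd cumulants vanish
    k4 = np.mean(f ** 4) - 3 * k2 ** 2    # 4th cumulant: expected to scale as 1/N
    print(f"N={N:3d}  K(x,x)={k2:.3f}  kappa3={k3:+.3f}  N*kappa4={N * k4:.2f}")
```

Up to Monte Carlo noise, K(x, x) and the product N·κ4 should come out roughly N-independent while κ3 is consistent with zero, mirroring the cumulant hierarchy used above.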
Thus the leading order correction to P0[f ] reads\nP0 [f ] ∝ e−SGP[f ] (\n1− 1 N SU [f ]\n) +O ( 1/N2 ) (7)\nwhere the GP action SGP and the first FWC action SU are given by\nSGP[f ] = 1\n2\n∫ dµ1:2fx1K −1 x1,x2fx2 ; SU [f ] = − 1\n4!\n∫ dµ1:4Ux1,x2,x3,x4Hx1,x2,x3,x4 [f ] (8)\nHere, H is the 4th functional Hermite polynomial (see App. A), U is the 4th order functional cumulant of the NN output3, which depends on the choice of the activation function φ Ux1,x2,x3,x4 = ς 4 a (〈φαφβφγφδ〉 − 〈φαφβ〉 〈φγφδ〉) + 2 other perms. of (α, β, γ, δ) ∈ {1, . . . , 4} (9) where φα := φ(z`−1i (xα)) and the pre-activations are z ` i (x) = b ` i + ∑N` j=1Wijφ(z `−1 j (x)). Here we distinguished between the scaled and non-scaled weight variances: σ2a = ς 2 a/N , where a are the weights of the last layer. Our shorthand notation for the integration measure over inputs means e.g. dµ1:4 := dµ(x1) · · · dµ(x4). Using perturbation theory, in App. B we compute the leading FWC to the posterior mean f̄(x∗) and variance 〈 (δf(x∗)) 2 〉\non a test point x∗ f̄(x∗) = f̄GP(x∗) +N\n−1f̄U (x∗) +O(N−2)〈 (δf(x∗)) 2 〉 = ΣGP(x∗) +N −1ΣU (x∗) +O(N−2)\n(10)\nwith ΣU (x∗) = 〈 (f(x∗)) 2 〉 U − 2f̄GP(x∗)f̄U (x∗) and\nf̄U (x∗) = 1\n6 Ũ∗α1α2α3\n( ỹα1 ỹα2 ỹα3 − 3K̃−1α1α2 ỹα3 ) 〈 (f(x∗)) 2 〉 U = 1 2 Ũ∗∗α1α2 ( ỹα1 ỹα2 − K̃−1α1α2\n) (11) where all repeating indices are summed over the training set (i.e. range over {1, . . . , n}), denoting: ỹα := K̃ −1 αβ yβ , and defining\nŨ∗α1α2α3 := U ∗ α1α2α3 − Uα1α2α3α4K̃ −1 α4β K∗β\nŨ∗∗α1α2 := U ∗∗ α1α2 − ( U∗α1α2α3 + Ũ ∗ α1α2α3 ) K̃−1α3βK ∗ β\n(12)\n3Here we take U ∼ O(1) to emphasize the scaling with N in Eqs. 7, 10.\nEquations 11, 12 are one of our key analytical results, which are qualitatively different from the corresponding GP expressions Eq. 6. The correction to the predictive mean f̄U (x∗) has a linear term in y, which can be viewed as a correction to the GP kernel, but also a cubic term in y, unlike f̄GP(x∗) which is purely linear. The correction to the predictive variance ΣU (x∗) has quartic and quadratic terms in y, unlike ΣGP(x∗) which is y-independent. Ũ∗α1α2α3 has a clear interpretation in terms of GP regression: if we consider the indices α1, α2, α3 as fixed, then U∗α1α2α3 can be thought of as the ground truth value of a target function (analogous to y∗), and the second term on the r.h.s. Uα1α2α3α4K̃ −1 α4β K∗β is then the GP prediction of U ∗ α1α2α3 with the kernel K, where α4 runs on the training set (compare to f̄GP(x∗) in Eq. 6). Thus Ũ∗α1α2α3 is the discrepancy in predicting Uα1α2α3α4 using a GP with kernel K. In §3.2 we study the behavior of f̄U (x∗) as a function of n.\nThe posterior variance Σ(x) = 〈 (δf (x)) 2 〉\nhas a clear interpretation in our correspondence: it is a measure of how much we can decrease the test loss by averaging. Our procedure for generating empirical network outputs involves time-averaging over the training dynamics after reaching equilibrium and also over different realizations of noise and initial conditions (see App. F). This allows for a reliable comparison with our FWC theory for the mean. In principle, one could use the network outputs at the end of training without this averaging, in which case there will be fluctuations that will scale with Σ(xα). Following this, one finds that the expected MSE test loss after training saturates is n−1∗ ∑n∗ α=1 (〈( f̄ (xα)− y (xα) )2〉 + Σ(xα) ) where n∗ is the size of the test set." }, { "heading": "3.2 FINITE WIDTH CORRECTIONS FOR SMALL AND LARGE DATA SETS", "text": "The expressions in Eqs. 
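In practice Eqs. 10–12 amount to a handful of tensor contractions. The sketch below is ours (the function name, placeholder tensors and shapes are not from this work); it evaluates the GP mean of Eq. 6 together with the leading 1/N correction of Eq. 11 for a single test point, given the kernel and the relevant slices of the fourth cumulant U.

```python
import numpy as np

def fwc_posterior_mean(K, K_star, y, U4, U3_star, sigma2, N):
    """Posterior mean with the leading finite-width correction, Eqs. 10-12.
    K:       (n, n)        kernel on the training set
    K_star:  (n,)          kernel between the test point and the training set
    y:       (n,)          training targets
    U4:      (n, n, n, n)  4th cumulant U(x_a, x_b, x_c, x_d) on the training set
    U3_star: (n, n, n)     U(x_*, x_a, x_b, x_c), one leg on the test point
    """
    n = len(y)
    K_tilde_inv = np.linalg.inv(K + sigma2 * np.eye(n))
    y_tilde = K_tilde_inv @ y

    f_gp = K_star @ y_tilde                                    # Eq. 6

    # Eq. 12: discrepancy of GP-predicting U(x_*, ., ., .) from its train-set values
    U3_tilde = U3_star - np.einsum('abcd,de,e->abc', U4, K_tilde_inv, K_star)

    # Eq. 11: cubic and linear terms in the targets
    cubic = np.einsum('abc,a,b,c->', U3_tilde, y_tilde, y_tilde, y_tilde)
    linear = 3.0 * np.einsum('abc,ab,c->', U3_tilde, K_tilde_inv, y_tilde)
    f_u = (cubic - linear) / 6.0

    return f_gp + f_u / N                                      # Eq. 10

# Toy shapes only: in an actual computation K and the U slices would come from the
# architecture's second and fourth cumulants (Eq. 9 and Appendix A).
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
print(fwc_posterior_mean(K, rng.standard_normal(n), rng.standard_normal(n),
                         rng.standard_normal((n,) * 4), rng.standard_normal((n,) * 3),
                         sigma2=0.1, N=1000))
```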
6, 11 for the GP prediction and the leading FWC are explicit but only up to a potentially large matrix inversion, K̃−1. These matrices also have a random component related to the largely arbitrary choice of the particular n training points used to characterize the target function. An insightful tool, used in the context of GPs, which solves both these issues is the Equivalent Kernel (EK) (Rasmussen & Williams, 2005; Sollich & Williams, 2004). The EK approximates the GP predictions at large n, after averaging on all draws of (roughly) n training points representing the target function. Even if one is interested in a particular dataset, the EK result captures the behavior of specific dataset up to small corrections. Essentially, the discrete sums over the training set appearing in Eq. 6 are replaced by integrals over all input space, which together with a spectral decomposition of the kernel function K(x, x′) = ∑ i λiψi(x)ψi(x ′) yields the well known result\nf̄EKGP (x∗) =\n∫ dµ(x′) ∑ i λiψi(x∗)ψi(x ′) λi + σ2/n g(x′) (13)\nHere we develop an extension of Eq. 13 for the NNSPs we find at large but finite N . In particular, we find the leading non-linear correction to the EK result, i.e. the \"EK analogue\" of Eq. 11. To this end, we consider the average predictions of an NNSP trained on an ensemble of data sets of size n′, corresponding to n′ independent draws from a distribution µ(x) over all possible inputs x. Following the steps in App. J we find\nf̄EKU (x∗) = 1\n6 δ̃x∗x1Ux1,x2,x3,x4\n{ n3\nσ6 δ̃x2x′2g(x\n′ 2)δ̃x3x′3g(x ′ 3)δ̃x4x′4g(x ′ 4)−\n3n2\nσ4 δ̃x2,x3 δ̃x4,x′4g(x\n′ 4) } (14)\nwhere an integral ∫ dµ(x) is implicit for every pair of repeated x coordinates. We introduced the\ndiscrepancy operator δ̃xx′ which acts on some function ϕ as ∫ dµ(x′)δ̃xx′ϕ(x ′) := δ̃xx′ϕ(x ′) = ϕ(x)− f̄EKGP (x). Essentially, Eq. 14 is derived from Eq. 11 by replacing each K̃−1 by (n/σ2)δ̃ and noticing that in this regime Ũ∗x2,x3,x4 in Eq. 12 becomes δ̃x∗x1Ux1,x2,x3,x4 . Interestingly, f̄ EK U (x∗) is written explicitly in terms of meaningful quantities: δ̃xx′g(x′) and δ̃x∗x1Ux1,x2,x3,x4 .\nEquations 13, 14 are valid for any weakly non-Gaussian process, including ones related to CNNs (where N corresponds to the number of channels). It can also be systematically extended to smaller values of n by taking into account higher terms in 1/n, as in Cohen et al. (2019). At N →∞, we obtain the standard EK result, Eq. 13. It is basically a high-pass linear filter which filters out features of g that have support on eigenfunctions ψi associated with eigenvalues λi that are small relative to σ2/n. We stress that the ψi, λi’s are independent of any particular size n dataset but rather are\na property of the average dataset. In particular, no computationally costly data dependent matrix inversion is needed to evaluate Eq. 13.\nTurning to our FWC result, Eq. 14, it depends on g(x) only via the discrepancy operator δ̃xx′ . Thus these FWCs would be proportional to the error of the DNN, at N → ∞. In particular, perfect performance at N → ∞, implies no FWC. Second, the DNN’s average predictions act as a linear transformation on the target function combined with a cubic non-linearity. Third, for g(x) having support only on some finite set of eigenfunctions ψi of K, δ̃xx′g(x′) would scale as σ2/n at very large n. Thus the above cubic term would lose its explicit dependence on n. 
The scaling with n of this second term is less obvious, but numerical results suggest that δ̃x2x3 also scales as σ\n2/n, so that the whole expression in the {· · · } has no scaling with n. In addition, some decreasing behavior with n is expected due to the δ̃x∗x1Ux1,x2,x3,x4 factor which can be viewed as the discrepancy in predicting Ux,x2,x3,x4 , at fixed x2, x3, x4, based on n random samples (xα’s) of Uxα,x2,x3,x4 . In Fig. 1 we illustrate this behavior at large n and also find that for small n the FWC is small but increasing with n, implying that at large N FWCs are only important at intermediate values of n." }, { "heading": "4 NUMERICAL EXPERIMENTS", "text": "In this section we numerically test our analytical results. We first demonstrate that in the limit N →∞ the outputs of a FCN trained in the regime of the NNSP correspondence converge to a GP with a known kernel, and that the MSE between them scales as ∼ 1/N2 which is the scaling of the leading FWC squared. Second, we show that introducing the leading FWC term N−1f̄U (x∗), Eq. 11, further reduces this MSE by more than an order of magnitude. Third, we study the performance gap between finite CNNs and their corresponding NNGPs on CIFAR-10." }, { "heading": "4.1 TOY EXAMPLE: FULLY CONNECTED NETWORKS ON SYNTHETIC DATA", "text": "We trained a 2-layer FCN f(x) = ∑N i=1 aiφ(w\n(i) · x) on a quadratic target y(x) = xTAx where the x’s are sampled with a uniform measure from the hyper-sphere Sd−1( √ d), see App. G.1 for more details. Our settings are such that there are not enough training points to fully learn the target: Fig. 2a shows that the time averaged outputs (after reaching equilibrium) f̄DNN(x∗) is much closer to the GP prediction f̄GP(x∗) than to the ground truth y∗. Otherwise, the convergence of the network output to the corresponding NNGP as N grows (shown in Fig. 2c) would be trivial, since all reasonable estimators would be close to the target and hence close to each other.\nIn Fig. 2c we plot in log-log scale (with base 10) the MSE (normalized by (f̄DNN)2) between the predictions of the network f̄DNN and the corresponding GP and FWC predictions for quadratic and ReLU activations. We find that indeed for sufficiently large widths (N & 500) the slope of the GP-DNN MSE approaches −2.0 (for both ReLU and quadratic), which is expected from our theory, since the leading FWC scales as 1/N . For smaller widths, higher order terms (in 1/N ) in the Edgeworth series Eq. 7 come into play. For quadratic activation, we find that our FWC result further reduces the MSE by more than an order of magnitude relative to the GP theory. We recognize a regime where the GP and FWC MSEs intersect atN . 100, below which our FWC actually increases the MSE, which suggests a scale of how large N needs to be for our leading FWC theory to hold." }, { "heading": "4.2 PERFORMANCE GAP BETWEEN FINITE CNNS AND THEIR NNGP", "text": "Several papers have shown that the performance on image classification tasks of SGD-trained finite CNNs can surpass that of the corresponding GPs, be it NTK (Arora et al., 2019) or NNGP (Novak et al., 2018). More recently, Lee et al. (2020) emphasized that this performance gap depends on the procedure used to collapse the spatial dimensions of image-shaped data before the final readout layer: flattening the image into a one-dimensional vector (CNN-VEC) or applying global average pooling to the spatial dimensions (CNN-GAP). 
It was observed that while infinite FCN and CNNVEC outperform their respective finite networks, infinite CNN-GAP networks under-perform their finite-width counterparts, i.e. there exists a finite optimal width.\nOne notable margin, of about 15% accuracy on CIFAR10, was shown in Novak et al. (2018) for the case of CNN-GAP. It was further pointed out there, that the NNGPs associated with CNN-VEC, coincide with those of the corresponding Locally Connected Networks (LCNs), namely CNNs without weight sharing between spatial locations. Furthermore, the performance of SGD-trained LCNs was found to be on par with that of their NNGPs. We argue that our framework can account for this observation. The priors P0[f ] of a LCN and CNN-VEC agree on their second cumulant (the covariance), which is the only one not vanishing as N →∞, but they need not agree on their higher order cumulants, which come into play at finite N . In App. I we show that U appearing in our leading FWC, already differentiates between CNNs and LCNs. Common practice strongly suggests that the prior over functions induced by CNNs is better suited than that of LCNs for classification of natural images. As a result we expect that the test loss of a finite-width CNN trained using our protocol will initially decrease with N but then increase beyond some optimal width Nopt, tending towards the loss of the corresponding GP as N →∞. This is in contrast to SGD behavior reported in some works where the CNN performance seems to saturate as a function of N , to some value better than\nthe NNGP (Novak et al., 2018; Neyshabur et al., 2018). Notably those works used maximum over architecture scans, high learning rates, and early stopping, all of which are absent from our training protocol.\nTo test the above conjecture we trained, according to our protocol, a CNN with six convolutional layers and two fully connected layers on CIFAR10, and used CNN-VEC for the readout. We used MSE loss with a one-hot encoding into a 10 dimensional vector of the categorical label; further details and additional settings are given in App. G. Fig. 3 demonstrates that, using our training protocol, a finite CNN can outperform its corresponding GP and approaches its GP as the number of channels increases. This phenomenon was observed in previous studies under realistic training settings (Novak et al., 2018), and here we show that it appears also under our training protocol. We note that a similar yet more pronounced trend in performance appears here also when one considers the averaged MSE loss rather the the MSE loss of the average outputs." }, { "heading": "5 CONCLUSION", "text": "In this work we presented a correspondence between finite-width DNNs trained using Langevin dynamics (i.e. using small learning rates, weight-decay and noisy gradients) and inference on a stochastic-process (the NNSP), which approaches the NNGP as N → ∞. We derived finite width corrections, that improve upon the accuracy of the NNGP approximation for predicting the DNN outputs on unseen test points, as well as the expected fluctuations around these. In the limit of a large number of training points n → ∞, explicit expressions for the DNNs’ outputs were given, involving no costly matrix inversions. In this regime, the FWC can be written in terms of the discrepancy of GP predictions, so that when GP has a small test error the FWC will be small, and vice versa. 
In the small n regime, the FWC is small but grows with n, which implies that at large N , FWCs are only important at intermediate values of n. For no-pooling CNNs, we build on an observation made by Novak et al. (2018) that finite CNNs outperform their corresponding NNGPs, and show that this is because the leading FWCs reflect the weight-sharing property of CNNs which is ignored at the level of the NNGP. This constitutes one real-world example where the FWC is well suited to the structure of the data distribution, and thus improves performance relative to the corresponding GP. In a future study, it would be very interesting to consider well controlled toy models that can elucidate under what conditions on the architecture and data distribution does the FWC improve performance relative to GP." }, { "heading": "A EDGEWORTH SERIES", "text": "The Central Limit Theorem (CLT) tells us that the distribution of a sum of N independent RVs will tend to a Gaussian as N → ∞. Its relevancy for wide fully-connected DNNs (or CNNs with many channels) comes from the fact that every pre-activation averages over N uncorrelated random variables thereby generating a Gaussian distribution at large N (Cho & Saul, 2009), augmented by higher order cumulants which decay as 1/Nr/2−1, where r is the order of the cumulant. When higher order cumulants are small, an Edgeworth series (see e.g. Mccullagh (2017); Sellentin et al. (2017)) is a useful practical tool for obtaining the probability distribution from these cumulants. Having the probability distribution and interpreting its logarithm as our action, places us closer to standard field-theory formalism.\nFor simplicity we focus on a 2-layer network, but the derivation generalizes straightforwardly to networks of any depth. We are interested in the finite N corrections to the prior distribution P0[f ], i.e. the distribution of the DNN output f(x) = ∑N i=1 aiφ(w T i x), with ai ∼ N (0, ς2a N ) and wi ∼ N (0, ς 2 w\nd I). Because a has zero mean and a variance that scales as 1/N , all odd cumulants are zero and the 2r’th cumulant scales as 1/Nr−1. This holds true for any DNN having a fully-connected last layer with variance scaling as 1/N . The derivation of the multivariate Edgeworth series can be found in e.g. Mccullagh (2017); Sellentin et al. (2017), and our case is similar where instead of a vector-valued RV we have the functional RV f(x), so the cumulants become \"functional tensors\" i.e. multivariate functions of the input x. Thus, the leading FWC to the prior P0[f ] is\nP0 [f ] = 1\nZ e−SGP[f ]\n[ 1 + 1\n4!\n∫ dµ (x1) · · · dµ (x4)U (x1, x2, x3, x4)H [f ; x1, x2, x3, x4] ] +O(1/N2)\n(A.1)\nwhere SGP[f ] is as in the main text Eq. 8 and the 4th Hermite functional tensor is\nH [f ] = ∫ dµ (x′1) · · · dµ (x′4)K−1 (x1, x′1) · · ·K−1 (x4, x′4) f (x′1) · · · f (x′4)\n−K−1 (xα, xβ) ∫ dµ ( x′µ ) dµ (x′ν)K −1 (xµ, x′µ)K−1 (xν , x′ν) f (x′µ) f (x′ν) [6] (A.2) +K−1 (xα, xβ)K −1 (xµ, xν) [3]\nwhere by the integers in [·] we mean all possible combinations of this form, e.g.\nK−1αβK −1 µν = K −1 12 K −1 34 +K −1 13 K −1 24 +K −1 14 K −1 23 (A.3)\nH[f ] is the functional analogue of the fourth Hermite polynomial: H4 (x) = x4 − 6x2 + 3, which appears in the scalar Edgeworth series expanded about a standard Gaussian." 
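The cumulant scaling that justifies truncating the Edgeworth series can be checked numerically with a short Monte Carlo sketch: for the 2-layer output f(x) = sum_i a_i phi(w_i . x) with a_i ~ N(0, 1/N), the 4th cumulant of f at a fixed input should decay as 1/N. The sample sizes and the choice of ReLU below are illustrative, not taken from the paper.

```python
import numpy as np

# For an input normalized to the sphere S^{d-1}(sqrt(d)) and w_i ~ N(0, I/d), the
# pre-activations w_i . x are i.i.d. N(0, 1), so we sample them directly instead of
# drawing the weights explicitly.
rng = np.random.default_rng(0)
n_draws = 100_000

def fourth_cumulant(N, phi=lambda t: np.maximum(t, 0.0)):       # phi = ReLU
    z = rng.standard_normal((n_draws, N))                       # pre-activations w_i . x
    a = rng.standard_normal((n_draws, N)) / np.sqrt(N)          # readout weights
    f = np.sum(a * phi(z), axis=1)                              # network outputs (zero mean)
    m2, m4 = np.mean(f**2), np.mean(f**4)
    return m4 - 3.0 * m2**2                                     # 4th cumulant of f

for N in [5, 10, 20, 40, 80]:
    print(N, fourth_cumulant(N))   # expected to shrink roughly like 1/N
```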
}, { "heading": "B FIRST ORDER CORRECTION TO POSTERIOR MEAN AND VARIANCE", "text": "B.1 POSTERIOR MEAN\nThe posterior mean with the leading FWC action is given by 〈f (x∗)〉 = ∫ Dfe−S[f ]f (x∗)∫ Dfe−S[f ] +O(1/N2) (B.1)\nwhere\nS[f ] = SGP[f ] + SData[f ] + SU [f ]; SData[f ] = 1\n2σ2 n∑ α=1 (f (xα)− yα)2 (B.2)\nwhere the O(1/N2) implies that we only treat the first order Taylor expansion of S[f ], and where SGP[f ], SU [f ] are as in the main text Eq. 8. The general strategy is to bring the path integral ∫ Df to the front, so that we will get just correlation functions w.r.t. the Gaussian theory (including the data term SData[f ]) 〈· · · 〉0, namely the well known results (Rasmussen & Williams, 2005) for\nf̄GP(x∗) = 〈f(x∗)〉0 and ΣGP(x∗) = 〈 (δf(x∗)) 2 〉 0 , and then finally perform the integrals over input space. Expanding both the numerator and the denominator of Eq. B.1, the leading finite width correction for the posterior mean reads\nf̄U (x∗) = 1\n4!\n(∫ dµ1:4U (x1, x2, x3, x4) 〈f (x∗)H [f ]〉0 − 〈f (x∗)〉0 ∫ dµ1:4U (x1, x2, x3, x4) 〈H [f ]〉0 ) (B.3)\nThis, as standard in field theory, amounts to omitting all terms corresponding to bubble diagrams, namely we keep only terms with a factor of 〈f (x∗) f (x′α)〉0 and ignore terms with a factor of 〈f (x∗)〉0 , since these will cancel out. This is a standard result in perturbative field theory (see e.g. Zee (2003)).\nWe now write down the contributions of the quartic, quadratic and constant terms in H[f ]:\n1. For the quartic term in H [f ], we have\n〈f (x∗) f (x′1) f (x′2) f (x′3) f (x′4)〉0 − 〈f (x∗)〉0 〈f (x ′ 1) f (x ′ 2) f (x ′ 3) f (x ′ 4)〉0 = Σ (x∗, x ′ α) Σ ( x′β , x ′ γ ) f̄ (x′δ) [12] + Σ (x∗, x ′ α) f̄ ( x′β ) f̄ ( x′γ ) f̄ (x′δ) [4] (B.4)\nWe dub these terms by f̄ΣΣ∗ and f̄ f̄ f̄Σ∗ to be referenced shortly. We mention here that they are the source of the linear and cubic terms in the target y appearing in Eq. 11 in the main text.\n2. For the quadratic term in H [f ], we have〈 f (x∗) f ( x′µ ) f (x′ν) 〉 0 − 〈f (x∗)〉0 〈 f ( x′µ ) f (x′ν) 〉 0 = Σ ( x∗, x ′ µ ) f̄ (x′ν) [2] (B.5)\nwe note in passing that these cancel out exactly together with similar but opposite sign terms/diagrams in the quartic contribution, which is a reflection of measure invariance. This is elaborated on in §B.3. 3. For the constant terms in H [f ], we will be left only with bubble diagram terms ∝∫ Df f (x∗) which will cancel out in the leading order of 1/N .\nB.2 POSTERIOR VARIANCE\nThe posterior variance is given by\nΣ(x∗) = 〈f (x∗) f (x∗)〉 − f̄2\n= 〈f (x∗) f (x∗)〉0 + 〈f (x∗) f (x∗)〉U − f̄ 2 GP − 2f̄GPf̄U +O(1/N2) = ΣGP(x∗) + 〈f (x∗) f (x∗)〉U − 2f̄GPf̄U +O(1/N 2)\n(B.6)\nFollowing similar steps as for the posterior mean, the leading finite width correction for the posterior second moment at x∗ reads\n〈f (x∗) f (x∗)〉U = 1\n4!\n(∫ dµ1:4U (x1, x2, x3, x4) 〈f (x∗) f (x∗)H [f ]〉0 − 〈f (x∗) f (x∗)〉0 ∫ dµ1:4U (x1, x2, x3, x4) 〈H [f ]〉0 ) (B.7)\nAs for the posterior mean, the constant terms in H[f ] cancel out and the contributions of the quartic and quadratic terms are\nquartic terms = Σ∗αΣ∗β f̄γ f̄δ [12] + Σ∗αΣ∗βΣγδ [12] (B.8)\nand quadratic terms = Σ∗µΣ∗ν [2] (B.9)\nB.3 MEASURE INVARIANCE OF THE RESULT\nThe expressions derived above may seem formidable, since they contain many terms and involve integrals over input space which seemingly depend on the measure µ(x). Here we show how they may in fact be simplified to the compact expressions in the main text Eq. 
11 which involve only discrete sums over the training set and no integrals, and are thus manifestly measure-invariant.\nFor simplicity, we show here the derivation for the FWC of the mean f̄U (x∗), and a similar derivation can be done for ΣU (x∗). In the following, we carry out the x integrals, by plugging in the expressions from Eq. 6 and coupling them to U . As in the main text, we use the Einstein summation notation, i.e. repeated indices are summed over the training set. The contribution of the quadratic terms is\nAα1,∗K̃ −1 α1β1 yβ1 −Aα1α2K̃−1α1β1K̃ −1 α2β2 yβ1Kβ2,∗ (B.10)\nwhere we defined\nA (x3, x4) := ∫∫ dµ(x1)dµ(x2)U (x1, x2, x3, x4)K −1 (x1, x2) (B.11)\nFortunately, this seemingly measure-dependent expression will cancel out with one of the terms coming from the f̄ΣΣ∗ contribution of the quartic terms in H[f ]. This is not a coincidence and is a general feature of the Hermite polynomials appearing in the Edgeworth series, thus for any order in 1/N in the Edgeworth series we will always be left only with measure invariant terms. Collecting all terms that survive we have\n1\n4!\n{ 4Ũ∗α1α2α3K̃ −1 α1β1 K̃−1α2β2K̃ −1 α3β3 yβ1yβ2yβ3 − 12Ũ∗α1α2α3K̃ −1 α2β2 K̃−1α1β1yβ1 } (B.12)\nwhere we defined Ũ∗α1α2α3 := U ∗ α1α2α3 − Uα1α2α3α4K̃ −1 α4β4\nK∗β4 (B.13) This is a more explicit form of the result reported in the main text, Eq. 11." }, { "heading": "C FINITE WIDTH CORRECTIONS FOR MORE THAN ONE HIDDEN LAYER", "text": "For simplicity, consider a fully connected network with two hidden layers both of width N , and no biases, thus the pre-activations h (x) and output z (x) are given by\nh (x) = σw2√ N W (2)φ ( σw1√ d W (1)x ) z (x) =\nσa√ N aTφ\n( h(2) (x) ) (C.1) We want to find the 2nd and 4th cumulants of z (x). Recall that we found that the leading order Edgeworth expansion for the functional distribution of h is\nPK,U [h] ∝ e− 1 2h(x ′ 1)K −1(x′1,x ′ 2)h(x\n′ 2) ( 1 + 1\nN U (x′1, x ′ 2, x ′ 3, x ′ 4)H [h;x ′ 1, x ′ 2, x ′ 3, x ′ 4]\n) (C.2)\nwhere K−1 (x′1, x ′ 2) and U (x ′ 1, x ′ 2, x ′ 3, x ′ 4) are known from the previous layer. So we are looking for two maps: Kφ (K,U) (x, x′) = 〈φ (h (x))φ (h (x′))〉PK,U [h]\nUφ (K,U) (x1, x2, x3, x4) = 〈φ (h (x1))φ (h (x2))φ (h (x3))φ (h (x4))〉PK,U [h] (C.3)\nso that the mapping between the first two cumulants K and U of two consequent layers is (assuming no biases)\nK(`+1) (x, x′)\nσ2 w(`+1)\n= Kφ ( K(`), U (`) ) (x, x′)\nU (`+1) (x1, x2, x3, x4)\nσ4 w(`+1)\n= Uφ ( K(`), U (`) ) (x1, x2, x3, x4)\n−Kφ ( K(`), U (`) ) (xα1 , xα2)Kφ ( K(`), U (`) ) (xα3 , xα4) [3]\n(C.4)\nwhere the starting point is the first layer (N (0) ≡ d)\nK(1) (x, x′) = σ2 w(1)\nN (0) x · x′ U (1) (x1, x2, x3, x4) = 0 (C.5)\nThe important point to note, is that these functional integrals can be reduced to ordinary finite dimensional integrals. For example, for the second layer, denote\nh := ( h1 h2 ) K(1) = ( K(1) (x1, x1) K (1) (x1, x2) K(1) (x1, x2) K (1) (x2, x2) ) (C.6)\nwe find for K(2) K(2) (x1, x2)\nσ2 w(2)\n= ∫ dhe − 12h TK−1 (1) h φ (h1)φ (h2) (C.7)\nand for U (2) we denote\nh := h1h2h3 h4 K(1) = K(1) (x1, x1) K (1) (x1, x2) K (1) (x1, x3) K (1) (x1, x4) K(1) (x1, x2) K (1) (x2, x2) K (1) (x2, x3) K (1) (x2, x4) K(1) (x1, x3) K (1) (x2, x3) K (1) (x3, x3) K (1) (x3, x4) K(1) (x1, x4) K (1) (x2, x4) K (1) (x3, x4) K (1) (x4, x4) (C.8)\nso that Uφ ( K(1), U (1) ) (x1, x2, x3, x4) = ∫ dhe − 12h TK−1 (1) h φ (h1)φ (h2)φ (h3)φ (h4) (C.9)\nThis iterative process can be repeated for an arbitrary number of layers." 
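As a concrete illustration of these maps, the Gaussian integrals in Eqs. C.7 and C.9 can be estimated by Monte Carlo. The sketch below does this for a hypothetical set of four inputs, ReLU activation, and weight variances set to one; it uses U^(1) = 0 as in Eq. C.5, so the connected 4-point function computed at the end is the quantity entering Eq. C.4 for the second layer. All sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
X = rng.standard_normal((4, d))
X *= np.sqrt(d) / np.linalg.norm(X, axis=1, keepdims=True)   # inputs on S^{d-1}(sqrt(d))
K1 = X @ X.T / d                                             # first-layer (linear) kernel

# Sample the pre-activations jointly, h ~ N(0, K1), and average.
h = rng.multivariate_normal(np.zeros(4), K1, size=500_000)
phi = np.maximum(h, 0.0)                                     # ReLU

K2 = phi.T @ phi / phi.shape[0]                              # estimate of K_phi (4x4)
mu4 = np.mean(phi[:, 0] * phi[:, 1] * phi[:, 2] * phi[:, 3]) # estimate of U_phi
# connected 4-point function, i.e. the bracketed combination in Eq. C.4:
U2 = mu4 - (K2[0, 1] * K2[2, 3] + K2[0, 2] * K2[1, 3] + K2[0, 3] * K2[1, 2])
print(K2)
print(mu4, U2)
```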
}, { "heading": "D FOURTH CUMULANT FOR THRESHOLD POWER-LAW ACTIVATION FUNCTIONS", "text": "D.1 FOURTH CUMULANT FOR RELU ACTIVATION FUNCTION\nThe U ’s appearing in our FWC results can be derived for several activations functions, and in our numerical experiments we use a quadratic activation φ(z) = z2 and ReLU. Here we give the result for ReLU, which is similar for any other threshold power law activation (see derivation in App. D.2), and give the result for quadratic activation in App. E. For simplicity, in this section we focus on the case of a 2-layer FCN with no biases, input dimension d and N neurons in the hidden layer, such that φiα := φ(w\n(i) · xα) is the activation at the ith hidden unit with input xα sampled with a uniform measure from Sd−1( √ d), where w(i) is a vector of weights of the first layer. This can be generalized to the more realistic settings of deeper nets and un-normalized inputs, where in the former the linear kernel L is replaced by the kernel of the layer preceding the output, and the latter amounts to introducing some scaling factors.\nFor φ = ReLU, (Cho & Saul, 2009) give a closed form expression for the kernel which corresponds to the GP. Here we find U corresponding to the leading FWC by first finding the fourth moment of the hidden layer µ4 := 〈φ1φ2φ3φ4〉 (see Eq. 9), taking for simplicity ς2w = 1\nµ4 =\n√ det(L−1)\n(2π) 2\n∞∫ 0 dze− 1 2z TL−1zz1z2z3z4 (D.1)\nwhere L−1 above corresponds to the matrix inverse of the 4 × 4 matrix with elements Lαβ = (xα · xβ)/d which is the kernel of the previous layer (the linear kernel in the 2-layer case) evaluated on two random points. In App. D.2 we follow the derivation in Moran (1948), which yields (with a slight modification noted therein) the following series in the off-diagonal elements of the matrix L\nµ4 = ∞∑ `,m,n,p,q,r=0 A`mnpqrL ` 12L m 13L n 14L p 23L q 24L r 34 (D.2)\nwhere the coefficients A`mnpqr are\n(−)`+m+n+p+q+r G`+m+nG`+p+qGm+p+rGn+q+r `!m!n!p!q!r!\n(D.3)\nFor ReLU activation, these G’s read\nGReLUs = 1√ 2π s = 0 −i 2 s = 1\n0 s ≥ 3 and odd (−)k(2k)!√\n2π2kk! s = 2k + 2 k = 0, 1, 2, ...\n(D.4)\nand similar expressions can be derived for other threshold power-law activations of the form φ(z) = Θ(z)zν . The series Eq. D.2 is expected to converge for sufficiently large input dimension d since the overlap between random normalized inputs scales as O(1/ √ d) and consequently L(x, x′) ∼ O(1/ √ d) for two random points from the data sets. However, when we sum over Uα1...α4 we also have terms with repeating indices and so Lαβ’s are equal to 1. The above Taylor expansion diverges whenever the 4× 4 matrix Lαβ − δαβ has eigenvalues larger than 1. Notably this divergence does not reflect a true divergence of U , but rather the failure of representing it using the above expansion. Therefore at large n, one can opt to neglect elements of U with repeating indices, since there are much fewer of these. 
Alternatively this can be dealt with by a re-parameterization of the z’s leading to a similar but slightly more involved Taylor series.\nD.2 DERIVATION OF THE PREVIOUS SUBSECTION\nIn this section we derive the expression for the fourth moment 〈f1f2f3f4〉 of a two-layer fully connected network with threshold-power law activations with exponent ν: φ(z) = Θ(z)zν ; ν = 0 corresponds to a step function, ν = 1 corresponds to ReLU, ν = 2 corresponds to ReQU (rectified quadratic unit) and so forth.\nWhen the inputs are normalized to lie on the hypersphere, the matrix L is\nL = 1 L12 L13 L14L12 1 L23 L24L13 L23 1 L34 L14 L24 L34 1 (D.5) where the off diagonal elements here have Lαβ = O ( 1/ √ d )\n. We follow the derivation in Ref. Moran (1948), which computes the probability mass of the positive orthant for a quadrivariate Gaussian distribution with covariance matrix L:\nP+ =\n√ det(L−1)\n(2π) 2\n∞∫ 0 dze− 1 2z TL−1z (D.6)\nThe characteristic function (Fourier transform) of this distribution is\nϕ (t1, t2, t3, t4)\n= exp ( −1\n2 tTLt\n)\n= exp ( −1\n2 4∑ α=1 t2α\n) exp −∑ α<β Lαβtαtβ = exp ( −1\n2 4∑ α=1 t2α ) ∞∑ `,m,n,p,q,r=0 (−)`+m+n+p+q+r L`12Lm13Ln14L p 23L q 24L r 34 `!m!n!p!q!r! t`+m+n1 t `+p+q 2 t m+p+r 3 t n+q+r 4\n(D.7)\nPerforming an inverse Fourier transform, we may now write the positive orthant probability as\nP+ = 1\n(2π) 4 ∫ R4+ dz ∫ R4 dt ϕ (t1, t2, t3, t4) e −i ∑4 α=1 zαtα\n= ∞∑ `,m,n,p,q,r=0 (−)`+m+n+p+q+r L`12Lm13Ln14L p 23L q 24L r 34 `!m!n!p!q!r! × · · ·\n× 1 (2π) 4 ∫ R4+ dz ∫ R4 dt e ∑4 α=1(− 12 t 2 α−izαtα)t`+m+n1 t `+p+q 2 t m+p+r 3 t n+q+r 4\n= ∞∑ `,m,n,p,q,r=0 A`mnpqrL ` 12L m 13L n 14L p 23L q 24L r 34\n(D.8)\nwhere the coefficients A`mnpqr are\nA`mnpqr = (−)`+m+n+p+q+r G`+m+nG`+p+qGm+p+rGn+q+r\n`!m!n!p!q!r! (D.9)\nand the one dimensional integral is\nG(ν=0)s = 1\n2π ∞∫ 0 dz ∞∫ −∞ ts exp ( −1 2 t2 − itz ) dt (D.10)\nWe can evaluate the integral over t to get\nG(ν=0)s = 1\n(−i)s (2π)1/2 ∞∫ 0 ( d dz )s e−z 2/2dz (D.11)\nand performing the integral over z yields\nG(ν=0)s = 1 2 s = 0 0 s even and s ≥ 2 (2k)!\ni(2π)1/22kk! s = 2k + 1 k = 0, 1, 2, ...\n(D.12)\nWe can now obtain the result for any integer ν by inserting zν inside the z integral:\nG(ν)s = 1\n2π ∞∫ 0 dz zν ∞∫ −∞ ts exp ( −1 2 t2 − itz ) dt =\n1\n(−i)s (2π)1/2 ∞∫ 0 zν ( d dz )s e−z 2/2dz\n(D.13)\nUsing integration by parts we arrive at the result Eq. D.4 reported in the main text\nGReLUs = G (ν=1) s = 1√ 2π s = 0 −i 2 s = 1\n0 s ≥ 3 and odd (−)k(2k)!√\n2π2kk! s = 2k + 2 k = 0, 1, 2, ...\n(D.14)\nSimilar expressions can be derived for other threshold power-law activations of the form φ(z) = Θ(z)zν for arbitrary integer ν. In a more realistic setting, the inputs xmay not be perfectly normalized, in which case the diagonal elements of L are not unity. It amounts to introducing a scaling factor for each of the four z’s and makes the expressions a little less neat but poses no real obstacle." 
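One way to sanity-check this expansion numerically is sketched below: it implements the coefficients of Eqs. D.9 and D.14, truncates the series of Eq. D.2, and compares the result against a Monte Carlo estimate of the fourth moment. The covariance matrix, truncation order, and sample size are illustrative choices, not taken from the paper; the two printed numbers should agree to within Monte Carlo error.

```python
import numpy as np
from itertools import product
from math import factorial, sqrt, pi

def G_relu(s):
    # Eq. D.14 (nu = 1)
    if s == 0:
        return 1.0 / sqrt(2 * pi)
    if s == 1:
        return -0.5j
    if s % 2 == 1:
        return 0.0
    k = (s - 2) // 2
    return (-1) ** k * factorial(2 * k) / (sqrt(2 * pi) * 2 ** k * factorial(k))

def mu4_series(L, order=4):
    # Truncated version of Eq. D.2 with coefficients from Eq. D.9.
    total = 0.0 + 0.0j
    for l, m, n, p, q, r in product(range(order), repeat=6):
        A = ((-1) ** (l + m + n + p + q + r)
             * G_relu(l + m + n) * G_relu(l + p + q)
             * G_relu(m + p + r) * G_relu(n + q + r)
             / (factorial(l) * factorial(m) * factorial(n)
                * factorial(p) * factorial(q) * factorial(r)))
        total += (A * L[0, 1] ** l * L[0, 2] ** m * L[0, 3] ** n
                    * L[1, 2] ** p * L[1, 3] ** q * L[2, 3] ** r)
    return total.real                       # imaginary parts cancel term by term

rng = np.random.default_rng(2)
L = np.eye(4)
off = 0.1 * rng.standard_normal((4, 4))     # small off-diagonals for fast convergence
L += (off + off.T) / 2
np.fill_diagonal(L, 1.0)

z = rng.multivariate_normal(np.zeros(4), L, size=1_000_000)
mc = np.mean(np.prod(np.maximum(z, 0.0), axis=1))
print(mu4_series(L), mc)
```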
}, { "heading": "E FOURTH CUMULANT FOR QUADRATIC ACTIVATION FUNCTION", "text": "For a two-layer network, we may write U , the 4th cumulant of the output f(x) = ∑N i=1 aiφ(w T i x), with ai ∼ N (0, ς2a/N) and wi ∼ N (0, (ς2w/d)I) for a general activation function φ as\nUα1,α2,α3,α4 = ς4a N ( V(α1,α2),(α3,α4) + V(α1,α3),(α2,α4) + V(α1,α4),(α2,α3) ) (E.1)\nwith V(α1,α2),(α3,α4) = 〈φ α1φα2φα3φα4〉w − 〈φ α1φα2〉w 〈φ α3φα4〉w (E.2)\nFor the case of a quadratic activation function φ(z) = z2 the V ’s read\nV(α1,α2),(α3,α4) = 2 { L11L33 (L24) 2 + L11L44 (L23) 2 + L22L33 (L14) 2 + L22L44 (L13) 2 } +...\n4 { (L13) 2 (L24) 2 + (L14) 2 (L23) 2 } +8 (L11L23L34L24 + L22L34L14L13 + L33L12L14L24 + L44L12L13L23)+...\n16 (L12L13L24L34 + L12L14L23L34 + L13L14L23L24) (E.3)\nwhere the linear kernel from the first layer is L(x, x′) = ς 2 w\nd x · x ′. Notice that we distinguish between\nthe scaled and non-scaled variances:\nσ2a = ς2a N ; σ2w = ς2w d\n(E.4)\nThese formulae were used when comparing the outputs of the empirical two-layer network with our FWC theory Eq. 11. One can generalize them straightforwardly to a network with M layers by recursively computing K(M−1) the kernel in the (M − 1)th layer (see e.g. Cho & Saul (2009)), and replacing L with K(M−1)." }, { "heading": "F AUTO-CORRELATION TIME AND ERGODICITY", "text": "As mentioned in the main text, the network outputs f̄DNN(x∗) are a result of averaging across many realizations (seeds) of initial conditions and the noisy training dynamics, and across time (epochs) after the training loss levels off. Our NNSP correspondence relies on the fact that our stochastic training dynamics are ergodic, namely that averages across time equal ensemble averages. Actually, for our purposes it suffices that the dynamics are ergodic in the mean, namely that the time-average estimate of the mean obtained from a single sample realization of the process converges in both the mean and in the mean-square sense to the ensemble mean:\nlim T̃→∞\nE [〈 fDNN(x∗; t) 〉 T̃ − µ(x∗) ] = 0\nlim T̃→∞\nE [(〈 fDNN(x∗; t) 〉 T̃ − µ(x∗) )2] = 0\n(F.1)\nwhere µ(x∗) is the ensemble mean on the test point x∗ and the time-average estimate of the mean over a time window T̃ is\n〈 fDNN(x∗; t) 〉 T̃ := 1\nT̃ ∫ T̃ 0 fDNN(x∗; t)dt ≈ 1 T̃ tj=T̃∑ tj=0 fDNN(x∗; tj) (F.2)\nThis is hard to prove rigorously but we can do a numerical consistency check using the following procedure: Consider the time series of the network output on the test point x∗ for the i’th realization as a row vector and stack these row vectors for all different realizations into a matrix F , such that Fij = fDNNi (x∗; tj). (1) Divide the time series data in the matrix F into non-overlapping sub-matrices, each of dimension nseeds × nepochs. (2) For each of these sub-matrices, find f̂(x∗) i.e. the empirical dynamical average across that time window and across the chosen seeds; (2) Find\nthe empirical variance σ2emp(x∗) across these f̂(x∗); (4) Repeat (1)-(3) for other combinations of nepochs, nseeds. If ergodicity holds, we should expect to see the following relation\nσ2emp(x∗) = σ 2 m\nτ\nnepochsnseeds (F.3)\nwhere τ is the auto-correlation time of the outputs and σ2m is the macroscopic variance. The results of this procedure are shown in Fig. F.1, where we plot on a log-log scale the empirical variance σ2emp vs. the number of epochs nepochs used for time averaging in each set (and using all 500 seeds in this case). 
Performing a linear fit on the average across test points (black x’s in the figure) yields a slope of approximately −1, which is strong evidence for ergodic dynamics.\nFigure F.1: Ergodicity check. Empirical variance σ2emp(x∗) vs. the number of epochs used for time averaging on a (base 10) log-log scale, with dt = 0.003 and N = 200. The colored circles represent different test points x∗ and the black x’s are averages across these." }, { "heading": "G NUMERICAL EXPERIMENT DETAILS", "text": "G.1 FCN EXPERIMENT DETAILS\nWe trained a 2-layer FCN on a quadratic target y(x) = xTAx where the x’s are sampled with a uniform measure from the hyper-sphere Sd−1( √ d), with d = 16 and the matrix elements are sampled as Aij ∼ N (0, 1) and fixed for all x’s. For both activation functions, we used a training noise level of σ2 = 0.2, training set of size n = 110 and a weight decay of the first layer γw = 0.05. Notice that for any activation φ, K scales linearly with ς2a = σ 2 aN = (T/γa) ·N , thus in order to keep K constant as we vary N we need to scale the weight decay of the last layer as γa ∼ O(N). This is done in order to keep the prior distribution in accord with the typical values of the target as N varies, so that the comparison is fair.\nWe ran each experiment for 2 ·106 epochs, which includes the time it takes for the training loss to level off, which is usually on the order of 104 epochs. In the main text we showed GP and FWC results for a learning rate of dt = 0.001. Here we report in Fig. G.1 the results using dt ∈ {0.003, 0.001, 0.0005}. For a learning rate of dt = 0.003 and width N ≥ 1000 the dynamics become unstable and strongly oscillate, thus the general trend is broken, as seen in the blue markers in Fig. G.1. The dynamics with the smaller learning rates are stable, and we see that there is a convergence to very similar values up to an expected statistical error.\nFigure G.1: Regression task with fully connected network: (un-normalized) MSE vs. width on log-log scale (base 10) for quadratic activation and different leaning rates. The learning rates dt = 0.001, 0.0005 converge to very similar values (recall this is a log scale), demonstrating that the learning rate is sufficiently small so that the discrete-time dynamics is a good approximation of the continuous-time dynamics. For a learning rate of dt = 0.003 (blue) and width N ≥ 1000 the dynamics become unstable, thus the general trend is broken, so one cannot take the dt to be too large.\nG.2 CNN EXPERIMENT DETAILS AND ADDITIONAL SETTINGS\nThe CNN experiment reported in the main text was carried as follows.\nDataset: In the main text Fig. 3 we used a random sample of 10 train-points and 2000 test points from the CIFAR10 dataset, and in App. H we report results on 1000 train-points and 1000 test points, balanced in terms of labels. To use MSE loss, the ten categorical labels were one-hot encoded into vector of zeros and one.\nArchitecture: we used 6 convolutional layers with ReLU non-linearity, kernel of size 5× 5, stride of 1, no-padding, no-pooling. The number of input channels was 3 for the input layer and C for the subsequent 5 CNN layers. We then vectorized the outputs of the final layer and fed it into an ReLU activated fully-connected layer with 25C outputs, which were fed into a linear layer with 10 outputs corresponding to the ten categories. 
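A sketch of such an architecture in PyTorch is shown below. The 8x8 spatial size of the final feature map is inferred from 32x32 CIFAR-10 inputs and six valid 5x5 convolutions; it is not stated explicitly in the text, and the sketch is illustrative rather than the exact implementation used in the experiments.

```python
import torch.nn as nn

def make_cnn(C: int) -> nn.Sequential:
    # Six valid 5x5 convolutions: spatial size 32 -> 28 -> 24 -> 20 -> 16 -> 12 -> 8.
    layers = [nn.Conv2d(3, C, kernel_size=5), nn.ReLU()]
    for _ in range(5):
        layers += [nn.Conv2d(C, C, kernel_size=5), nn.ReLU()]
    layers += [nn.Flatten(),                       # CNN-VEC readout: no pooling
               nn.Linear(8 * 8 * C, 25 * C), nn.ReLU(),
               nn.Linear(25 * C, 10)]              # ten CIFAR-10 categories
    return nn.Sequential(*layers)

model = make_cnn(C=48)
```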
The loss we used was MSE loss.\nTraining: Training was carried using full-batch SGD (GD) at varying learning-rates around 5 · 10−4, Gaussian white noise was added to the gradients to generate σ2 = 0.2 in the NNGP-correspondence, layer-dependant weight decay and bias decay which implies a (normalized by width) weight variance and bias variance of σ2w = 2 and σ 2 b = 1 respectively, when trained with no-data. During training we saved, every 1000 epochs, the outputs of the CNN on every test point. We note in passing that the standard deviation of the test outputs around their training-time-averaged value was about 0.1 per CNN output. Training was carried for around half a million epochs which enabled us to reach a statistical error of about 2 · 10−4, in estimating the Mean-Squared-Discrepancy between the training-time-averaged CNN outputs and our NNGP predictions. Notably our best agreement between the DNN and GP occurred at 112 channels where the MSE was about 7 · 10−3. Notably the variance of the CNN (the average of its outputs squared) with no data, was about 25.\nStatistics. To train our CNN within the regime of the NNSP correspondence, sufficient training time (namely, epochs) was needed to get estimates of the average outputs f̄E(xα) = f̄(xα) + δfα since\nthe estimators’ fluctuations, δfα, scale as (τ/ttraining)−1/2, where τ is an auto-correlation time scale. Notably, apart from just random noise when estimating the relative MSE between the averaged CNN outputs and the GP, a bias term appears equal to the variance of δfα averaged over all α’s as indeed ntest∑ α=1 (f̄E(xα)−fGP (xα))2 = ntest∑ α=1 (f̄(xα)−fGP (xα))2−2 ntest∑ α=1 (f̄E(xα)−fGP (xα))δfα+ ntest∑ α=1 (δfα) 2 (G.1) In all our experiments this bias was the dominant source of statistical error. One can estimate it roughly given the number of uncorrelated samples taken into f̄E(xα) and correct the estimator. We did not do so in the main text to make the data analysis more transparent. Since the relative MSEs go down to 7 · 10−3 and the fluctuations of the outputs quantified by Σα = (δfα)2 are of the order 0.12, the amount of uncorrelated samples of CNN outputs we require should be much larger than 0.12/(7 · 10−3) ≈ 1.43. To estimate this bias in practice we repeated the experiment with 3-7 different initialization seeds and deduced the bias from the variance of the results. For comparison with NNGP (our DNN − GP plots) the error bars were proportional to the variance of δfα. For comparison with the target, we took much larger error bars equal to the uncertainty in estimating the expected loss from a test set of size 1000. These latter error bars where estimated empirically by measuring the variance across ten smaller test sets of size 100.\nLastly we discarded the initial “burn-in\" epochs, where the network has not yet reached equilibrium. We took this burn-in time to be the time it takes the train-loss to reach within 5% of its stationary value at large times. We estimated the stationary values by waiting until the DNNs train loss remained constant (up to trends much smaller than the fluctuations) for about 5 ·105 epochs. This also coincided well with having more or less stationary test loss.\nLearning rate. To be in the regime of the NNSP correspondence, the learning rate must be taken small enough such that discrepancy resulting from having discretization correction to the continuum Langevin dynamics falls well below those coming from finite-width. 
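A minimal sketch of one such noisy full-batch update is given below. The per-layer weight- and bias-decay constants and the exact noise level that yield sigma^2 = 0.2, sigma_w^2 = 2 and sigma_b^2 = 1 in the correspondence are not reproduced here; the function and its arguments are illustrative, not the authors' implementation.

```python
import torch

def noisy_full_batch_step(model, loss, lr, weight_decay, noise_std):
    # Full-batch gradient descent with weight decay and Gaussian white noise added to
    # the gradients; for Langevin sampling at temperature T one would take
    # noise_std = sqrt(2 * T / lr), so that the parameter noise scale is sqrt(2 * lr * T).
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            grad_noise = torch.randn_like(p) * noise_std
            p -= lr * (p.grad + weight_decay * p + grad_noise)
```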
We find that higher C require lower learning rates, potentially due to the weight decay term being large at large width. In Fig. G.2. we report the relative MSE between the NNGP and CNN at learning rates of 0.002, 0.001, 0.0005 and C = 48 showing good convergence already at 0.001. Following this we used learning rates of 0.0005 for C ≤ 48 and 0.00025 for C > 48, in the main figure.\nFigure G.2: MSE between our CNN with C = 48 and its NNGP as a function of three learning rates.\nComparison with the NNGP. Following Novak et al. (2018), we obtained the Kernel of our CNN. Notably, since we did not have pooling layers this can be done straightforwardly without any approximations. The NNGP predictions were then obtained in a standard manner (Rasmussen & Williams, 2005)." }, { "heading": "H FURTHER NUMERICAL RESULTS ON CNNS", "text": "Here we report two additional numerical results following the CNN experiment we carried (for details see App. G). Fig. H.3b is the same as Fig. H.3a apart from the fact that we subtracted our estimate of the statistical bias of our MSE estimator described in App. G.\n(a) n = 1000 (b) subtract bias of MSE estimator\nFigure H.3: CNNs trained on CIFAR10 in the regime of the NNSP correspondence compared with NNGPs MSE test loss normalized by target variance of a deep CNN (solid green) and its associated NNGP (dashed green) along with the MSE between the NNGP’s predictions and CNN outputs normalized by the NNGP’s MSE test loss (solid blue, and on a different scale). We used balanced training and test sets of size 1000 each. For the largest number of channels we reached, the slope of the discrepancy between the CNN’s GP and the trained DNN on the log-log scale was−1.77, placing us close to the perturbartive regime where a slope of −2.0 is expected. Error bars here reflect statistical errors related only to output averaging and not due to the random choice of a test-set. The performance deteriorates at large N = #Channels as the NNSP associated with the CNN approaches an NNGP.\nConcerning the experiment with 10 training points. Here we used the same CNN as in the previous experiment. The noise level was again the same and led to an effective σ2 = 0.1 for the GP. The weight decay on the biases was taken to be ten times larger leading to σ2b = 0.1 instead of σb = 1.0 as before. For C ≤ 80 we used a learning rate of dt = 5 · 10−5 after verifying that reducing it further had no appreciable effect. For C ≤ 80 we used dt = 2.5 ·10−5. For c ≤ 80 we used 6 ·10+5 training epochs and we averaged over 4 different initialization seeds. For C > 80 we used between 10− 16 different initialization seeds. We reduced the aforementioned statistical bias in estimating the MSE from all our MSEs. This bias, equal to the variance of the averaged outputs, was estimated based on our different seeds. The error bars equal this estimated variance which was the dominant source of error." }, { "heading": "I THE FOURTH CUMULANT CAN DIFFERENTIATE CNNS FROM LCNS", "text": "Here we show that while the NNGP kernel K of a CNN without pooling cannot distinguish a CNN from an LCN, the fourth cumulant, U , can. For simplicity let us consider the simplest CNN without pooling consisting of the following parts: (1) A 1D image with one color/channel (Xi) as input i ∈ {0, . . . , L − 1}; (2) A single convolutional layer with some activation φ acting with stride 1 and no-padding using the conv-kernel T cx where c ∈ {1, . . . , C} is a channel number index and x ∈ {0, . . . , 2l} is the relative position in the image. 
Notably, in an LCN this conv-kernel will receive an additional dependence on x̃, the location on Xi on which the kernel acts. (3) A vectorizing operation taking the C outputs of each convolutional around a point x̃ ∈ {l, . . . , L − l}, into a single index y ∈ {0, . . . , C(L − 2l)}. (4) A linear fully connected layer with weights W ocx̃ where o ∈ {0, . . . ,#outputs} are the output indices. Consider first the NNGP of such a random DNN with weights chosen according to some iid Gaussian distribution P0(w), with w including both W ocx̃ and T c x . Denoting by z\no(x) the o’th output of the CNN, for an input x we have (where we denote in this section 〈· · · 〉 := 〈· · · 〉P0(w))\nKoo ′ (x, x′) ≡ 〈zo(x)zo ′ (x′)〉 = δoo′ ∑ c,c′,x̃,x̃′ 〈W ocx̃W o ′ c′x̃′〉〈φ(T cx(x̃)Xx+x̃−l)φ(T c ′ x (x̃ ′)Xx+x̃′−l)〉\n(I.1)\nThe NNGP kernel of an LCN is the same as that of a CNN. This stems from the fact that 〈W ocx̃W oc′x̃′〉 yields a Kronecker delta function on the x̃, x̃′ indices. Consequently, the difference between LCN and CNN, which amounts to whether T cx(x̃) is the same (CNN) or a different (LCN) random variable than T cx′ 6=x(x̃ ′), becomes irrelevant as the these two are never averaged together.\nFor simplicity, we turn to the fourth cumulant of the same output, given by\n〈zo(x1) · · · zo(x4)〉−〈zo(xα)zo(xβ)〉〈zo(xγ)zo(xδ)〉[3] = 〈zo(x1) · · · zo(x4)〉−K(xα, xβ)K(xγ , xδ)[3] (I.2) with the second term on the LHS implying all pair-wise averages of zo(x1)..zo(x4). Note that the first term on the LHS is not directly related to the kernel, thus it has a chance of differentiating a CNN from an LCN. Explicitly, it reads∑\nc1..c4x̃1..x̃4\n〈W oc1x̃1 · · ·W o c4x̃′4 〉〈φ(T c1x1 (x̃1)Xx1+x̃1−l) · · ·φ(T c4 x4 (x̃4)Xx4+x̃′4−l)〉 (I.3)\nThe average over the four W ’s yields non-zero terms of the type W ocx̃W o cx̃W o c′x̃′W o c′x̃′ with either x̃ = x̃′ (type 1), x̃ 6= x̃′ and c 6= c′ (type 2), or x̃ 6= x̃′ and c = c′ (type 3). The type 1 contribution cannot differentiate an LCN form a CNN since, as in the NNGP case, they always involve only one x̃. The type 2 contribution also cannot differentiate since it yields∑ c 6=c′;x̃ 6=x̃′ 〈W ocx̃W ocx̃〉〈W oc′x̃′W oc′x̃′〉〈φ(T cx(x̃)Xx+x̃−l)φ(T cx(x̃)Xx+x̃−l)φ(T c ′ x′ (x̃ ′)Xx′+x̃′−l)φ(T c′ x′ (x̃ ′)Xx′+x̃′−l)〉 (I.4) Examining the average involving the four T ’s, one finds that since T cx(x̃) is uncorrelated with T c′ x′ (x̃ ′) for both LCNs and CNNs, it splits into∑ c 6=c′;x̃ 6=x̃′ 〈W ocx̃W ocx̃〉〈W oc′x̃′W oc′x̃′〉〈φ(T cx(x̃)Xx+x̃−l)φ(T cx(x̃)Xx+x̃−l)〉〈φ(T c ′ x′ (x̃ ′)Xx′+x̃′−l)φ(T c′ x′ (x̃ ′)Xx′+x̃′−l)〉 (I.5) where as in the NNGP, two T ’s with different x̃ are never averaged together and we only get a contribution proportional to products of two K’s. We note in passing that these type 2 terms yield a contribution that largely cancels that ofK(xα, xβ)K(xγ , xδ)[3], apart from a “diagonal\" contribution (x̃ = x̃′).\nWe turn our attention to the type 3 term given by∑ c;x̃ 6=x̃′ 〈W ocx̃W ocx̃〉〈W ocx̃′W ocx̃′〉〈φ(T cx(x̃)Xx+x̃−l)φ(T cx(x̃)Xx+x̃−l)φ(T cx′(x̃′)Xx′+x̃′−l)φ(T cx′(x̃′)Xx′+x̃′−l)〉 (I.6) Examining the average involving the four T ’s, one now finds a sharp difference between an LCN and a CNN. For an LCN, this average would split into a product of two K’s since T cx(x̃) would be uncorrelated with T cx(x̃ ′). For a CNN however, T cx(x̃) is the same random variable as T c x(x̃ ′) and therefore the average does not split giving rise to a distinct contribution that differentiates a CNN from an LCN. 
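The sketch below illustrates this argument numerically on a toy 1-D, single-output model: over random weight draws, the 2-point function of the output is estimated to be the same for a shared conv-kernel (CNN) and a per-location kernel (LCN), while the connected 4-point function differs. All sizes and weight scalings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
L_in, l, C, n_draws = 8, 2, 2, 200_000
width, n_pos = 2 * l + 1, L_in - 2 * l
X = rng.standard_normal((4, L_in))                                       # four fixed inputs
patches = np.stack([X[:, p:p + width] for p in range(n_pos)], axis=1)    # (4, n_pos, width)

def output_samples(shared_kernel):
    W = rng.standard_normal((n_draws, C, n_pos)) / np.sqrt(C * n_pos)    # readout weights
    if shared_kernel:
        T = rng.standard_normal((n_draws, C, 1, width)) / np.sqrt(width) # CNN: shared over positions
        T = np.broadcast_to(T, (n_draws, C, n_pos, width))
    else:
        T = rng.standard_normal((n_draws, C, n_pos, width)) / np.sqrt(width)  # LCN: per position
    act = np.maximum(np.einsum('dcpk,ipk->dicp', T, patches), 0.0)       # ReLU activations
    return np.einsum('dcp,dicp->di', W, act)                             # outputs, shape (n_draws, 4)

for name, shared in [('CNN', True), ('LCN', False)]:
    z = output_samples(shared)
    K = z.T @ z / n_draws                                                # 2-point function
    m4 = np.mean(z[:, 0] * z[:, 1] * z[:, 2] * z[:, 3])
    u4 = m4 - (K[0, 1] * K[2, 3] + K[0, 2] * K[1, 3] + K[0, 3] * K[1, 2])
    print(name, 'K01 =', round(K[0, 1], 4), 'connected 4-point =', round(u4, 5))
# Expectation: the 2-point entries agree between the two cases up to Monte Carlo error,
# while the connected 4-point does not.
```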
Notably, it is small by a factor of 1/C owing to the fact that it contains a redundant summation over one c-index while the averages over the four W ’s contain a 1/C2 factor when properly normalized." }, { "heading": "J CORRECTIONS TO EK", "text": "Here we derive finite-N correction to the Equivalent Kernel result. Using the tools developed by Cohen et al. (2019), the replicated partition function relevant for estimating the predictions of the network (f(x∗)) averaged (〈· · · 〉n) over all draws of datasets of size n′ with n′ taken from a Poisson distribution with mean n is given by\nZn =\n∫ Dfe−SGP[f ]− n 2σ2 ∫ dµx(f(x)−y(x))2(1 + SU [f ]) +O(1/N2) (J.1)\nwith SGP[f ] and SU [f ] given in Eq. 8. We comment that the above expression is only valid for obtaining the leading order asymptotics in n. Enabling generic n requires introducing replicas explicitly (see Cohen et al. (2019)). Notably, the above expression coincides with that used for\na finite dataset, with two main differences: all the sums over the training set have been replaced by integrals with respect to the measure, µx, from which data points are drawn. Furthermore σ2 is now accompanied by n. Following this, all the diagrammatic and combinatorial aspects shown in the derivation for a finite dataset hold here as well. For instance, let us examine a specific contribution coming from the quartic term in H[f ]: Ux1..x4K\n−1 x1x′1 · · ·K−1x4x′4f(x ′ 1) · · · f(x′4), and\nfrom the diagram/Wick-contraction where we take the expectation value of 3 out of the 4 f ’s in this quartic term, to arrive at an expression which is ultimately cubic in the targets y\nUx1,x2,x3,x4K −1 x1x′1 〈f(x′1)〉∞K−1x2x′2〈f(x ′ 2)〉∞K−1x3x′3〈f(x ′ 3)〉∞K−1x4x′4Σ∞(x ′ 4, x∗) (J.2)\nwhere we recall that 〈f(x)〉∞ = Kxx′K̃−1x′x′′y(x′′) and Σ∞(x1, x2) = Kx1,x2 − Kx1,x′K̃ −1 x′,x′′Kx′′,x2 being the posterior covariance in the EK limit, where K̃xx′f(x ′) = Kxx′f(x ′)+ (σ2/n)f(x). Using the fact that K−1xx′Kx′x′′ gives a delta function w.r.t. the measure, the integrals against K−1xαx′α can be easily carried out yielding(\nUx1,x2,x3,x∗ − Ux1,x2,x3,x4K̃−1x4,x′4Kx′4,x∗ ) K̃−1x1,x′1 K̃−1x2,x′2 K̃−1x3,x′3 y(x′1)y(x ′ 2)y(x ′ 3) (J.3)\nIntroducing the discrepancy operator δ̃xx′′ := δxx′′ −Kxx′K̃−1x′x′′ = σ2 n K̃ −1 xx′′ , we can write a more compact expression( n σ2 )3 δ̃x∗,x4Ux1,x2,x3,x4 δ̃x1,x′1 δ̃x2,x′2 δ̃x3,x′3y(x ′ 1)(x ′ 2)y(x ′ 3) (J.4)\nThis with the additional 1/4! factor times the combinatorial factor of 4 related to choosing the \"partner\" of f(x∗) in the Wick contraction, yields an overall factor of 1/6 as in the main text, Eq. 14. The other term therein, which is linear in y, is a result of following similar steps with the f̄ΣΣ∗ contributions that do not get canceled by the quadratic part in H[f ]." } ]
2020
null
SP:95899f38fd0f1789510e67178b587c08a14203f5
[ "This paper proposes adding regularization terms to encourage diversity of the layer outputs in order to improve the generalization performance. The proposed idea is an extension of Cogswell's work with different regularization terms. In addition, the authors performed detailed generalization analysis based on the Rademacher complexity. The appearance of the term related to the layer output diversity in the generalization bound provides theoretical support for the proposed idea." ]
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed to solve the problem at hand and forward it to the next layers. In the standard form, a neural network is trained with gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. Thus at each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional ’between-layer’ feedback with additional ’within-layer’ feedback to encourage diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer’s overall diversity. By penalizing similarities and promoting diversity, we encourage each neuron to learn a distinctive representation and, thus, to enrich the data representation learned within the layer and to increase the total capacity of the model. We theoretically study how the within-layer activation diversity affects the generalization performance of a neural network in a supervised context and we prove that increasing the diversity of hidden activations reduces the estimation error. In addition to the theoretical guarantees, we present an empirical study confirming that the proposed approach enhances the performance of neural networks.
[]
[ { "authors": [ "Madhu S Advani", "Andrew M Saxe", "Haim Sompolinsky" ], "title": "High-dimensional dynamics of generalization error in neural networks", "venue": "Neural Networks,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Wei Hu", "Yuping Luo" ], "title": "Implicit regularization in deep matrix factorization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yebo Bao", "Hui Jiang", "Lirong Dai", "Cong Liu" ], "title": "Incoherent training of deep neural networks to decorrelate bottleneck features for speech recognition", "venue": "In International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Andrew R Barron" ], "title": "Universal approximation bounds for superpositions of a sigmoidal function", "venue": "IEEE Transactions on Information theory,", "year": 1993 }, { "authors": [ "Andrew R Barron" ], "title": "Approximation and estimation bounds for artificial neural networks", "venue": "Machine Learning,", "year": 1994 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Peter L Bartlett", "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Alberto Bietti", "Grégoire Mialon", "Dexiong Chen", "Julien Mairal" ], "title": "A kernel perspective for regularizing deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Erdem Bıyık", "Kenneth Wang", "Nima Anari", "Dorsa Sadigh" ], "title": "Batch active learning using determinantal point processes", "venue": "arXiv preprint arXiv:1906.07975,", "year": 2019 }, { "authors": [ "Michael Cogswell", "Faruk Ahmed", "Ross B. Girshick", "Larry Zitnick", "Dhruv Batra" ], "title": "Reducing overfitting in deep networks by decorrelating representations", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Michal Derezinski", "Daniele Calandriello", "Michal Valko" ], "title": "Exact sampling of determinantal point processes with sublinear time preprocessing", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. 
ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Lu Gan", "Diana Nurbakova", "Léa Laporte", "Sylvie Calabretto" ], "title": "Enhancing recommendation diversity using determinantal point processes on knowledge graphs", "venue": "In Conference on Research and Development in Information Retrieval,", "year": 2020 }, { "authors": [ "Mike Gartrell", "Victor-Emmanuel Brunel", "Elvis Dohmatob", "Syrine Krichene" ], "title": "Learning nonsymmetric determinantal point processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Izhak Golan", "Ran El-Yaniv" ], "title": "Deep anomaly detection using geometric transformations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Noah Golowich", "Alexander Rakhlin", "Ohad Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": "In Conference On Learning Theory,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep learning", "venue": null, "year": 2016 }, { "authors": [ "Steve Hanneke" ], "title": "The optimal sample complexity of PAC learning", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension bounds for piecewise linear neural networks", "venue": "In Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Yang He", "Ping Liu", "Ziwei Wang", "Zhilan Hu", "Yi Yang" ], "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George E Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N Sainath" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "Signal processing magazine,", "year": 2012 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Kenji Kawaguchi", "Leslie Pack Kaelbling", "Yoshua Bengio" ], "title": "Generalization in deep learning", "venue": "arXiv preprint arXiv:1710.05468,", "year": 2017 }, { "authors": [ "Yusuke Kondo", "Koichiro Yamauchi" ], "title": "A dynamic pruning strategy for incremental learning on a budget", "venue": "In International Conference on Neural Information Processing,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Jan Kukačka", "Vladimir Golkov", "Daniel Cremers" ], "title": "Regularization for deep learning: A taxonomy", "venue": "arXiv preprint arXiv:1710.10686,", "year": 2017 }, { "authors": [ "Alex Kulesza", "Ben Taskar" ], "title": "Structured determinantal point processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Alex 
Kulesza", "Ben Taskar" ], "title": "Determinantal point processes for machine learning", "venue": "arXiv preprint arXiv:1207.6083,", "year": 2012 }, { "authors": [ "James T Kwok", "Ryan P Adams" ], "title": "Priors for diversity in generative latent variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Hae Beom Lee", "Taewook Nam", "Eunho Yang", "Sung Ju Hwang" ], "title": "Meta dropout: Learning to perturb latent features for generalization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Seunghyun Lee", "Byeongho Heo", "Jung-Woo Ha", "Byung Cheol Song" ], "title": "Filter pruning and reinitialization via latent space clustering", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "Nan Li", "Yang Yu", "Zhi-Hua Zhou" ], "title": "Diversity regularized ensemble pruning", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2012 }, { "authors": [ "Zhe Li", "Boqing Gong", "Tianbao Yang" ], "title": "Improved dropout for shallow and deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Jonathan Malkin", "Jeff Bilmes" ], "title": "Ratio semi-definite classifiers", "venue": "In International Conference on Acoustics, Speech and Signal Processing,", "year": 2008 }, { "authors": [ "Jonathan Malkin", "Jeff Bilmes" ], "title": "Multi-layer ratio semi-definite classifiers", "venue": "In International Conference on Acoustics, Speech and Signal Processing,", "year": 2009 }, { "authors": [ "Vaishnavh Nagarajan", "J Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "The role of over-parametrization in generalization of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tomaso Poggio", "Kenji Kawaguchi", "Qianli Liao", "Brando Miranda", "Lorenzo Rosasco", "Xavier Boix", "Jack Hidary", "Hrushikesh Mhaskar" ], "title": "Theory of deep learning III: explaining the non-overfitting puzzle", "venue": "arXiv preprint arXiv:1801.00173,", "year": 2017 }, { "authors": [ "Pravendra Singh", "Vinay Kumar Verma", "Piyush Rai", "Vinay Namboodiri" ], "title": "Leveraging filter correlations for deep model compression", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Jure Sokolic", "Raja Giryes", "Guillermo Sapiro", "Miguel RD Rodrigues" ], "title": "Lessons from the rademacher complexity for deep learning", "venue": null, "year": 2016 
}, { "authors": [ "Eduardo D Sontag" ], "title": "VC dimension of neural networks", "venue": "NATO ASI Series F Computer and Systems Sciences, pp", "year": 1998 }, { "authors": [ "Yehui Tang", "Yunhe Wang", "Yixing Xu", "Boxin Shi", "Chao Xu", "Chunjing Xu", "Chang Xu" ], "title": "Beyond dropout: Feature map distortion to regularize deep neural networks", "venue": "In Association for the Advancement of Artificial Intelligence,", "year": 2020 }, { "authors": [ "Haotian Wang", "Wenjing Yang", "Zhenyu Zhao", "Tingjin Luo", "Ji Wang", "Yuhua Tang" ], "title": "Rademacher dropout: An adaptive dropout for deep neural network via optimizing generalization gap", "venue": null, "year": 2019 }, { "authors": [ "Bo Xie", "Yingyu Liang", "Le Song" ], "title": "Diverse neural network learns true target functions", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Pengtao Xie", "Yuntian Deng", "Eric Xing" ], "title": "Diversifying restricted boltzmann machine for document modeling", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Pengtao Xie", "Yuntian Deng", "Eric Xing" ], "title": "On the generalization error bounds of neural networks under diversity-inducing mutual angular regularization", "venue": "arXiv preprint arXiv:1511.07110,", "year": 2015 }, { "authors": [ "Pengtao Xie", "Jun Zhu", "Eric Xing" ], "title": "Diversity-promoting bayesian learning of latent variable models", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Pengtao Xie", "Aarti Singh", "Eric P Xing" ], "title": "Uncorrelation and evenness: a new diversity-promoting regularizer", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yang Yu", "Yu-Feng Li", "Zhi-Hua Zhou" ], "title": "Diversity regularized machine", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2011 }, { "authors": [ "Ke Zhai", "Huan Wang" ], "title": "Adaptive dropout with rademacher complexity regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a), and anomaly detection (Golan & El-Yaniv, 2018). Formally, the output of a neural network consisting of P layers can be defined as follows:\nf(x;W) = φP (W P (φP−1(· · ·φ2(W 2φ1(W 1x)))), (1) where φi(.) is the element-wise activation function, e.g., ReLU and Sigmoid, of the ith layer and W = {W 1, . . . ,W P } are the corresponding weights of the network. The parameters of f(x;W) are optimized by minimizing the empirical loss:\nL̂(f) = 1\nN N∑ i=1 l ( f(xi;W), yi ) , (2)\nwhere l(·) is the loss function, and {xi, yi}Ni=1 are the training samples and their associated groundtruth labels. The loss is minimized using the gradient decent-based optimization coupled with backpropagation.\nHowever, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples (Goodfellow et al., 2016). While research on Double descent (Belkin et al., 2019; Advani et al., 2020; Nakkiran et al., 2020) shows that over-parameterization does not necessarily lead to overfitting, avoiding overfitting has been extensively studied (Neyshabur et al., 2018; Nagarajan & Kolter,\n2019; Poggio et al., 2017) and various approaches and strategies have been proposed, such as data augmentation (Goodfellow et al., 2016), regularization (Kukačka et al., 2017; Bietti et al., 2019; Arora et al., 2019), and dropout (Hinton et al., 2012b; Wang et al., 2019; Lee et al., 2019; Li et al., 2016), to close the gap between the empirical loss and the expected loss.\nDiversity of learners is widely known to be important in ensemble learning (Li et al., 2012; Yu et al., 2011) and, particularly in deep learning context, diversity of information extracted by the network neurons has been recognized as a viable way to improve generalization (Xie et al., 2017a; 2015b). In most cases, these efforts have focused on making the set of weights more diverse (Yang et al.; Malkin & Bilmes, 2009). However, diversity of the activation has not received much attention.\nInspired by the motivation of dropout to co-adapt neuron activation, Cogswell et al. (2016) proposed to regularize the activations of the network. An additional loss using cross-covariance of hidden activations was proposed, which encourages the neurons to learn diverse or non-redundant representations. The proposed approach, known as Decov, has empirically been proven to alleviate overfitting and to improve the generalization ability of neural network, yet a theoretical analysis to prove this has so far been lacking.\nIn this work, we propose a novel approach to encourage activation diversity within the same layer. We propose complementing ’between-layer’ feedback with additional ’within-layer’ feedback to penalize similarities between neurons on the same layer. Thus, we encourage each neuron to learn a distinctive representation and to enrich the data representation learned within each layer. Moreover, inspired by Xie et al. 
(2015b), we provide a theoretical analysis showing that the within-layer activation diversity boosts the generalization performance of neural networks and reduces overfitting.\nOur contributions in this paper are as follows:\n• Methodologically, we propose a new approach to encourage the ’diversification’ of the layer-wise feature maps’ outputs in neural networks. The proposed approach has three variants based on how the global diversity is defined. The main intuition is that by promoting the within-layer activation diversity, neurons within the same layer learn distinct patterns and, thus, increase the overall capacity of the model.\n• Theoretically, we analyse the effect the within-layer activation diversity on the generalization error bound of neural network. The analysis is presented in Section 3. As shown in Theorems 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12, we express the upper-bound of the estimation error as a function of the diversity factor. Thus, we provide theoretical evidence that the within-layer activation diversity can help reduce the generalization error.\n• Empirically, we show that the within-layer activation diversity boosts the performance of neural networks. Experimental results show that the proposed approach outperforms the competing methods." }, { "heading": "2 WITHIN-LAYER ACTIVATION DIVERSITY", "text": "We propose a diversification strategy, where we encourage neurons within a layer to activate in a mutually different manner, i.e., to capture different patterns. To this end, we propose an additional within-layer loss which penalizes the neurons that activate similarly. The loss function L̂(f) defined in equation 2 is augmented as follows:\nL̂aug(f) = L̂(f) + λ P∑ i=1 J i, (3)\nwhere J i expresses the overall pair-wise similarity of the neurons within the ith layer and λ is the penalty coefficient for the diversity loss. As in (Cogswell et al., 2016), our proposed diversity loss can be applied to a single layer or multiple layers in a network. For simplicity, let us focus on a single layer.\nLet φin(xj) and φ i m(xj) be the outputs of the n th and mth neurons in the ith layer for the same input sample xj . The similarity snm between the the nth and mth neurons can be obtained as the average similarity measure of their outputs for N input samples. We use the radial basis function to\nexpress the similarity:\nsnm = 1\nN N∑ j=1 exp ( − γ||φin(xj)− φim(xj)||2 ) , (4)\nwhere γ is a hyper-parameter. The similarity snm can be computed over the whole dataset or batchwise. Intuitively, if two neurons n andm have similar outputs for many samples, their corresponding similarity snm will be high. Otherwise, their similarity smn is small and they are considered “diverse”. Based on these pair-wise similarities, we propose three variants for the global diversity loss J i of the ith layer:\n• Direct: J i = ∑ n 6=m snm. In this variant, we model the global layer similarity directly\nas the sum of the pairwise similarities between the neurons. By minimizing their sum, we encourage the neurons to learn different representations.\n• Det: J i = −det(S), where S is a similarity matrix defined as Snm = snm. This variant is inspired by the Determinantal Point Process (DPP) (Kulesza & Taskar, 2010; 2012), as the determinant of S measures the global diversity of the set. Geometrically, det(S) is the volume of the parallelepiped formed by vectors in the feature space associated with s. Vectors that result in a larger volume are considered to be more “diverse”. 
Thus, maximizing det(·) (minimizing −det(·)) encourages the diversity of the learned features.\n• Logdet: J i = −logdet(S)1. This variant has the same motivation as the second one. We use logdet instead of det as logdet is a convex function over the positive definite matrix space.\nIt should be noted here that the first proposed variant, i.e., direct, similar to Decov (Cogswell et al., 2016), captures only the pairwise diversity between components and is unable to capture the higherorder “diversity”, whereas the other two variants consider the global similarity and are able to measure diversity in a more global manner.\nOur newly proposed loss function defined in equation 3 has two terms. The first term is the classic loss function. It computes the loss with respect to the ground-truth. In the back-propagation, this feedback is back-propagated from the last layer to the first layer of the network. Thus, it can be considered as a between-layer feedback, whereas the second term is computed within a layer. From equation 3, we can see that our proposed approach can be interpreted as a regularization scheme. However, regularization in deep learning is usually applied directly on the parameters, i.e., weights (Goodfellow et al., 2016; Kukačka et al., 2017), while in our approach, similar to (Cogswell et al., 2016), an additional term is defined over the output maps of the layers. For a layer with C neurons and a batch size of N , the additional computational cost is O(C2(N + 1)) for direct variant and O(C3 + C2N)) for both the determinant and log of the determinant variants." }, { "heading": "3 GENERALIZATION ERROR ANALYSIS", "text": "In this section, we analyze how the proposed within-layer diversity regularizer affects the generalization error of a neural network. Generalization theory (Zhang et al., 2017; Kawaguchi et al., 2017) focuses on the relation between the empirical loss, as defined in equation 2, and the expected risk defined as follows:\nL(f) = E(x,y)∼Q[l(f(x), y)], (5)\nwhere Q is the underlying distribution of the dataset. Let f∗ = argminf L(f) be the expected risk minimizer and f̂ = argminf L̂(f) be the empirical risk minimizer. We are interested in the estimation error, i.e., L(f∗)−L(f̂), defined as the gap in the loss between both minimizers (Barron, 1994). The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and the number of training samples (Barron, 1993; Zhai & Wang, 2018).\n1This is defined only if S is positive definite. It can be shown that in our case S is positive semi-definite. Thus, in practice we use a regularized version (S + I) to ensure the positive definiteness.\nSeveral techniques have been used to quantify the estimation error, such as PAC learning (Hanneke, 2016; Arora et al., 2018), VC dimension (Sontag, 1998; Harvey et al., 2017; Bartlett et al., 2019), and the Rademacher complexity (Xie et al., 2015b; Zhai & Wang, 2018; Tang et al., 2020). The Rademacher complexity has been widely used as it usually leads to a tighter generalization error bound (Sokolic et al., 2016; Neyshabur et al., 2018; Golowich et al., 2018). The formal definition of the empirical Rademacher complexity is given as follows: Definition 3.1. 
(Bartlett & Mendelson, 2002) For a given dataset with N samples D = {xi, yi}Ni=1 generated by a distribution Q and for a model space F : X → R with a single dimensional output, the empirical Rademacher complexityRN (F) of the set F is defined as follows:\nRN (F) = Eσ [ sup f∈F 1 N N∑ i=1 σif(xi) ] , (6)\nwhere the Rademacher variables σ = {σ1, · · · , σN} are independent uniform random variables in {−1, 1}.\nIn this work, we analyse the estimation error bound of a neural network using the Rademacher complexity and we are interested in the effect of the within-layer diversity on the estimation error. In order to study this effect, inspired by (Xie et al., 2015b), we assume that with a high probability τ, the distance between the output of each pair of neurons, (φn(x)−φm(x))2, is lower bounded by dmin for any input x. Note that this condition can be expressed in terms of the similarity s defined in equation 4: snm ≤ e(−γdmin) = smin for any two distinct neurons with the probability τ . Our analysis starts with the following lemma: Lemma 3.2. (Xie et al., 2015b; Bartlett & Mendelson, 2002) With a probability of at least 1− δ\nL(f̂)− L(f∗) ≤ 4RN (A) +B √ 2 log(2/δ)\nN (7)\nfor B ≥ supx,y,f |l(f(x), y)|, whereRN (A) is the Rademacher complexity of the loss set A.\nIt upper-bounds the estimation error using the Rademacher complexity defined over the loss set and supx,y,f |l(f(x), y)|. Our analysis continues by seeking a tighter upper bound of this error and showing how the within-layer diversity, expressed with dmin, affects this upper bound. We start by deriving such an upper-bound for a simple network with one hidden layer trained for a regression task and then we extend it for a general multi-layer network and for different losses." }, { "heading": "3.1 SINGLE HIDDEN-LAYER NETWORK", "text": "Here, we consider a simple neural network with one hidden-layer with M neurons and onedimensional output trained for a regression task. The full characterization of the setup can be summarized in the following assumptions: Assumptions 1.\n• The activation function of the hidden layer, φ(t), is a Lφ-Lipschitz continuous function.\n• The input vector x ∈ RD satisfies ||x||2 ≤ C1.\n• The output scalar y ∈ R satisfies |y| ≤ C2.\n• The weight matrix W = [w1,w2, · · · ,wM ] ∈ RD×M connecting the input to the hidden layer satisfies ||wm||2 ≤ C3.\n• The weight vector v ∈ RM connecting the hidden-layer to the output neuron satisfies ||v||2 ≤ C4.\n• The hypothesis class is F = {f |f(x) = ∑M m=1 vmφm(x) = ∑M m=1 vmφ(w T mx)}.\n• Loss function set is A = {l|l(f(x), y) = 12 |f(x)− y| 2}.\n• With a probability τ , for n 6= m, ||φn(x)− φm(x)||22 = ||φ(wTnx)− φ(wTmx)||22 ≥ dmin.\nWe recall the following two lemmas related to the estimation error and the Rademacher complexity: Lemma 3.3. (Bartlett & Mendelson, 2002) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. Then we have\nRN (A) ≤ LgRN (F). (8) Lemma 3.4. (Xie et al., 2015b) Under Assumptions 1, the Rademacher complexity RN (F) of the hypothesis class F = {f |f(x) = ∑M m=1 vmφm(x) = ∑M m=1 vmφ(w T mx)} can be upper-bounded as follows:\nRN (F) ≤ 2LφC134 √ M√\nN + C4|φ(0)|\n√ M√\nN , (9)\nwhere C134 = C1C3C4 and φ(0) is the output of the activation function at the origin.\nLemma 3.4 provides an upper-bound of the Rademacher complexity for the hypothesis class. In order to find an upper-bound for our estimation error, we start by deriving an upper bound for supx,f |f(x)|: Lemma 3.5. 
Under Assumptions 1, with a probability at least τQ, we have\nsup x,f |f(x)| ≤\n√ J , (10)\nwhere Q is equal to the number of neuron pairs defined by M neurons, i.e., Q = M(M−1)2 , and J = C24 ( MC25 +M(M − 1)(C25 − d2min/2) ) and C5 = LφC1C3 + φ(0),\nThe proof can be found in Appendix 7.1. Note that in Lemma 3.5, we have expressed the upperbound of supx,f |f(x)| in terms of dmin. Using this bound, we can now find an upper-bound for supx,f,y |l(f(x), y)| in the following lemma: Lemma 3.6. Under Assumptions 1, with a probability at least τQ, we have\nsup x,y,f\n|l(f(x), y)| ≤ ( √ J + C2)2. (11)\nThe proof can be found in Appendix 7.2. The main goal is to analyze the estimation error bound of the neural network and to see how its upper-bound is linked to the diversity, expressed by dmin, of the different neurons. The main result is presented in Theorem 3.7. Theorem 3.7. Under Assumptions 1, with probability at least τQ(1− δ), we have\nL(f̂)−L(f∗) ≤ 8 (√ J +C2 )( 2LφC134+C4|φ(0)| )√M√ N +( √ J +C2)2 √ 2 log(2/δ) N (12)\nwhere C134 = C1C3C4, J = C24 ( MC25 +M(M −1)(C25 −d2min/2) ) , and C5 = LφC1C3+φ(0).\nThe proof can be found in Appendix 7.3. Theorem 3.7 provides an upper-bound for the estimation error. We note that it is a decreasing function of dmin. Thus, we say that a higher dmin, i.e., more diverse activations, yields a lower estimation error bound. In other words, by promoting the withinlayer diversity, we can reduce the generalization error of neural networks. It should be also noted that our Theorem 3.7 has a similar form to Theorem 1 in (Xie et al., 2015b). However, the main difference is that Xie et al. analyse the estimation error with respect to the diversity of the weight vectors. Here, we consider the diversity between the outputs of the activations of the hidden neurons." }, { "heading": "3.2 BINARY CLASSIFICATION", "text": "We now extend our analysis of the effect of the within-layer diversity on the generalization error in the case of a binary classification task, i.e., y ∈ {−1, 1}. The extensions of Theorem 3.7 in the case of a hinge loss and a logistic loss are presented in Theorems 3.8 and 3.9, respectively. Theorem 3.8. Using the hinge loss, we have with probability at least τQ(1− δ)\nL(f̂)− L(f∗) ≤ 4 ( 2LφC134 + C4|φ(0)| )√M√ N + (1 + √ J ) √ 2 log(2/δ) N (13)\nwhere C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) ) , and C5 = LφC1C3+φ(0).\nTheorem 3.9. Using the logistic loss l(f(x), y) = log(1 + e−yf(x)), we have with probability at least τQ(1− δ)\nL(f̂)− L(f∗) ≤ 4 1 + e √ −J\n( 2LφC134 + C4|φ(0)| )√M√ N + log(1 + e √ J ) √ 2 log(2/δ) N (14)\nwhere C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) ) , and C5 = LφC1C3+φ(0).\nThe proofs are similar to Lemmas 7 and 8 in (Xie et al., 2015b). As we can see, for the classification task, the error bounds of the estimation error for the hinge and logistic losses are decreasing with respect to dmin. Thus, employing a diversity strategy can improve the generalization also for the binary classification task." }, { "heading": "3.3 MULTI-LAYER NETWORKS", "text": "Here, we extend our result for networks with P (> 1) hidden layers. We assume that the pair-wise distances between the activations within layer p are lower-bounded by dpmin with a probability τ\np. In this case, the hypothesis class can be defined recursively. In addition, we replace the fourth assumption in Assumptions 1 with: ||W p||∞ ≤ Cp3 for every W p, i.e., the weight matrix of the p-th layer. 
In this case, the main theorem is extended as follows:\nTheorem 3.10. With probability of at least ∏P−1 p=0 (τ p)Q p (1− δ), we have\nL(f̂)− L(f∗) ≤ 8( √ J + C2)\n( (2Lφ)\nPC1C 0 3√\nN\nP−1∏ p=0 √ MpCp3 + |φ(0)|√ N P−1∑ p=0 (2Lφ) P−1−p P−1∏ j=p √ M jCj3\n)\n+ (√ J + C2 )2√2 log(2/δ) N\n(15)\nwhere Qp is the number of neuron pairs in the pth layer, defined as Qp = M p(Mp−1)\n2 , and J P is defined recursively using the following identities: J 0 = C03C1 and J p = MpCp2 ( Mp2(LφC p−1 3 J p−1 + φ(0))2 −M(M − 1) dpmin 2 2 ) ) , for p = 1, . . . , P .\nThe proof can be found in Appendix 7.4. In Theorem 3.10, we see thatJ P is decreasing with respect to dpmin. Thus, we see that maximizing the within-layer diversity, we can reduce the estimation error of a multi-layer neural network." }, { "heading": "3.4 MULTIPLE OUTPUTS", "text": "Finally, we consider the case of a neural network with a multi-dimensional output, i.e., y ∈ RD. In this case, we can extend Theorem 3.7 by decomposing the problem into D smaller problems and deriving the global error bound as the sum of the small D bounds. This yields the following two theorems: Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least τQ(1− δ),\nL(f̂)−L(f∗) ≤ 8D( √ J +C2) ( 2LφC134+C4|φ(0)| )√M√ N +D( √ J +C2)2 √ 2 log(2/δ) N (16)\nwhere C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) )\nand C5 = LφC1C3 + φ(0). Theorem 3.12. For a multi-class classification task using the cross-entropy loss, we have with probability at least τQ(1− δ),\nL(f̂)− L(f∗) ≤ D(D − 1) D − 1 + e−2 √ J\n( 2LφC134 + C4|φ(0)| )√M√ N + log ( 1 + (D − 1)e2 √ J )√2 log(2/δ)\nN (17) where C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) ) and C5 = LφC1C3+φ(0).\nThe proofs can be found in Appendix 7.5. Theorems 3.11 and 3.12 extend our result for the multidimensional regression and classification tasks, respectively. Both bounds are inversely proportional to the diversity factor dmin. We note that for the classification task, the upper-bound is exponentially decreasing with respect to dmin." }, { "heading": "4 RELATED WORK", "text": "Diversity promoting strategies have been widely used in ensemble learning (Li et al., 2012; Yu et al., 2011), sampling (Derezinski et al., 2019; Bıyık et al., 2019; Gartrell et al., 2019), ranking (Yang et al.; Gan et al., 2020), and pruning by reducing redundancy (Kondo & Yamauchi, 2014; He et al., 2019; Singh et al., 2020; Lee et al., 2020). In the deep learning context, various approaches have used diversity as a direct regularizer on top of the weight parameters. Here, we present a brief overview of these regularizers. Based on the way diversity is defined, we can group these approaches into two categories. The first group considers the regularizers that are based on the pairwise dissimilarity of components, i.e., the overall set of weights are diverse if every pair of weights are dissimilar. Given the weight vectors {wm}Mm=1, Yu et al. (2011) define the regularizer as∑ mn(1− θmn), where θmn represents the cosine similarity betweenwm andwn. Bao et al. (2013)\nproposed an incoherence score defined as − log (\n1 M(M−1) ∑ mn β|θmn| 1 β ) , where β is a positive\nhyperparameter. Xie et al. (2015a; 2016) used mean(θmn) − var(θmn) to regularize Boltzmann machines. They theoretically analyzed its effect on the generalization error bounds in (Xie et al., 2015b) and extended it to kernel space in (Xie et al., 2017a). The second group of regularizers considers a more globalist view of diversity. 
For example, in (Malkin & Bilmes, 2009; 2008; Xie et al., 2017b), a weight regularization based on the determinant of the weights covariance is proposed and based on determinantal point process in (Kulesza & Taskar, 2012; Kwok & Adams, 2012).\nUnlike the aforementioned methods which promote diversity on the weight level and similar to our method, Cogswell et al. (2016) proposed to enforce dissimilarity on the feature map outputs, i.e., on the activations. To this end, they proposed an additional loss based on the pairwise covariance of the activation outputs. Their additional loss, LDecov is defined as the squared sum of the non-diagonal elements of the global covariance matrix C:\nLDecov = 1\n2 (||C||2F − ||diag(C)||22), (18)\nwhere ||.||F is the Frobenius norm. Their approach, Decov, yielded superior empirical performance; however, it lacks theoretical proof. In this paper, we closed this gap and we showed theoretically how employing a diversity strategy on the network activations can indeed decrease the estimation error bound and improve the generalization of the model. Besides, we proposed variants of our approach which consider a global view of diversity." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In this section, we present an empirical study of our approach in a regression context using Boston Housing price dataset (Dua & Graff, 2017) and in a classification context using CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). We denote as Vanilla the model trained with no diversity protocol and as Decov the approach proposed in (Cogswell et al., 2016)." }, { "heading": "5.1 REGRESSION", "text": "For regression, we use the Boston Housing price dataset (Dua & Graff, 2017). It has 404 training samples and 102 test samples with 13 attributes each. We hold the last 100 sample of training as a validation set for the hyper-parameter tuning. The loss weight, is chosen from {0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005} for both our approach and Decov (Cogswell et al., 2016). Parameter γ in the radial basis function is chosen from {0.00001, 0.0001, 0.01, 0.1.1, 10, 100}. As a base model, we use a neural network composed of two fully connected hidden layers, each with 64 neurons. The additional loss is applied on top of both hidden layers.\nWe train for 80 epochs using stochastic gradient descent with a learning rate of 0.01 and mean square error loss. For hyperparamrter tuning, we keep the model that perform best on the validation and use it in the test phase. We experiment with three different activation functions for the hidden layers: Sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013).\nTable 1 reports the results in terms of the mean average error for the different approaches over the Boston Housing price dataset. First, we note that employing a diversification strategy (ours and Decov) boosts the results compared to the Vanilla approach for all types of activations. The three variants of our approach, i.e., the within-layer approach, consistently outperform the Decov loss except for the LeakyReLU where the latter outperforms our direct variant. Table 1 shows that the logdet variant of our approach yields the best performance for all three activation types." }, { "heading": "5.2 CLASSIFICATION", "text": "For classification, we evaluate the performance of our approach on CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). They contain 60,000 32 × 32 images grouped into 10 and 100 distinct categories, respectively. 
We train on the 50,000 given training examples and test on the 10,000 specified test samples. We hold the last 10000 of the training set for validation. For the neural network model, we use an architecture composed of 3 convolutional layers. Each convolution layer is composed of 32 3 × 3 filters followed by 2 × 2 max pooling. The flattened output of the convolutional layers is connected to a fully connected layer with 128 neurons and a softmax layer. The different additional losses, i.e., ours and Decov, are added only on top of the fully connected layer. The models are trained for 150 epochs using stochastic gradient decent with a learning rate of 0.01 and categorical cross entropy loss. For hyper-paramters tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013). All reported results are average performance over 4 trials with the standard deviation indicated alongside.\nTables 2 and 3 report the test error rates of the different approaches for both datasets. Compared to the Vanilla network, our within-layer diversity strategies consistently improve the performance of the model. For the CIFAR10, the direct variant yields more than 0.72% improvement for the ReLU and 2% improvement for the sigmoid activation. For the LeakyReLU case, the determinant variant achieves the lowest error rate. This is in accordance with the results on CIFAR100. Here, we note that our proposed approach outperforms both the Vanilla and the Decov models, especially in the sigmoid case. Compared to the Vanilla approach, we note that the model training time cost on CIFAR100 increases by 9% for the direct approach, by 36.1% for the determinant variant, and by 36.2%for the log of determinant variant." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we proposed a new approach to encourage ‘diversification’ of the layer-wise feature map outputs in neural networks. The main motivation is that by promoting within-layer activation diversity, neurons within the same layer learn to capture mutually distinct patterns. We proposed an additional loss term that can be added on top of any fully-connected layer. This term complements\nthe traditional ‘between-layer’ feedback with an additional ‘within-layer’ feedback encouraging diversity of the activations. We theoretically proved that the proposed approach decreases the estimation error bound, and thus improves the generalization ability of neural networks. This analysis was further supported by experimental results showing that such a strategy can indeed improve the performance of neural networks in regression and classification tasks. Our future work includes extensive experimental analysis on the relationship between the distribution of the neurons output and generalization." }, { "heading": "7 APPENDIX", "text": "In the following proofs, we use Lipschitz analysis. In particular, a function f : A → R, A ⊂ Rn, is said to be L-Lipschitz, if there exist a constant L ≥ 0, such that |f(a) − f(b)| ≤ L||a − b|| for every pair of points a, b ∈ A. Moreover:\n• supx∈A f ≤ sup(L||x||+ f(0)). • if f is continuous and differentiable, L = sup |f ′(x)|." }, { "heading": "7.1 PROOF OF LEMMA 3.5", "text": "Lemma 3.5. 
Under Assumptions 1, with a probability at least τQ, we have sup x,f |f(x)| ≤ √ J , (19)\nwhere Q is equal to the number of neuron pairs defined by M neurons, i.e. Q = M(M−1)2 , and J = C24 ( MC25 +M(M − 1)(C25 − d2min/2) ) and C5 = LφC1C3 + φ(0).\nProof.\nf2(x) = ( M∑ m=1 vmφm(x) )2 ≤ ( M∑ m=1 ||v||∞φm(x) )2 ≤ ||v||2∞ ( M∑ m=1 φm(x) )2 ≤ C24 ( M∑ m=1 φm(x) )2\n= C24 (∑ m,n φm(x)φn(x) ) = C24 ∑ m φm(x) 2 + ∑ m 6=n φn(x)φm(x) (20) We have supw,x φ(x) < sup(Lφ|wTx| + φ(0)) because φ is Lφ-Lipschitz. Thus, ||φ||∞ < LφC1C3 + φ(0) = C5. For the first term in equation 20, we have ∑ m φm(x)\n2 < M(LφC1C3 + φ(0))\n2 = MC25 . The second term, using the identity φm(x)φn(x) = 1 2 ( φm(x) 2 + φn(x) 2 − (φm(x)− φn(x))2 ) , can be rewritten as∑\nm 6=n\nφm(x)φn(x) = 1\n2 ∑ m 6=n φm(x) 2 + φn(x) 2 − ( φm(x)− φn(x) )2 . (21)\nIn addition, we have with a probability τ , ||φm(x) − φn(x)||2 ≥ dmin for m 6= n. Thus, we have with a probability at least τQ:∑\nm 6=n\nφm(x)φn(x) ≤ 1\n2 ∑ m6=n (2C25 − d2min) =M(M − 1)(C25 − d2min/2). (22)\nHere Q is equal to the number of neuron pairs defined by M neurons, i.e, Q = M(M−1)2 . By putting everything back to equation 20, we have with a probability τQ,\nf2(x) ≤ C24 ( MC25 +M(M − 1)(C25 − d2min/2) ) = J . (23)\nThus, with a probability τQ,\nsup x,f |f(x)| ≤ √ sup x,f f(x)2 ≤ √ J . (24)" }, { "heading": "7.2 PROOF OF LEMMA 3.6", "text": "Lemma 3.6. Under Assumptions 1, with a probability at least τQ, we have sup x,y,f |l(f(x), y)| ≤ ( √ J + C2)2 (25)\nProof. We have supx,y,f |f(x) − y| ≤ 2 supx,y,f (|f(x)| + |y|) = 2( √ J + C2). Thus supx,y,f |l(f(x), y)| ≤ ( √ J + C2)2." }, { "heading": "7.3 PROOF OF THEOREM 3.7", "text": "Theorem 3.7. Under Assumptions 1, with probability at least τQ(1− δ), we have\nL(f̂)−L(f∗) ≤ 8 (√ J +C2 )( 2LφC134+C4|φ(0)| )√M√ N +( √ J +C2)2 √ 2 log(2/δ) N (26)\nwhere C134 = C1C3C4, J = C24 ( MC25 +M(M −1)(C25 −d2min/2) ) , and C5 = LφC1C3+φ(0).\nProof. Given that l(·) is K-Lipschitz with a constant K = supx,y,f |f(x)− y| ≤ 2( √ J +C2), and using Lemma 3.3, we can show that RN (A) ≤ KRN (F) ≤ 2( √ J + C2)RN (F). For RN (F), we use the bound found in Lemma 3.4. Using Lemmas 3.2 and 3.6 completes the proof." }, { "heading": "7.4 PROOF OF THEOREM 3.10", "text": "Theorem 3.10. Under Assumptions 1, with probability of at least ∏P−1 p=0 (τ p)Q p (1− δ), we have\nL(f̂)− L(f∗) ≤ 8( √ J + C2)\n( (2Lφ)\nPC1C 0 3√\nN\nP−1∏ p=0 √ MpCp3 + |φ(0)|√ N P−1∑ p=0 (2Lφ) P−1−p P−1∏ j=p √ M jCj3\n)\n+ (√ J + C2 )2√2 log(2/δ) N\n(27)\nwhere Qp is the number of neuron pairs in the pth layer, defined as Qp = M p(Mp−1)\n2 , and J P is defined recursively using the following identities: J 0 = C03C1 and J p = MpCp2 ( Mp2(LφC p−1 3 J p−1 + φ(0))2 −M(M − 1)d2min/2) ) , for p = 1, . . . , P .\nProof. Lemma 5 in (Xie et al., 2015b) provides an upper-bound for the hypothesis class. We denote by vp denote the outputs of the pth hidden layer before applying the activation function:\nv0 = [w0 T 1 x, ....,w 0T M0x] (28)\nvp = [ Mp−1∑ j=1 wpj,1φ(v p−1 j ), ...., Mp−1∑ j=1 wpj,Mpφ(v p−1 j )] (29)\nvp = [wp1 T φp, ...,wpMp T φp], (30)\nwhere φp = [φ(vp−11 ), · · · , φ(v p−1 Mp−1)]. We have\n||vp||22 = Mp∑ m=1 (wpm Tφp)2 (31)\nand wpm Tφp ≤ Cp3 ∑ n φ p n. Thus,\n||vp||22 ≤ Mp∑ m=1 (Cp3 ∑ n φpn) 2 =MpCp3 2 ( ∑ n φpn) 2 =MpCp3 2 ∑ mn φpmφ p n. (32)\nWe use the same decomposition trick of φpmφ p n as in the proof of Lemma 3.5. We need to bound supx φ p:\nsup x φp < sup(Lφ|wp−1j T vp−1|+ φ(0)) < Lφ||W p−1||∞||vp−1||22 + φ(0). 
(33)\nThus, we have ||vp||22 ≤MpCp 2(M2(LφCp−13 ||vp−1||22 + φ(0))2 −M(M − 1)d2min/2)) = J P . (34) We found a recursive bound for ||vp||22, we note that for p = 0, we have ||v0||22 ≤ ||W 0||∞C1 ≤ C03C1 = J 0. Thus,\nsup x,fP∈FP |f(x)| = sup x,fP∈FP\n|vP | ≤ √ J P . (35)" }, { "heading": "7.5 PROOFS OF THEOREMS 3.11 AND 3.12", "text": "Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least τQ(1− δ),\nL(f̂)−L(f∗) ≤ 8D( √ J +C2) ( 2LφC134+C4|φ(0)| )√M√ N +D( √ J +C2)2 √ 2 log(2/δ) N (36)\nwhere C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) ) , and C5 = LφC1C3+φ(0).\nProof. The squared loss ||f(x) − y||2 can be decomposed into D terms (f(x)k − yk)2. Using Theorem 3.7, we can derive the bound for each term.\nTheorem 3.12. For a multiclass classification task using the cross-entropy loss, we have with probability at least τQ(1− δ),\nL(f̂)− L(f∗) ≤ D(D − 1) D − 1 + e−2 √ J\n( 2LφC134 + C4|φ(0)| )√M√ N + log ( 1 + (D − 1)e2 √ J )√2 log(2/δ)\nN (37) where C134 = C1C3C4, J = C24 (MC25 +M(M −1)(C25 −d2min/2) ) , and C5 = LφC1C3+φ(0).\nProof. Using Lemma 9 in (Xie et al., 2015b), we have supf,x,y l = log ( 1 + (D − 1)e2 √ J ) and l is D−1 D−1+e−2 √ J -Lipschitz. Thus, using the decomposition property of the Rademacher complexity, we have\nRn(A) ≤ D(D − 1)\nD − 1 + e−2 √ J\n( 2LφC134\n√ M√\nN + C4|φ(0)|\n√ M√\nN\n) . (38)" } ]
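To make the within-layer penalty of equations 3–4 concrete, below is a minimal PyTorch sketch of the three proposed variants (direct, det, logdet). The function names and the `gamma`/`lam` hyper-parameters follow the paper's notation, but the code is an illustrative assumption rather than the authors' released implementation.

```python
import torch


def within_layer_diversity(acts, gamma=1.0, variant="direct"):
    """Within-layer diversity penalty J for one layer (equations 3-4).

    acts: tensor of shape (N, C) -- a batch of N activations for C neurons.
    s_nm = (1/N) * sum_j exp(-gamma * (phi_n(x_j) - phi_m(x_j))^2).
    """
    diff = acts.unsqueeze(2) - acts.unsqueeze(1)          # (N, C, C) pairwise differences
    S = torch.exp(-gamma * diff.pow(2)).mean(dim=0)       # (C, C) similarity matrix
    if variant == "direct":                               # sum of off-diagonal similarities
        return S.sum() - S.diagonal().sum()
    if variant == "det":                                  # -det(S): DPP-style global diversity
        return -torch.det(S)
    if variant == "logdet":                               # -logdet(S + I), as in footnote 1
        eye = torch.eye(S.shape[0], device=S.device, dtype=S.dtype)
        return -torch.logdet(S + eye)
    raise ValueError(f"unknown variant: {variant}")


def augmented_loss(task_loss, layer_activations, lam=1e-4, **kwargs):
    """Equation 3: task loss plus lambda times the summed per-layer penalties."""
    return task_loss + lam * sum(within_layer_diversity(a, **kwargs) for a in layer_activations)
```

As in the experiments, the penalty would be computed batch-wise on the activations of one or more fully connected layers, with `lam` and `gamma` tuned on a validation set.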
2020
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
SP:4fd499ebe9fddb6a3f57663d76bb7bf3b5f29ef7
[ "The proposed NDP has two main advantages: 1- it has the capability to adapt the incoming data points in time-series (unlike NODE) without retraining, 2- it can provide a measure of uncertainty for the underlying dynamics of the time-series. NDP partitions the global latent context $z$ to a latent position $l$ and sub-context $z^\\prime$. Then it lets $l$ follow an ODE, called latent ODE. This part is actually the innovation of the paper where by defining a latent ODE, the authors take advantages of ODEs to find the underlying hidden dynamics of the time-series. This assumption helps find better dynamics when the generating processes of time-series meet some ODEs. Then the authors define a stochastic process very like the idea from Neural Processes (NP) paper, that is, by defining a latent context $z$ (which here is a concatenation of $l$ and sub-context $z^\\prime$) with a prior p(z) and integrating a Gaussian distribution of a function of $z$ (decoder $g(l,t,z^\\prime)$ which is a neural network) over $z$. " ]
Neural Ordinary Differential Equations (NODEs) use a neural network to model the instantaneous rate of change in the state of a system. However, despite their apparent suitability for dynamics-governed time-series, NODEs present a few disadvantages. First, they are unable to adapt to incoming data-points, a fundamental requirement for real-time applications imposed by the natural direction of time. Second, time-series are often composed of a sparse set of measurements that could be explained by many possible underlying dynamics. NODEs do not capture this uncertainty. In contrast, Neural Processes (NPs) are a new class of stochastic processes providing uncertainty estimation and fast data-adaptation, but lack an explicit treatment of the flow of time. To address these problems, we introduce Neural ODE Processes (NDPs), a new class of stochastic processes determined by a distribution over Neural ODEs. By maintaining an adaptive data-dependent distribution over the underlying ODE, we show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points. At the same time, we demonstrate that NDPs scale up to challenging high-dimensional time-series with unknown latent dynamics such as rotating MNIST digits.
[ { "affiliations": [], "name": "Alexander Norcliffe" }, { "affiliations": [], "name": "Cristian Bodnar" }, { "affiliations": [], "name": "Ben Day" }, { "affiliations": [], "name": "Jacob Moss" }, { "affiliations": [], "name": "Pietro Liò" } ]
[ { "authors": [ "Francesco Paolo Casale", "Adrian V Dalca", "Luca Saglietti", "Jennifer Listgarten", "Nicolo Fusi" ], "title": "Gaussian Process Prior Variational Autoencoders", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ruizhi Deng", "Bo Chang", "Marcus A. Brubaker", "Greg Mori", "Andreas Lehrmann" ], "title": "Modeling continuous stochastic processes with dynamic normalizing flows", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Marta Garnelo", "Dan Rosenbaum", "Chris J. Maddison", "Tiago Ramalho", "David Saxton", "Murray Shanahan", "Yee Whye Teh", "Danilo J. Rezende", "S.M. Ali Eslami" ], "title": "Conditional neural processes, 2018a", "venue": null, "year": 2018 }, { "authors": [ "Marta Garnelo", "Jonathan Schwarz", "Dan Rosenbaum", "Fabio Viola", "Danilo J. Rezende", "S.M. Ali Eslami", "Yee Whye Teh" ], "title": "Neural processes, 2018b", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Gordon", "Wessel P. Bruinsma", "Andrew Y.K. Foong", "James Requeima", "Yann Dubois", "Richard E. Turner" ], "title": "Convolutional conditional neural processes, 2019", "venue": null, "year": 2019 }, { "authors": [ "Junteng Jia", "Austin R. Benson" ], "title": "Neural jump stochastic differential equations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Patrick Kidger", "James Morrill", "James Foster", "Terry Lyons" ], "title": "Neural controlled differential equations for irregular time series", "venue": "arXiv preprint arXiv:2005.08926,", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tuan Anh Le", "Hyunjik Kim", "Marta Garnelo", "Dan Rosenbaum", "Jonathan Schwarz", "Yee Whye Teh" ], "title": "Empirical evaluation of neural process objectives", "venue": "In NeurIPS workshop on Bayesian Deep Learning,", "year": 2018 }, { "authors": [ "Xuechen Li", "Ting-Kam Leonard Wong", "Ricky T.Q. 
Chen", "David Duvenaud" ], "title": "Scalable gradients for stochastic differential equations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Xuanqing Liu", "Tesi Xiao", "Si Si", "Qin Cao", "Sanjiv Kumar", "Cho-Jui Hsieh" ], "title": "Neural sde: Stabilizing neural ode networks with stochastic noise, 2019", "venue": null, "year": 2019 }, { "authors": [ "Stefano Massaroli", "Michael Poli", "Jinkyoo Park", "Atsushi Yamashita", "Hajime Asama" ], "title": "Dissecting neural odes, 2020", "venue": null, "year": 2020 }, { "authors": [ "James Morrill", "Patrick Kidger", "Cristopher Salvi", "James Foster", "Terry Lyons" ], "title": "Neural cdes for long time series via the log-ode method, 2020", "venue": null, "year": 2020 }, { "authors": [ "Alexander Norcliffe", "Cristian Bodnar", "Ben Day", "Nikola Simidjievski", "Pietro Liò" ], "title": "On second order behaviour in augmented neural odes, 2020", "venue": null, "year": 2020 }, { "authors": [ "Bernt Øksendal" ], "title": "Stochastic differential equations", "venue": "In Stochastic differential equations,", "year": 2003 }, { "authors": [ "Yulia Rubanova", "Ricky T.Q. Chen", "David Duvenaud" ], "title": "Latent odes for irregularly-sampled time series, 2019", "venue": null, "year": 2019 }, { "authors": [ "Gautam Singh", "Jaesik Yoon", "Youngsung Son", "Sungjin Ahn" ], "title": "Sequential neural processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Belinda Tzen", "Maxim Raginsky" ], "title": "Neural stochastic differential equations: Deep latent gaussian models in the diffusion", "venue": null, "year": 2019 }, { "authors": [ "Ben H Williams", "Marc Toussaint", "Amos J Storkey" ], "title": "Extracting motion primitives from natural handwriting data", "venue": "In International Conference on Artificial Neural Networks,", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many time-series that arise in the natural world, such as the state of a harmonic oscillator, the populations in an ecological network or the spread of a disease, are the product of some underlying dynamics. Sometimes, as in the case of a video of a swinging pendulum, these dynamics are latent and do not manifest directly in the observation space. Neural Ordinary Differential Equations (NODEs) (Chen et al., 2018), which use a neural network to parametrise the derivative of an ODE, have become a natural choice for capturing the dynamics of such time-series (Çağatay Yıldız et al., 2019; Rubanova et al., 2019; Norcliffe et al., 2020; Kidger et al., 2020; Morrill et al., 2020).\nHowever, despite their fundamental connection to dynamics-governed time-series, NODEs present certain limitations that hinder their adoption in these settings. Firstly, NODEs cannot adjust predictions as more data is collected without retraining the model. This ability is particularly important for real-time applications, where it is desirable that models adapt to incoming data points as time passes and more data is collected. Secondly, without a larger number of regularly spaced measurements, there is usually a range of plausible underlying dynamics that can explain the data. However, NODEs do not capture this uncertainty in the dynamics. As many real-world time-series are comprised of sparse sets of measurements, often irregularly sampled, the model can fail to represent the diversity of suitable solutions. In contrast, the Neural Process (Garnelo et al., 2018a;b) family offers a class of (neural) stochastic processes designed for uncertainty estimation and fast adaptation to changes in the observed data. However, NPs modelling time-indexed random functions lack an explicit treatment of time. Designed for the general case of an arbitrary input domain, they treat time as an unordered set and do not explicitly consider the time-delay between different observations.\nTo address these limitations, we introduce Neural ODE Processes (NDPs), a new class of stochastic processes governed by stochastic data-adaptive dynamics. Our probabilistic Neural ODE formulation relies on and extends the framework provided by NPs, and runs parallel to other attempts to ∗Equal contribution. †Work done as an AI Resident at the University of Cambridge.\nincorporate application-specific inductive biases in this class of models such as Attentive NPs (Kim et al., 2019), ConvCNPs (Gordon et al., 2019), and MPNPs (Day et al., 2020). We demonstrate that NDPs can adaptively capture many potential dynamics of low-dimensional systems when faced with limited amounts of data. Additionally, we show that our approach scales to high-dimensional time series with latent dynamics such as rotating MNIST digits (Casale et al., 2018). Our code and datasets are available at https://github.com/crisbodnar/ndp." }, { "heading": "2 BACKGROUND AND FORMAL PROBLEM STATEMENT", "text": "Problem Statement We consider modelling random functions F : T → Y , where T = [t0,∞) represents time and Y ⊂ Rd is a compact subset of Rd. We assume F has a distribution D, induced by another distribution D′ over some underlying dynamics that govern the time-series. Given a specific instantation F of F , let C = {(tCi ,yCi )}i∈IC be a set of samples from F with some indexing set IC. We refer to C as the context points, as denoted by the superscript C. 
For a given context C, the task is to predict the values {yTj }j∈IT that F takes at a set of target times {tTj }j∈IT , where IT is another index set. We call T = {(tTj ,yTj )} the target set. Additionally let tC = {ti|i ∈ IC} and similarly define yC, tT and yT. Conventionally, as in Garnelo et al. (2018b), the target set forms a superset of the context set and we have C ⊆ T. Optionally, it might also be natural to consider that the initial time and observation (t0,y0) are always included in C. During training, we let the model learn from a dataset of (potentially irregular) time-series sampled from F . We are interested in learning the underlying distribution over the dynamics as well as the induced distribution over functions. We note that when the dynamics are not latent and manifest directly in the observation space Y , the distribution over ODE trajectories and the distribution over functions coincide.\nNeural ODEs NODEs are a class of models that parametrize the velocity ż of a state z with the help of a neural network ż = fθ(z, t). Given the initial time t0 and target time tTi , NODEs predict the corresponding state ŷTi by performing the following integration and decoding operations:\nz(t0) = h1(y0), z(t T i ) = z(t0) + ∫ tTi t0 fθ(z(t), t)dt, ŷ T i = h2(z(t T i )), (1)\nwhere h1 and h2 can be neural networks. When the dimensionality of z is greater than that of y and h1, h2 are linear, the resulting model is an Augmented Neural ODE (Dupont et al., 2019) with input layer augmentation (Massaroli et al., 2020). The extra dimensions offer the model additional flexibility as well as the ability to learn higher-order dynamics (Norcliffe et al., 2020).\nNeural Processes (NPs) NPs model a random function F : X → Y , where X ⊆ Rd1 and Y ⊆ Rd2 . The NP represents a given instantiation F of F through the global latent variable z,\nwhich parametrises the variation in F . Thus, we have F(xi) = g(xi, z). For a given context set C = {(xCi ,yCi )} and target set x1:n, y1:n, the generative process is given by:\np(y1:n, z|x1:n,C) = p(z|C) n∏ i=1 N (yi|g(xi, z), σ2), (2)\nwhere p(z) is chosen to be a multivariate standard normal distribution and y1:n is a shorthand for the sequence (y1, . . . ,yn). The model can be trained using an amortised variational inference procedure that naturally gives rise to a permutation-invariant encoder qθ(z|C), which stores the information about the context points. Conditioned on this information, the decoder g(x, z) can make predictions at any input location x. We note that while the domain X of the random function F is arbitrary, in this work we are interested only in stochastic functions with domain on the real line (time-series). Therefore, from here our notation will reflect that, using t as the input instead of x. The output y remains the same.\n3 NEURAL ODE PROCESSES\nModel Overview We introduce Neural ODE Processes (NDPs), a class of dynamics-based models that learn to approximate random functions defined over time. To that end, we consider an NP whose context is used to determine a distribution over ODEs. Concretely, the context infers a distribution over the initial position (and optionally – the initial velocity) and, at the same time, stochastically controls its derivative function. The positions given by the ODE trajectories at any time tTi are then decoded to give the predictions. In what follows, we offer a detailed description of each component of the model. A schematic of the model can be seen in Figure 1." 
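To complement the schematic in Figure 1 and the component descriptions in Section 3.1 below, the following is a compact sketch of a single NDP forward pass in PyTorch, using the `odeint` solver from torchdiffeq (the library acknowledged by the authors). The module layout, layer sizes, and the log-variance parametrisation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class NDPSketch(nn.Module):
    """Illustrative forward pass: context -> q(L(t0)), q(D) -> latent ODE -> decode."""

    def __init__(self, y_dim, r_dim=64, l_dim=16, d_dim=16, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(1 + y_dim, r_dim), nn.ReLU(),
                                 nn.Linear(r_dim, r_dim))            # f_e(t_i, y_i)
        self.to_d = nn.Linear(r_dim, 2 * d_dim)                      # mu_D, log sigma_D
        self.to_l0 = nn.Linear(y_dim, 2 * l_dim)                     # q(L(t0) | y_0)
        self.f = nn.Sequential(nn.Linear(l_dim + d_dim + 1, h_dim), nn.Tanh(),
                               nn.Linear(h_dim, l_dim))              # latent derivative f_theta
        self.dec = nn.Sequential(nn.Linear(l_dim + d_dim + 1, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, y_dim))            # decoder g(l(t), d, t)

    def forward(self, t_ctx, y_ctx, y0, t_target):
        # t_ctx: (n,), y_ctx: (n, y_dim), y0: (y_dim,),
        # t_target: sorted (T,) tensor whose first entry is t0, where L(t0) is defined.
        r = self.enc(torch.cat([t_ctx[:, None], y_ctx], -1)).mean(0)    # mean aggregation
        mu_d, log_sig_d = self.to_d(r).chunk(2, -1)
        d = mu_d + log_sig_d.exp() * torch.randn_like(mu_d)             # sample control D
        mu_l, log_sig_l = self.to_l0(y0).chunk(2, -1)
        l0 = mu_l + log_sig_l.exp() * torch.randn_like(mu_l)            # sample initial state

        def dldt(t, l):  # derivative of l, modulated by the global control d
            return self.f(torch.cat([l, d, t.reshape(1)], -1))

        l_t = odeint(dldt, l0, t_target)                                 # (T, l_dim)
        d_rep = d.expand(len(t_target), -1)
        return self.dec(torch.cat([l_t, d_rep, t_target[:, None]], -1))  # mean of p(y_t)
```

One draw of (L(t0), D) yields a single deterministic trajectory; drawing repeatedly produces the distribution over plausible functions that the model uses for uncertainty estimation.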
}, { "heading": "3.1 GENERATIVE PROCESS", "text": "We first describe the generative process behind NDPs. A graphical model perspective of this process is also included in Figure 2.\nEncoder and Aggregator Consider a given context set C = {(tCi ,yCi )}i∈IC of observed points. We encode this context into two latent variables L(t0) ∼ qL(l(t0)|C) and D ∼ qD(d|C), representing the initial state and the global control of an ODE, respectively. To parametrise the distribution of the latter variable, the NDP encoder produces a representation ri = fe((tCi ,y C i )) for each context pair (tCi ,y C i ). The function fe is as a neural network, fully connected or convolutional, depending on the nature of y. An aggregator combines all the representations ri to form a global representation, r, that parametrises the distribution of the global latent context, D ∼ qD(d|C) = N ( z|µD(r),diag(σD(r)) ) . As the aggregator must preserve order invariance, we choose to take the element-wise mean. The distribution of L0 might be parametrised identically as a function of the whole context by qL(d|C), and, in particular, if the initial observation y0 is always known, then qL(l(0)|C) = qL(l(0)|y0) = N ( l(0)|µL(y0),diag(σL(y0)) ) .\nLatent ODE To obtain a distribution over functions, we are interested in capturing the dynamics that govern the time-series and exploiting the temporal nature of the data. To that end, we allow the latent context to evolve according to a Neural ODE (Chen et al., 2018) with initial position L(0) and controlled by D. These two random variables factorise the uncertainty in the underlying dynamics into an uncertainty over the initial conditions (given by L(t0)) and an uncertainty over the ODE derivative, given by D.\nBy using the target times, tT1:n = (t T 1 , ..., t T N ), the latent state at a given time is found by evolving a Neural ODE:\nl(tTi ) = l(t0) + ∫ tTi t0 fθ(l(t),d, t)dt, (3)\nwhere fθ is a neural network that models the derivative of l. As explained above, we allow d to modulate the derivative of this ODE by acting as a global control signal. Ultimately, for fixed initial conditions, this results in an uncertainty over the ODE trajectories.\nDecoder To obtain a prediction at a time tTi , we decode the random state of the ODE at time tTi , given by L(tTi ). Assuming that the outputs are noisy, for a given sample l(t T i ) from this stochastic\nstate, the decoder g produces a distribution over Y Tti ∼ p ( yTi |g(l(tTi ), ti) ) parametrised by the decoder output. Concretely, for regression tasks, we take the target output to be normally distributed with constant (or optionally learned) variance Y Tti ∼ N ( yi|g(l(ti), ti), σ2 ) . When Y Tti is a random vector formed of independent binary random variables (e.g. a black and white image), we use a Bernoulli distribution Y Tti ∼ ∏dim(Y ) j=1 Bernoulli ( g(l(ti), ti)j ) .\nPutting everything together, for a set of observed context points C, the generative process of NDPs is given by the expression below, where we emphasise once again that l(ti) also implicitly depends on l(0) and d.\np(y1:n, l(0),d|t1:n,C) = p ( l(0)|C ) p(d|C) n∏ i=1 p ( yi|g(l(ti), ti) ) , (4)\nWe remark that NDPs generalise NPs defined over time. If the latent NODE learns the trivial velocity fθ(l(t),d, t) = 0, the random state L(t) = L(t0) remains constant at all times t. In this case, the distribution over functions is directly determined by L(t0) ∼ p(l(t0)|C), which substitutes the random variable Z from a regular NP. 
For greater flexibility, the control signal d can also be supplied to the decoder g(l(t),d, t). This shows that, in principle, NDPs are at least as expressive as NPs. Therefore, NDPs could be a sensible choice even in applications where the time-series are not solely determined by some underlying dynamics, but are also influenced by other generative factors." }, { "heading": "3.2 LEARNING AND INFERENCE", "text": "Since the true posterior is intractable because of the highly non-linear generative process, the model is trained using an amortised variational inference procedure. The variational lower-bound on the probability of the target values given the known context log p(yT|tT, yC) is as follows:\nE q ( l(t0),d|tT,yT )[∑ i∈IT log p(yi|l(t0),d, ti) + log qL(l(t0)|tC, yC) qL(l(t0)|tT, yT) + log qD(d|tC, yC) qD(d|tT, yT) ] , (5)\nwhere qL, qD give the variational posteriors (the encoders described in Section 3.1). The full derivation can be found in Appendix B. We use the reparametrisation trick to backpropagate the gradients of this loss. During training, we sample random contexts of different sizes to allow the model to become sensitive to the size of the context and the location of its points. We train using mini-batches composed of multiple contexts. For that, we use an extended ODE that concatenates the independent ODE states of each sample in the batch and integrates over the union of all the times in the batch (Rubanova et al., 2019). Pseudo-code for this training procedure is also given in Appendix C." }, { "heading": "3.3 MODEL VARIATIONS", "text": "Here we present the different ways to implement the model. The majority of the variation is in the architecture of the decoder. However, it is possible to vary the encoder such that fe((tCi ,y C i )) can be a multi-layer-perceptron, or additionally contain convolutions.\nNeural ODE Process (NDP) In this setup the decoder is an arbitrary function g(l(tTi ),d, tTi ) of the latent position at the time of interest, the control signal, and time. This type of model is particularly suitable for high-dimensional time-series where the dynamics are fundamentally latent. The inclusion of d in the decoder offers the model additional flexibility and makes it a good default choice for most tasks.\nSecond Order Neural ODE Process (ND2P) This variation has the same decoder architecture as NDP, however the latent ODE evolves according to a second order ODE. The latent state, l, is split into a “position”, l1 and “velocity”, l2, with l̇1 = l2 and l̇2 = fθ(l1, l2,d, t). This model is designed for time-series where the dynamics are second-order, which is often the case for physical systems (Çağatay Yıldız et al., 2019; Norcliffe et al., 2020).\nNDP Latent-Only (NDP-L) The decoder is a linear transformation of the latent state g(l(tTi )) = W (l(tTi )) + b. This model is suitable for the setting when the dynamics are fully observed (i.e. they are not latent) and, therefore, do not require any decoding. This would be suitable for simple functions generated by ODEs, for example, sines and exponentials. This decoder implicitly contains information about time and d because the ODE evolution depends on these variables as described in Equation 3.\nND2P Latent-Only (ND2P-L) This model combines the assumption of second-order dynamics with the idea that the dynamics are fully observed. The decoder is a linear layer of the latent state as in NDP-L and the phase space dynamics are constrained as in ND2P." 
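The objective in equation 5 reduces to the familiar amortised NP loss once the log-ratio terms are taken in expectation as KL divergences between the target- and context-conditioned posteriors. A minimal sketch of one training step is given below; it assumes a `model` object exposing hypothetical `posterior` and `decode` helpers (returning diagonal Normal distributions over L(t0) and D, and decoded means, respectively), which are placeholders rather than the paper's API.

```python
import torch
from torch.distributions import Normal, kl_divergence


def ndp_negative_elbo(model, t_ctx, y_ctx, t_tgt, y_tgt, sigma_y=0.1):
    """One-sample Monte Carlo estimate of the negative ELBO in equation 5."""
    q_l0_C, q_d_C = model.posterior(t_ctx, y_ctx)     # q(. | context)
    q_l0_T, q_d_T = model.posterior(t_tgt, y_tgt)     # q(. | target), train time only
    l0, d = q_l0_T.rsample(), q_d_T.rsample()         # reparametrisation trick
    y_pred = model.decode(l0, d, t_tgt)               # integrate latent ODE, then decode
    log_lik = Normal(y_pred, sigma_y).log_prob(y_tgt).sum()
    kl = kl_divergence(q_l0_T, q_l0_C).sum() + kl_divergence(q_d_T, q_d_C).sum()
    return kl - log_lik
```

During training, each mini-batch would draw a random context size per time-series before computing this loss, so that the model stays sensitive to how much of the series has been observed.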
}, { "heading": "3.4 NEURAL ODE PROCESSES AS STOCHASTIC PROCESSES", "text": "The Kolmogorov Extension Theorem states that exchangeability and consistency are necessary and sufficient conditions for a collection of joint marginal distributions to define a stochastic process Øksendal (2003); Garnelo et al. (2018b). We define these conditions and show that the NDP model satisfies them. The proofs can be found in Appendix A. Definition 3.1 (Exchangeability). Exchangeability refers to the invariance of the joint distribution ρt1:n(y1:n) under permutations of y1:n. That is, for a permutation π of {1, 2, ..., n}, π(t1:n) = (tπ(1), ..., tπ(n)) and π(y1:n) = (yπ(1), ...,yπ(n)), the joint probability distribution ρt1:n(y1:n) is invariant if ρt1:n(y1:n) = ρπ(t1:n)(π(y1:n)). Proposition 3.1. NDPs satisfy the exchangeability condition. Definition 3.2 (Consistency). Consistency says if a part of a sequence is marginalised out, then the joint probability distribution is the same as if it was only originally taken from the smaller sequence ρt1:m(y1:m) = ∫ ρt1:n(y1:n)dym+1:n.\nProposition 3.2. NDPs satisfy the consistency condition.\nIt is important to note that the stochasticity comes from sampling the latent l(0) and d. There is no stochasticity within the ODE, such as Brownian motion, though stochastic ODEs have previously been explored (Liu et al., 2019; Tzen & Raginsky, 2019; Jia & Benson, 2020; Li et al., 2020). For any given pair l(0) and d, both the latent state trajectory and the observation space trajectory are fully determined. We also note that outside the NP family and differently from our approach, NODEs have been used to generate continuous stochastic processes by transforming the density of another latent process (Deng et al., 2020)." }, { "heading": "3.5 RUNNING TIME COMPLEXITY", "text": "For a model with n context points andm target points, an NP has running time complexityO(n+m), since the model only has to encode each context point and decode each target point. However, a Neural ODE Process has added complexity due to the integration process. Firstly, the integration itself has runtime complexity O(NFE), where NFE is the number of function evaluations. In turn, the worst-case NFE depends on the minimum step size δ the ODE solver has to use and the maximum time we are interested in, which we denote by tmax. Secondly, for settings where the target times are not already ordered, an additional O ( m log(m) ) term is added for sorting them. This ordering is required by the ODE solver.\nTherefore, given that m ≥ n and assuming a constant tmax exists, the worst-case complexity of NDPs is O ( m log(m) ) . For applications where the times are already sorted (e.g. real-time applica-\ntions), the complexity falls back to the original O ( n+m ) . In either case, NDPs scale well with the\nsize of the input. We note, however, that the integration steps tmax/δ could result in a very large constant, hidden by the big-O notation. Nonetheless, modern ODE solvers use adaptive step sizes that\nadjust to the data that has been supplied and this should alleviate this problem. In our experiments, when sorting is used, we notice the NDP models are between 1 and 1.5 orders of magnitude slower to train than NPs in terms of wall-clock time. At the same time, this limitation of the method is traded-off by a significantly faster loss decay per epoch and superior final performance. We provide a table of time ratios from our 1D experiments, from section 4.1, in Appendix D." 
}, { "heading": "4 EXPERIMENTS", "text": "To test the proposed advantages of NDPs we carried out various experiments on time series data. For the low-dimensional experiments in Sections 4.1 and 4.2, we use an MLP architecture for the encoder and decoder. For the high-dimensional experiments in Section 4.3, we use a convolutional architecture for both. We train the models using RMSprop (Tieleman & Hinton, 2012) with learning rate 1× 10−3. Additional model and task details can be found in Appendices F and G, respectively." }, { "heading": "4.1 ONE DIMENSIONAL REGRESSION", "text": "We begin with a set of 1D regression tasks of differing complexity—sine waves, exponentials, straight lines and damped oscillators—that can be described by ODEs. For each task, the functions are determined by a set of parameters (amplitude, shift, etc) with pre-defined ranges. To generate the distribution over functions, we sample these parameters from a uniform distribution over their respective ranges. We use 490 time-series for training and evaluate on 10 separate test time-series. Each series contains 100 points. We repeat this procedure across 5 different random seeds to compute the standard error. Additional details can be found in Appendix G.1.\nThe left and middle panels of Figure 3 show how NPs and NDPs adapt on the sine task to incoming data points. When a single data-point has been supplied, NPs have incorrectly collapsed the distribution over functions to a set of almost horizontal lines. NDPs, on the other hand, are able to produce a wide range of possible trajectories. Even when a large number of points have been supplied, the NP posterior does not converge on a good fit, whereas NDPs correctly capture the true sine curve. In the right panel of Figure 3, we show the test-set MSE as a function of the training epoch. It can be seen that NDPs train in fewer iterations to a lower test loss despite having approximately 10% fewer parameters than NPs. We conducted an ablation study, training all model variants on all the 1D datasets, with final test MSE losses provided in Table 1 and training plots in Appendix G.1.\nWe find that NDPs either strongly outperform NPs (sine, linear), or their standard errors overlap (exponential, oscillators). For the exponential and harmonic oscillator tasks, where the models perform similarly, many points are close to zero in each example and as such it is possible to achieve a low MSE score by producing outputs that are also around zero. In contrast, the sine and linear datasets have a significant variation in the y-values over the range, and we observe that NPs perform considerably worse than the NDP models on these tasks.\nThe difference between NDP and the best of the other model variants is not significant across the set of tasks. As such, we consider only NDPs for the remainder of the paper as this is the least constrained model version: they have unrestricted latent phase-space dynamics, unlike the secondorder counterparts, and a more expressive decoder architecture, unlike the latent-only variants. In addition, NDPs train in a faster wall clock time than the other variants, as shown in Appendix D.\nActive Learning We perform an active learning experiment on the sines dataset to evaluate both the uncertainty estimates produced by the models and how well they adapt to new information. Provided with an initial context point, additional points are greedily queried according to the model uncertainty. 
Higher quality uncertainty estimation and better adaptation will result in more information being acquired at each step, and therefore a faster and greater reduction in error. As shown in Figure 4, NDPs also perform better in this setting." }, { "heading": "4.2 PREDATOR-PREY DYNAMICS", "text": "The Lotka-Volterra Equations are used to model the dynamics of a two species system, where one species predates on the other. The populations of the prey, u, and the predator, v, are given by the differential equations u̇ = αu − βuv, v̇ = δuv − γv, for positive real parameters, α, β, δ, γ. Intuitively, when prey is plentiful, the predator population increases (+δuv), and when there are many predators, the prey population falls (−βuv). The populations exhibit periodic behaviour, with the phase-space orbit determined by the conserved quantity V = δu − γ ln(u) + βv − α ln(v). Thus for any predator-prey system there exists a range of stable functions describing the dynamics, with any particular realisation being determined by the initial conditions, (u0, v0). We consider the system (α, β, γ, δ) = (2/3, 4/3, 1, 1).\nWe generate sample time-series from the Lotka Volterra system by considering different starting configurations; (u0, v0) = (2E,E), where E is sampled from a uniform distribution in the range (0.25, 1.0). The training set consists of 40 such samples, with a further 10 samples forming the test\nset. As before, each time series consists of 100 time samples and we evaluate across 5 different random seeds to obtain a standard error.\nWe find that NDPs are able to train in fewer epochs to a lower loss (Appendix G.2). We record final test MSEs (×10−2) at 44 ± 4 for the NPs and 15 ± 2 for the NDPs. As in the 1D tasks, NDPs perform better despite having a representation r and context z with lower dimensionality, leading to 10% fewer parameters than NPs. Figure 5 presents these advantages for a single time series." }, { "heading": "4.3 VARIABLE ROTATING MNIST", "text": "To test our model on high-dimensional time-series with latent dynamics, we consider the rotating MNIST digits (Casale et al., 2018; Çağatay Yıldız et al., 2019). In the original task, samples of digit “3” start upright and rotate once over 16 frames (= 360◦s−1) (i.e. constant angular velocity, zero angular shift). However, since we are interested in time-series with variable latent dynamics and increased variability in the initial conditions as in our formal problem statement, we consider a more challenging version of the task. In our adaptation, the angular velocity varies between samples in the range (360◦ ± 60◦)s−1 and each sample starts at a random initial rotation. To induce some irregularity in each time-series in the training dataset, we remove five randomly chosen time-steps (excluding the initial time t0) from each time-series. Overall, we generate a dataset with 1, 000 training time-series, 100 validation time-series and 200 test time-series, each using disjoint combinations of different calligraphic styles and dynamics. We compare NPs and NDPs using identical convolutional networks for encoding the images in the context. We assume that the initial image y0 (i.e. the image at t0) is always present in the context. As such, for NDPs, we compute the distribution of L0 purely by encoding y0 and disregarding the other samples in the context, as described in Section 3. We train the NP for 500 epochs and use the validation set error to checkpoint the best model for testing. 
We follow a similar procedure for the NDP model but, due to the additional computational load introduced by the integration operation, only train for 50 epochs.

Figure 6: Predictions on the test set of Variable Rotating MNIST. NDP is able to extrapolate beyond the training time range whereas NP cannot even learn to reconstruct the digit.

In Figure 6, we include the predictions offered by the two models on a time-series from the test dataset, which was not seen in training by either of the models. Despite the lower number of epochs they are trained for, NDPs are able to interpolate and even extrapolate on the variable velocity MNIST dataset, while also accurately capturing the calligraphic style of the digit. NPs struggle on this challenging task and are unable to produce anything resembling the digits. In order to better understand this wide performance gap, we also train in Appendix G.3 the exact same models on the easier Rotating MNIST task from Çağatay Yıldız et al. (2019), where the angular velocity and initial rotation are constant. In this setting, the two models perform similarly, since the NP model can rely on simple interpolations without learning any dynamics." }, { "heading": "5 DISCUSSION AND RELATED WORK", "text": "We now consider two perspectives on how Neural ODE Processes relate to existing work and discuss the model in these contexts.

NDPs as Neural Processes From the perspective of stochastic processes, NDPs can be seen as a generalisation of NPs defined over time and, as such, existing improvements in this family are likely orthogonal to our own. For instance, following the work of Kim et al. (2019), we would expect adding an attention mechanism to NDPs to reduce uncertainty around context points. Additionally, the intrinsic sequential nature of time could be further exploited to model a dynamically changing sequence of NDPs as in Sequential NPs (Singh et al., 2019). For application domains where the observations evolve on a graph structure, such as traffic networks, relational information could be exploited with message passing operations as in MPNPs (Day et al., 2020).

NDPs as Neural ODEs From a dynamics perspective, NDPs can be thought of as an amortised Bayesian Neural ODE. In this sense, ODE2VAE (Çağatay Yıldız et al., 2019) is the model that is most closely related to our method. While there are many common ideas between the two, significant differences exist. Firstly, NDPs do not use an explicit Bayesian Neural Network but are linked to them through the theoretical connections inherited from NPs (Garnelo et al., 2018b). NDPs handle uncertainty through latent variables, whereas ODE2VAE uses a distribution over the NODE’s weights. Secondly, NDPs stochastically condition the ODE derivative function and initial state on an arbitrary context set of variable size. In contrast, ODE2VAE conditions only the initial position and initial velocity on the first element and the first M elements in the sequence, respectively. Therefore, our model can dynamically adapt its dynamics to any observed time points. From that point of view, our model also runs parallel to other attempts at making Neural ODEs capable of dynamically adapting to irregularly sampled data (Kidger et al., 2020). We conclude this section by remarking that any Latent NODE, as originally defined in Chen et al. (2018), is also a stochastic process.
However, regular Latent NODEs are not trained to use a data-adaptive prior over the latent context, but use a fixed standard-normal prior. This corresponds to the case when C = T as remarked by Le et al. (2018). Additionally, they also only model an uncertainty in the initial position of the ODE, but do not consider an uncertainty in the derivative function." }, { "heading": "6 CONCLUSION", "text": "We introduce Neural ODE Processes (NDPs), a new class of stochastic processes suitable for modelling data-adaptive stochastic dynamics. First, NDPs tackle the two main problems faced by Neural ODEs applied to dynamics-governed time series: adaptability to incoming data points and uncertainty in the underlying dynamics when the data is sparse and, potentially, irregularly sampled. Second, they add an explicit treatment of time as an additional inductive bias inside Neural Processes. To do so, NDPs include a probabilistic ODE as an additional encoded structure, thereby incorporating the assumption that the time-series is the direct or latent manifestation of an underlying ODE. Furthermore, NDPs maintain the scalability of NPs to large inputs. We evaluate our model on synthetic 1D and 2D data, as well as higher-dimensional problems such as rotating MNIST digits. Our method exhibits superior training performance when compared with NPs, yielding a lower loss in fewer iterations. Whether or not the underlying ODE of the data is latent, we find that where there is a fundamental ODE governing the dynamics, NDPs perform well." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We’d like to thank Cătălina Cangea and Nikola Simidjievski for their feedback on an earlier version of this work, and Felix Opolka for many discussions in this area. We were greatly enabled and are indebted to the developers of a great number of open-source projects, most notably the torchdiffeq library. Jacob Moss is funded by a GSK grant." }, { "heading": "A STOCHASTIC PROCESS PROOFS", "text": "Before giving the proofs, we state the following important Lemma.\nLemma A.1. As in NPs, the decoder output g(l(t), t) can be seen as a function F(t) for a given fixed l(t0) and d. Proof. This follows directly from the fact that l(t) = l(t0) + ∫ T t0 fθ(l(t), t,d)dt can be seen as a function of t and that the integration process is deterministic for a given pair l(t0) and d (i.e. for fixed initial conditions and control).\nProposition 3.1 NDPs satisfy the exchangeability condition.\nProof. This follows directly from Lemma A.1, since any permutation on t1:n would automatically act on F1:n and consequently on p(y1:n, l(t0),d|t1:n), for any given l(t0),d.\nProposition 3.2 NDPs satisfy the consistency condition.\nProof. Based on Lemma A.1 we can write the joint distribution (similarly to a regular NP) as follows:\nρt1:n(y1:n) =\n∫ p(F) n∏ i=1 p(yi|F(ti))dF . (6)\nBecause the density of any yi depends only on the corresponding ti, integrating out any subset of y1:n gives the joint distribution of the remaining random variables in the sequence. Thus, consistency is guaranteed." }, { "heading": "B ELBO DERIVATION", "text": "As noted in Lemma A.1, the joint probability p(y, l(t0),d|t) = p(l(t0))p(d)p(y|g(l(t),d, t)) can still be seen as a function that depends only on t, since the ODE integration process is deterministic for a given l(t0) and d. Therefore, the ELBO derivation proceeds as usual (Garnelo et al., 2018b). For convenience, let z = (l(t0),d) denote the concatenation of the two latent vectors and q(z) = qL(l(t0))qD(d). 
First, we derive the ELBO for log p(yT|tT).\nlog p(yT|tT) = DKL ( q(z|tT, yT)‖p(z|tT, yT) ) + LELBO (7)\n≥ LELBO = Eq(z|tT,yT) [ − log q(z|tT, yT) + log p(yT, z|tT) ] (8)\n= −Eq(z|tT,yT) log q(z|tT, yT) + Eq(z|tT,yT) [ log p(z) + log p(yT|tT, z) ] (9)\n= Eq(z|tT,yT) [∑ i∈IT log p(yi|z, ti) + log p(z) q(z|tT, yT) ] (10)\nNoting that at training time, we want to maximise log p(yT|tT, yC). Using the derivation above, we obtain a similar lower-bound, but with a new prior p(z|tC, yC), updated to reflect the additional information supplied by the context.\nlog p(yT|tT, yC) ≥ Eq(z|tT,yT) [∑ i∈IT log p(yi|z, ti) + log p(z|tC, yC) q(z|tT, yT) ] (11)\nIf we approximate the true p(z|tC, yC) with the variational posterior, this takes the final form log p(yT|tT, yC) ≥ Eq(z|tT,yT) [∑ i∈IT log p(yi|z, ti) + log q(z|tC, yC) q(z|tT, yT) ] (12)\nSplitting z = (l(t0),d) back into its constituent parts, we obtain the loss function\nE q ( l(t0),d|tT,yT )[∑ i∈IT log p(yi|l(t0),d, ti) + log qL(l(t0)|tC, yC) qL(l(t0)|tT, yT) + log qD(d|tC, yC) qD(d|tT, yT) ] . (13)" }, { "heading": "C LEARNING AND INFERENCE PROCEDURE", "text": "We include below the pseudocode for training NDPs. For clarity of exposition, we give code for a single time-series. However, in practice, we batch all the operations in lines 6− 15.\nAlgorithm 1: Learning and Inference in Neural ODE Processes Input : A dataset of time-series {Xk}, k ≤ K, where K is the total number of time-series\n1 Initialise NDP model with parameters θ 2 Let m be the number of context points and n the number of extra target points 3 for i← 0 to training steps do 4 Sample m from U[1, max context points] 5 Sample n from U[1, max extra target points] 6 Uniformly sample a time-series Xk 7 Uniformly sample from Xk the target points T = (tT, yT), where tT is the time batch with shape (m+ n, 1) and yT is the corresponding outputs batch with shape (m+ n, dim(y)) 8 Extract the (unordered) context set C = T[0 : m] 9 Compute q(l(t0),d|C) using the variational encoder\n10 Compute q(l(t0),d|T) using the variational encoder // During training, we sample from q(l(t0),d|T) 11 Sample l(t0),d from q(l(t0),d|T) 12 Integrate to compute l(t) as in Equation 3 for all times t ∈ tT 13 foreach time t ∈ tT do 14 Use decoder to compute p(y(t)|g(l(t)), t) 15 Compute loss LELBO based on Equation 5 16 θ ←− θ − α∇θLELBO\nIt is worth highlighting that during training we sample l(t0),d from the target-conditioned posterior, rather than the context-conditioned posterior. In contrast, at inference time we sample from the context-conditioned posterior." }, { "heading": "D WALL CLOCK TRAINING TIMES", "text": "To explore the additional term in the runtime given in Section 3.5, we record the wall clock time for each model to train for 30 epochs on the 1D synthetic datasets, over 5 seeds. Then we take the ratio of a given model and the NP. The experiments were run on an Nvidia Titan XP. The results can be seen in Table 2." }, { "heading": "E SIZE OF LATENT ODE", "text": "To investigate how many dimensions the ODE l should have we carry out an ablation study, looking at the performance on the 1D sine dataset. We train models with l-dimension {1, 2, 5, 10, 15, 20} for 30 epochs. Figure 7 shows training plots for dim(l) = {1, 2, 10, 20}, and final MSE values are given in Table 3.\nWe see that when dim(l) = 1, NDPs are slow to train and require more epochs. 
This is because sine curves are second-order ODEs, and at least two dimensions are required to learn second-order dynamics (one for the position and one for the velocity). When dim(l) = 1, NDPs perform similarly to NPs, which is expected when the latent ODE is unable to capture the underlying dynamics. We then see that for all other dimensions, NDPs train at approximately the same rate (over epochs) and have similar final MSE scores. As the dimension increases beyond 10, the test MSE increases, indicating overfitting." }, { "heading": "F ARCHITECTURAL DETAILS", "text": "For the experiments with low dimensionality (1D, 2D), the architectural details are as follows:\n• Encoder: [ti, yi] −→ ri: Multilayer Perceptron, 2 hidden layers, ReLU activations. • Aggregator: r1:n −→ r: Taking the mean. • Representation to Hidden: r −→ h: One linear layer followed by ReLU. • Hidden to L(t0) Mean: h −→ µL: One linear layer. • Hidden to L(t0) Variance: h −→ σL: One linear layer, followed by sigmoid, multiplied by\n0.9 add 0.1, i.e. σL = 0.1 + 0.9× sigmoid(Wh+ b). • Hidden to D(t0) Mean: h −→ µL: One linear layer. • Hidden to D(t0) Variance: h −→ σD: One linear layer, followed by sigmoid, multiplied\nby 0.9 add 0.1, i.e. σD = 0.1 + 0.9× sigmoid(Wh+ b). • ODE Layers: [l,d, t] −→ l̇: Multilayer Perceptron, two hidden layers, tanh activations. • Decoder: g(l(tTi ),d, tTi ) −→ yTi , for the NDP model and ND2P described in section 3.3,\nthis function is a linear layer, acting on a concatenation of the latent state and a function of l(tTi ), d, and t T i . g(l(t T i ),d, t T i ) = W (l(t T i )||h(l(tTi ),d, tTi )) + b. Where h is a Multilayer Perceptron with two hidden layers and ReLU activations.\nFor the high-dimensional experiments (Rotating MNIST).\n• Encoder: [ti, yi] −→ ri: Convolutional Neural Network, 4 layers with 16, 32, 64, 128 channels respectively and kernel size of 5, stride 2. ReLU activations. Batch normalisation.\n• Aggregator: r1:n −→ r: Taking the mean. • Representation to D Hidden: r −→ hD: One linear layer followed by ReLU. • Hidden to D Mean: hD −→ µD: One linear layer. • Hidden to D Variance: hD −→ σz: One linear layer, followed by sigmoid, multiplied by\n0.9 add 0.1, i.e. σD = 0.1 + 0.9× sigmoid(Wh+ b). • y0 to L(t0) Hidden: y0 −→ hL: Convolutional Neural Network, 4 layers with 16, 32,\n64, 128 channels respectively and kernel size of 5, stride 2. ReLU activations. Batch normalisation.\n• L(t0) Hidden to L(t0) Mean: hL −→ µL: One linear layer. • L(t0) Hidden to L(t0) Variance: hL −→ σz: One linear layer, followed by sigmoid,\nmultiplied by 0.9 add 0.1, i.e. σL = 0.1 + 0.9× sigmoid(Wh+ b). • ODE Layers: [l,d, t] −→ l̇: Multilayer Perceptron, two hidden layers, tanh activations. • Decoder: g(l(tTi )) −→ yTi : 1 linear layer followed by a 4 layer transposed Convolutional\nNeural Network with 32, 128, 64, 32 channels respectively. ReLU activations. Batch normalisation." }, { "heading": "G TASK DETAILS AND ADDITIONAL RESULTS", "text": "G.1 ONE DIMENSIONAL REGRESSION\nWe carried out an ablation study over model variations on various 1D synthetic tasks—sines, exponentials, straight lines and harmonic oscillators. Each task is based on some function described by a set of parameters that are sampled over to produce a distribution over functions. In every case, the parameters are sampled from uniform distributions. 
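As an illustration of this sampling procedure, a minimal Python/NumPy sketch for the sine task is given below (the other tasks are generated analogously). The time range and the parameter ranges shown are placeholders rather than the exact values listed in Table 4.

import numpy as np

def sample_sine_series(num_series, num_points=100, seed=0):
    # Each series shares the same evenly spaced time grid; the function
    # parameters are drawn from uniform distributions over fixed ranges.
    rng = np.random.default_rng(seed)
    t = np.linspace(-np.pi, np.pi, num_points)
    dataset = []
    for _ in range(num_series):
        amplitude = rng.uniform(0.5, 2.0)  # placeholder range
        shift = rng.uniform(-0.5, 0.5)     # placeholder range
        dataset.append((t, amplitude * np.sin(t + shift)))
    return dataset

# Example: 490 training series and 10 test series, as in Section 4.1.
train_series = sample_sine_series(490, seed=0)
test_series = sample_sine_series(10, seed=1)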
A trajectory example is formed by sampling from the parameter distributions and then sampling from that function at evenly spaced timestamps, t, over a fixed range to produce 100 data points (t, y). We give the equations for these tasks in terms of their defining parameters and the ranges for these parameters in Table 4.\nTo test after each epoch, 10 random context points are taken, and then the mean-squared error and negative log probability are calculated over all the points (not just a subset of the target points). Each model was trained 5 times on each dataset (with different weight initialisation). We used a batch size of 5, with context size ranging from 1 to 10, and the extra target size ranging from 0 to 5.1 The results are presented in Figure 8.\n1As written in the problem statement in section 2, we make the context set a subset of the target set when training. So we define a context size range and an extra target size range for each task.\nAll models perform better than NPs, with fewer parameters (approximately 10% less). Because there are no significant differences between the different models, we use NDP in the remainder of the experiments, because it has the fewest model restrictions. The phase space dynamics are not restricted like its second-order variant, and the decoder has a more expressive architecture than the latent-only variants. It also trains the fastest in wall clock time seen in Appendix D.\nG.2 LOTKA-VOLTERRA SYSTEM\nTo generate samples from the Lotka Volterra system, we sample different starting configurations, (u0, v0) = (2E,E), where E is sampled from a uniform distribution in the range (0.25, 1.0). We then evolve the Lotka Volterra system\ndu dt = αu− βuv, dv dt = δuv − γv. (14)\nusing (α, β, γ, δ) = (2/3, 4/3, 1, 1). This is evolved from t = 0 to t = 15 and then the times are rescaled by dividing by 10.\nThe training for the Lotka-Volterra system can be seen in Figure 9. This was taken across 5 seeds, with a training set of 40 trajectories, 10 test trajectories and batch size 5. We use a context size ranging from 1 to 100, and extra target size ranging from 0 to 45. The test context size was fixed at 90 query times. NDP trains slightly faster with lower loss, as expected.\nG.3 ROTATING MNIST & ADDITIONAL RESULTS\nTo better understand what makes vanilla NPs fail on our Variable Rotating MNIST from Section 4.3, we train the exact same models on the simpler Rotating MNIST dataset (Çağatay Yıldız et al., 2019). In this dataset, all digits start in the same position and rotate with constant velocity. Additionally, the fourth rotation is removed from all the time-series in the training dataset. We follow the same training procedure as in Section 4.3.\nWe report in Figure 10 the predictions for the two models on a random time-series from the validation dataset. First, NPs and NDPs perform similarly well at interpolation and extrapolation within the time-interval used in training. As an exception but in agreement with the results from ODE2VAE, NDPs produces a slightly better reconstruction for the fourth time step in the time-series. Second, neither model is able to extrapolate the dynamics beyond the time-range seen in training (i.e. the last five time-steps).\nOverall, these observations suggest that for the simpler RotMNIST dataset, explicit modelling of the dynamics is not necessary and the tasks can be learnt easily by interpolating between the context points. 
And indeed, it seems that even NDPs, which should be able to learn solutions that extrapolate, collapse onto these simpler solutions present in the parameter space, instead of properly learning the desired latent dynamics. A possible explanation is that the Variable Rotating MNIST dataset can be seen as an image augmentation process which makes the convolutional features approximately rotation equivariant. In this way, the NDP can also learn rotation dynamics in the spatial dimensions of the convolutional features.

Figure 10: Predictions on the simpler Rotating MNIST dataset. NPs are also able to perform well on this task, but NDPs are not able to extrapolate beyond the maximum training time.

Finally, in Figure 11, we plot the reconstructions of different digit styles on the test dataset of Variable Rotating MNIST. This confirms that NDPs are able to capture different calligraphic styles.

G.4 HANDWRITTEN CHARACTERS

The CharacterTrajectories dataset consists of single-stroke handwritten digits recorded using an electronic tablet (Williams et al., 2006; Dua & Graff, 2017). The trajectories of the pen tip in two dimensions, (x, y), are of varying length, with a force cut-off used to determine the start and end of a stroke. We consider a reduced dataset, containing only letters that were written in a single stroke; this disregards letters such as “f”, “i” and “t”. Whilst it is not obvious that character trajectories should follow an ODE, the related Neural Controlled Differential Equation (NCDE) model has been applied successfully to this task (Kidger et al., 2020). We train with a training set of 49600 examples, a test set of 400 examples and a batch size of 200. We use a context size ranging between 1 and 100, an extra target size ranging between 0 and 100 and a fixed test context size of 20. We visualise the training of the models in Figure 12 and the posteriors plotted by the models in Figure 13.

We observe that NPs and NDPs are unable to learn the time series as successfully as NCDEs. We record final test MSEs (×10−1) at 4.6±0.1 for NPs and a slightly lower 3.4±0.1 for NDPs. We believe this is because handwritten digits do not follow an inherent ODE solution, especially given the diversity of handwriting styles for the same letter. We conjecture that Neural Controlled Differential Equations were able to perform well on this dataset due to the control process. Controlled ODEs follow the equation:

z(T) = z(t0) + ∫_{t0}^{T} fθ(z(t), t) (dX(t)/dt) dt, z(t0) = h1(x(t0)), x̂(T) = h2(z(T)) (15)

where X(t) is the natural cubic spline through the observed points x(t). If the learnt fθ is an identity operation, then the result returned will be the cubic spline through the observed points. Therefore, a controlled ODE can learn an identity with a small perturbation, which is easier to learn with the aid of a control process, rather than learning the entire ODE trajectory." } ]
2021
NEURAL ODE PROCESSES
SP:1c2c08605956eb4660a8f8a33ce13e80276582ed
[ "This paper proposes a data-driven approach to choose an informative surrogate sub-dataset, termed \"a \\epsilon-approximation\", from the original data set. A meta-learning algorithm called Kernel Inducing Points (KIP ) is proposed to obtain such sub-datasets for (Linear) Kernel Ridge Regression (KRR), with the potential to extend to other machine learning algorithms such as neural networks. Some theoretical results are provided for the KRR with a linear kernel. The empirical performance of the proposed algorithm is evaluated by experiments based on synthetic data and some standard benchmark data sets. " ]
One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of ε-approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training data while maintaining similar model performance. We introduce a meta-learning algorithm called Kernel Inducing Points (KIP) for obtaining such remarkable datasets, inspired by the recent developments in the correspondence between infinitely-wide neural networks and kernel ridge-regression (KRR). For KRR tasks, we demonstrate that KIP can compress datasets by one or two orders of magnitude, significantly improving previous dataset distillation and subset selection methods while obtaining state of the art results for MNIST and CIFAR-10 classification. Furthermore, our KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime, which leads to state of the art results for neural network dataset distillation with potential applications to privacy-preservation.
[ { "affiliations": [], "name": "Timothy Nguyen" }, { "affiliations": [], "name": "Zhourong Chen" }, { "affiliations": [], "name": "Jaehoon Lee" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Oct 2016", "year": 2016 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang", "Dingli Yu" ], "title": "Harnessing the power of infinitely wide deep nets on small-data tasks", "venue": "arXiv preprint arXiv:1910.01663,", "year": 2019 }, { "authors": [ "Ondrej Bohdal", "Yongxin Yang", "Timothy Hospedales" ], "title": "Flexible dataset distillation: Learn labels instead of images", "venue": "arXiv preprint arXiv:2006.08572,", "year": 2020 }, { "authors": [ "Antoine Bordes", "Seyda Ertekin", "Jason Weston", "Léon Bottou" ], "title": "Fast kernel classifiers with online and active learning", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Zalán Borsos", "Mojmı́r Mutnỳ", "Andreas Krause" ], "title": "Coresets via bilevel optimization for continual learning and streaming", "venue": "arXiv preprint arXiv:2006.03875,", "year": 2020 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "Petros Drineas", "Michael W Mahoney" ], "title": "On the nyström method for approximating a gram matrix for improved kernel-based learning", "venue": "journal of machine learning research,", "year": 2005 }, { "authors": [ "Adrià Garriga-Alonso", "Laurence Aitchison", "Carl Edward Rasmussen" ], "title": "Deep convolutional networks as shallow gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "T.N.E. Greville" ], "title": "Note on the generalized inverse of a matrix product", "venue": "SIAM Review,", "year": 1966 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J. Tibshirani" ], "title": "Surprises in highdimensional ridgeless least squares interpolation, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yangsibo Huang", "Zhao Song", "Kai Li", "Sanjeev Arora" ], "title": "Instahide: Instance-hiding schemes for private distributed learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ibrahim Jubran", "Alaa Maalouf", "Dan Feldman" ], "title": "Introduction to coresets: Accurate coresets", "venue": "arXiv preprint arXiv:1910.08707,", "year": 2019 }, { "authors": [ "T. 
Kato" ], "title": "Perturbation Theory of Linear Operators", "venue": "Springer-Verlag, Berlin,", "year": 1976 }, { "authors": [ "Mahmut Kaya", "H.s Bilge" ], "title": "Deep metric learning: A survey", "venue": "Symmetry, 11:1066,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Jaehoon Lee", "Yasaman Bahri", "Roman Novak", "Sam Schoenholz", "Jeffrey Pennington", "Jascha Sohldickstein" ], "title": "Deep neural networks as gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel S. Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha SohlDickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jaehoon Lee", "Samuel S Schoenholz", "Jeffrey Pennington", "Ben Adlam", "Lechao Xiao", "Roman Novak", "Jascha Sohl-Dickstein" ], "title": "Finite versus infinite neural networks: an empirical study", "venue": null, "year": 2007 }, { "authors": [ "Jonathan Lorraine", "Paul Vicol", "David Duvenaud" ], "title": "Optimizing millions of hyperparameters by implicit differentiation", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Julien Mairal", "Piotr Koniusz", "Zaid Harchaoui", "Cordelia Schmid" ], "title": "Convolutional kernel networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Alexander G. de G. Matthews", "Jiri Hron", "Mark Rowland", "Richard E. Turner", "Zoubin Ghahramani" ], "title": "Gaussian process behaviour in wide deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Radford M. Neal" ], "title": "Priors for infinite networks (tech", "venue": "rep. no. crg-tr-94-1). University of Toronto,", "year": 1994 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jaehoon Lee", "Yasaman Bahri", "Greg Yang", "Jiri Hron", "Daniel A. Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Bayesian deep convolutional networks with many channels are gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jiri Hron", "Jaehoon Lee", "Alexander A. Alemi", "Jascha Sohl-Dickstein", "Samuel S. 
Schoenholz" ], "title": "Neural tangents: Fast and easy infinite neural networks in python", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jeff M Phillips" ], "title": "Coresets and sketches", "venue": "arXiv preprint arXiv:1601.00617,", "year": 2016 }, { "authors": [ "Vaishaal Shankar", "Alex Chengyu Fang", "Wenshuo Guo", "Sara Fridovich-Keil", "Ludwig Schmidt", "Jonathan Ragan-Kelley", "Benjamin Recht" ], "title": "Neural kernels without tangents", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Sam Shleifer", "Eric Prokop" ], "title": "Using small proxy datasets to accelerate hyperparameter search", "venue": "arXiv preprint arXiv:1906.04887,", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Edward Snelson", "Zoubin Ghahramani" ], "title": "Sparse gaussian processes using pseudo-inputs", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Jascha Sohl-Dickstein", "Roman Novak", "Samuel S Schoenholz", "Jaehoon Lee" ], "title": "On the infinite width limit of neural networks with a standard parameterization", "venue": "arXiv preprint arXiv:2001.07301,", "year": 2020 }, { "authors": [ "Ingo Steinwart" ], "title": "Sparseness of support vector machines", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Ilia Sucholutsky", "Matthias Schonlau" ], "title": "Soft-label dataset distillation and text dataset distillation", "venue": "arXiv preprint arXiv:1910.02551,", "year": 2019 }, { "authors": [ "Michalis Titsias" ], "title": "Variational learning of inducing variables in sparse gaussian processes", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy P. Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "CoRR, abs/1606.04080,", "year": 2016 }, { "authors": [ "Tongzhou Wang", "Jun-Yan Zhu", "Antonio Torralba", "Alexei" ], "title": "A Efros. Dataset distillation", "venue": "arXiv preprint arXiv:1811.10959,", "year": 2018 }, { "authors": [ "Christopher KI Williams", "Matthias Seeger" ], "title": "Using the nyström method to speed up kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2001 }, { "authors": [ "Bo Zhao", "Konda Reddy Mopuri", "Hakan Bilen" ], "title": "Dataset condensation with gradient matching", "venue": "arXiv preprint arXiv:2006.05929,", "year": 2020 }, { "authors": [ "Novak" ], "title": "2020) can be parameterized in either the “NTK” parameterization or “standard” parameterization Sohl-Dickstein et al", "venue": null, "year": 2020 }, { "authors": [ "Lee" ], "title": "ZCA preprocessing was used for a Myrtle-10 kernel (denoted with ZCA) on CIFAR-10 dataset", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of - approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training data while maintaining similar model performance. We introduce a meta-learning algorithm called Kernel Inducing Points (KIP ) for obtaining such remarkable datasets, inspired by the recent developments in the correspondence between infinitely-wide neural networks and kernel ridge-regression (KRR). For KRR tasks, we demonstrate that KIP can compress datasets by one or two orders of magnitude, significantly improving previous dataset distillation and subset selection methods while obtaining state of the art results for MNIST and CIFAR-10 classification. Furthermore, our KIP -learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime, which leads to state of the art results for neural network dataset distillation with potential applications to privacy-preservation." }, { "heading": "1 INTRODUCTION", "text": "Datasets are a pivotal component in any machine learning task. Typically, a machine learning problem regards a dataset as given and uses it to train a model according to some specific objective. In this work, we depart from the traditional paradigm by instead optimizing a dataset with respect to a learning objective, from which the resulting dataset can be used in a range of downstream learning tasks.\nOur work is directly motivated by several challenges in existing learning methods. Kernel methods or instance-based learning (Vinyals et al., 2016; Snell et al., 2017; Kaya & Bilge, 2019) in general require a support dataset to be deployed at inference time. Achieving good prediction accuracy typically requires having a large support set, which inevitably increases both memory footprint and latency at inference time—the scalability issue. It can also raise privacy concerns when deploying a support set of original examples, e.g., distributing raw images to user devices. Additional challenges to scalability include, for instance, the desire for rapid hyper-parameter search (Shleifer & Prokop, 2019) and minimizing the resources consumed when replaying data for continual learning (Borsos et al., 2020). A valuable contribution to all these problems would be to find surrogate datasets that can mitigate the challenges which occur for naturally occurring datasets without a significant sacrifice in performance.\nThis suggests the following\nQuestion: What is the space of datasets, possibly with constraints in regards to size or signal preserved, whose trained models are all (approximately) equivalent to some specific model?\nIn attempting to answer this question, in the setting of supervised learning on image data, we discover a rich variety of datasets, diverse in size and human interpretability while also robust to model architectures, which yield high performance or state of the art (SOTA) results when used as training data. We obtain such datasets through the introduction of a novel meta-learning algorithm called Kernel Inducing Points (KIP ). Figure 1 shows some example images from our learned datasets.\nWe explore KIP in the context of compressing and corrupting datasets, validating its effectiveness in the setting of kernel-ridge regression (KRR) and neural network training on benchmark datasets MNIST and CIFAR-10. 
Our contributions can be summarized as follows:" }, { "heading": "1.1 SUMMARY OF CONTRIBUTIONS", "text": "• We formulate a novel concept of -approximation of a dataset. This provides a theoretical framework for understanding dataset distillation and compression.\n• We introduce Kernel Inducing Points (KIP ), a meta-learning algorithm for obtaining - approximation of datasets. We establish convergence in the case of a linear kernel in Theorem 1. We also introduce a variant called Label Solve (LS ), which gives a closed-form solution for obtaining distilled datasets differing only via labels.\n• We explore the following aspects of -approximation of datasets:\n1. Compression (Distillation) for Kernel Ridge-Regression: For kernel ridge regression, we improve sample efficiency by over one or two orders of magnitude, e.g. using 10 images to outperform hundreds or thousands of images (Tables 1, 2 vs Tables A1, A2). We obtain state of the art results for MNIST and CIFAR-10 classification while using few enough images (10K) to allow for in-memory inference (Tables A3, A4).\n2. Compression (Distillation) for Neural Networks: We obtain state of the art dataset distillation results for the training of neural networks, often times even with only a single hidden layer fully-connected network (Tables 1 and 2).\n3. Privacy: We obtain datasets with a strong trade-off between corruption and test accuracy, which suggests applications to privacy-preserving dataset creation. In particular, we produce images with up to 90% of their pixels corrupted with limited degradation in performance as measured by test accuracy in the appropriate regimes (Figures 3, A3, and Tables A5-A10) and which simultaneously outperform natural images, in a wide variety of settings.\n• We provide an open source implementation of KIP and LS , available in an interactive Colab notebook1." }, { "heading": "2 SETUP", "text": "In this section we define some key concepts for our methods.\n1https://colab.research.google.com/github/google-research/google-research/blob/master/kip/KIP.ipynb\nDefinition 1. A dataset in Rd is a set of n distinct vectors in Rd for some n ≥ 1. We refer to each such vector as a datapoint. A dataset is labeled if each datapoint is paired with a label vector in RC , for some fixed C. A datapoint along with its corresponding label is a labeled datapoint. We use the notation D = (X, y), where X ∈ Rn×d and y ∈ Rn×C , to denote the tuple of unlabeled datapoints X with their corresponding labels y.\nWe henceforth assume all datasets are labeled. Next, we introduce our notions of approximation, both of functions (representing learned algorithms) and of datasets, which are characterized in terms of performance with respect to a loss function rather than closeness with respect to a metric. A loss function ` : RC × RC → R is one that is nonnegative and satisfies `(z, z) = 0 for all z. Definition 2. Fix a loss function ` and let f, f̃ : Rd → RC be two functions. Let ≥ 0.\n1. Given a distribution P on Rd × RC , we say f and f̃ are weakly -close with respect to (`,P) if ∣∣∣E(x,y)∼P(`(f(x), y))− E(x,y)∼P(`(f̃(x), y))∣∣∣ ≤ . (1)\n2. Given a distribution P on Rd we say f and f̃ are strongly -close with respect to (`,P) if Ex∼P ( `(f(x), f̃(x)) ) ≤ . (2)\nWe drop explicit reference to (`,P) if their values are understood or immaterial.\nGiven a learning algorithm A (e.g. gradient descent with respect to the loss function of a neural network), let AD denote the resulting model obtained after training A on D. 
We regard AD as a mapping from datapoints to prediction labels.\nDefinition 3. Fix learning algorithms A and Ã. Let D and D̃ be two labeled datasets in Rd with label space RC . Let ≥ 0. We say D̃ is a weak -approximation of D with respect to (Ã, A, `,P) if ÃD̃ and AD are weakly -close with respect to (`,P), where ` is a loss function and P is a distribution on Rd×RC . We define strong -approximation similarly. We drop explicit reference to (some of) the Ã, A, `,P if their values are understood or immaterial.\nWe provide some justification for this definition in the Appendix. In this paper, we will measure - approximation with respect to 0-1 loss for multiway classification (i.e. accuracy). We focus on weak -approximation, since in most of our experiments, we consider models in the low-data regime with large classification error rates, in which case, sample-wise agreement of two models is not of central importance. On the other hand, observe that if two models have population classification error rates less than /2, then (2) is automatically satisfied, in which case, the notions of weak-approximation and strong-approximation converge.\nWe list several examples of -approximation, with = 0, for the case when à = A are given by the following:\nExample 1: Support Vector Machines. Given a datasetD of sizeN , train an SVM onD and obtain M support vectors. TheseM support vectors yield a dataset D̃ that is a strong 0-approximation toD in the linearly separable case, while for the nonseparable case, one has to also include the datapoints with positive slack. Asymptotic lower bounds asserting M = O(N) have been shown in Steinwart (2003).2\nExample 2: Ridge Regression. Any two datasets D and D̃ that determine the same ridge-regressor are 0-approximations of each other. In particular, in the scalar case, we can obtain arbitrarily small 0-approximating D̃ as follows. Given training data D = (X, y) in Rd, the corresponding ridgeregressor is the predictor\nx∗ 7→ w · x∗, (3) w = Φλ(X)y, (4)\nΦλ(X) = X T (XXT + λI)−1 (5)\n2As a specific example, many thousands of support vectors are needed for MNIST classification (Bordes et al. (2005)).\nwhere for λ = 0, we interpret the inverse as a pseudoinverse. It follows that for any givenw ∈ Rd×1, we can always find (X̃, ỹ) of arbitrary size (i.e. X̃ ∈ Rn×d, y ∈ Rn×1 with n arbitrarily small) that satisfiesw = Φλ(X̃)ỹ. Simply choose X̃ such thatw is in the range of Φλ(X̃). The resulting dataset (X̃, ỹ) is a 0-approximation to D. If we have a C-dimensional regression problem, the preceding analysis can be repeated component-wise in label-space to show 0-approximation with a dataset of size at least C (since then the rank of Φλ(X̃) can be made at least the rank of w ∈ Rd×C). We are interested in learning algorithms given by KRR and neural networks. These can be investigated in unison via neural tangent kernels. Furthermore, we study two settings for the usage of -approximate datasets, though there are bound to be others:\n1. (Sample efficiency / compression) Fix . What is the minimum size of D̃ needed in order for D̃ to be an -approximate dataset?\n2. (Privacy guarantee) Can an -approximate dataset be found such that the distribution from which it is drawn and the distribution from which the original training dataset is drawn satisfy a given upper bound in mutual information?\nMotivated by these questions, we introduce the following definitions:\nDefinition 4. 
(Heuristic) Let D̃ and D be two datasets such that D̃ is a weak -approximation of D, with |D̃| ≤ |D| and small. We call |D|/|D̃| the compression ratio.\nIn other words, the compression ratio is a measure of how well D̃ compresses the information available in D, as measured by approximate agreement of their population loss. Our definition is heuristic in that is not precisely quantified and so is meant as a soft measure of compression. Definition 5. Let Γ be an algorithm that takes a dataset D in Rd and returns a (random) collection of datasets in Rd. For 0 ≤ ρ ≤ 1, we say that Γ is ρ-corrupted if for any input dataset D, every datapoint3 drawn from the datasets of Γ(D) has at least ρ fraction of its coordinates independent of D.\nIn other words, datasets produced by Γ have ρ fraction of its entries contain no information about the dataset D (e.g. because they have a fixed value or are filled in randomly). Corrupting information is naturally a way of enhancing privacy, as it makes it more difficult for an attacker to obtain useful information about the data used to train a model. Adding noise to the inputs to neural network or of its gradient updates can be shown to provide differentially private guarantees (Abadi et al. (2016))." }, { "heading": "3 KERNEL INDUCING POINTS", "text": "Given a dataset D sampled from a distribution P , we want to find a small dataset D̃ that is an - approximation to D (or some large subset thereof) with respect to (Ã, A, `,P). Focusing on à = A for the moment, and making the approximation\nE(x,y)∈P `(ÃD̃(x), y) ≈ E(x,y)∈D `(ÃD̃(x), y), (6)\nthis suggests we should optimize the right-hand side of (6) with respect to D̃, using D as a validation set. For general algorithms Ã, the outer optimization for D̃ is computationally expensive and involves second-order derivatives, since one has to optimize over the inner loop encoded by the learning algorithm Ã. We are thus led to consider the class of algorithms drawn from kernel ridgeregression. The reason for this are two-fold. First, KRR performs convex-optimization resulting in a closed-form solution, so that when optimizing for the training parameters of KRR (in particular, the support data), we only have to consider first-order optimization. Second, since KRR for a neural tangent kernel (NTK) approximates the training of the corresponding wide neural network (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019a; Lee et al., 2020), we expect the use of neural kernels to yield -approximations of D for learning algorithms given by a broad class of neural networks trainings as well. (This will be validated in our experiments.)\n3We ignore labels in our notion of ρ-corrupted since typically the label space has much smaller dimension than that of the datapoints.\nAlgorithm 1: Kernel Inducing Point (KIP ) Require: A target labeled dataset (Xt, yt) along with a kernel or family of kernels.\n1: Initialize a labeled support set (Xs, ys). 2: while not converged do 3: Sample a random kernel. Sample a random batch (X̄s, ȳs) from the support set. Sample a random batch (X̄t, ȳt) from the target dataset. 4: Compute the kernel ridge-regression loss given by (7) using the sampled kernel and the sampled support and target data. 5: Backpropagate through X̄s (and optionally ȳs and any hyper-parameters of the kernel) and update the support set (Xs, ys) by updating the subset (X̄s, ȳs). 
6: end while 7: return Learned support set (Xs, ys)\nThis leads to our first-order meta-learning algorithm KIP (Kernel Inducing Points), which uses kernel-ridge regression to learn -approximate datasets. It can be regarded as an adaption of the inducing point method for Gaussian processes (Snelson & Ghahramani, 2006) to the case of KRR. Given a kernel K, the KRR loss function trained on a support dataset (Xs, ys) and evaluated on a target dataset (Xt, yt) is given by\nL(Xs, ys) = 1\n2 ‖yt −KXtXs(KXsXs + λI)−1ys‖2, (7)\nwhere if U and V are sets, KUV is the matrix of kernel elements (K(u, v))u∈U,v∈V . Here λ > 0 is a fixed regularization parameter. The KIP algorithm consists of optimizing (7) with respect to the support set (either just the Xs or along with the labels ys), see Algorithm 1. Depending on the downstream task, it can be helpful to use families of kernels (Step 3) because then KIP produces datasets that are -approximations for a variety of kernels instead of a single one. This leads to a corresponding robustness for the learned datasets when used for neural network training. We remark on best experimental practices for sampling methods and initializations for KIP in the Appendix. Theoretical analysis for the convergence properties of KIP for the case of a linear kernel is provided by Theorem 1. Sample KIP -learned images can be found in Section F.\nKIP variations: i) We can also randomly augment the sampled target batches in KIP . This effectively enhances the target dataset (Xt, yt), and we obtain improved results in this way, with no extra computational cost with respect to the support size. ii) We also can choose a corruption fraction 0 ≤ ρ < 1 and do the following. Initialize a random ρ-percent of the coordinates of each support datapoint via some corruption scheme (zero out all such pixels or initialize with noise). Next, do not update such corrupted coordinates during the KIP training algorithm (i.e. we only perform gradient updates on the complementary set of coordinates). Call this resulting algorithm KIPρ. In this way, KIPρ is ρ-corrupted according to Definition 5 and we use it to obtain our highly corrupted datasets.\nLabel solving: In addition to KIP , where we learn the support dataset via gradient descent, we propose another inducing point method, Label Solve (LS ), in which we directly find the minimum of (7) with respect to the support labels while holding Xs fixed. This is simple because the loss function is quadratic in ys. We refer to the resulting labels\ny∗s = Φ0 ( KXtXs(KXsXs + λI) −1 ) yt (8)\nas solved labels. As Φ0 is the pseudo-inverse operation, y∗s is the minimum-norm solution among minimizers of (7). IfKXtXs is injective, using the fact that Φ0(AB) = Φ0(B)Φ0(A) forA injective and B surjective (Greville (1966)), we can rewrite (8) as\ny∗s = (KXsXs + λI)Φ0(KXtXs) yt." }, { "heading": "4 EXPERIMENTS", "text": "We perform three sets of experiments to validate the efficacy of KIP and LS for dataset learning. The first set of experiments investigates optimizing KIP and LS for compressing datasets and achieving\nstate of the art performance for individual kernels. The second set of experiments explores transferability of such learned datasets across different kernels. The third set of experiments investigate the transferability of KIP -learned datasets to training neural networks. The overall conclusion is that KIP -learned datasets, even highly corrupted versions, perform well in a wide variety of settings. 
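Before turning to the individual experiments, we note that the quantities at the core of KIP and LS reduce to a few lines of linear algebra. The sketch below evaluates the KRR loss of Equation 7 and the solved labels of Equation 8, using an RBF kernel as a stand-in for the neural kernels; it is an illustration under these simplifying assumptions rather than the Neural Tangents/JAX implementation used for our results.

import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    # Stand-in kernel; the experiments use neural kernels (NTK/NNGP) instead.
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def krr_loss(Xs, ys, Xt, yt, lam=1e-6):
    # Equation (7): 0.5 * ||yt - K_ts (K_ss + lam I)^{-1} ys||^2, with the
    # scale-invariant regularisation lam * tr(K_ss)/n used in practice (Appendix B).
    Kss = rbf_kernel(Xs, Xs)
    Kts = rbf_kernel(Xt, Xs)
    reg = lam * np.trace(Kss) / len(Xs)
    preds = Kts @ np.linalg.solve(Kss + reg * np.eye(len(Xs)), ys)
    return 0.5 * np.sum((yt - preds) ** 2)

def label_solve(Xs, Xt, yt, lam=1e-6):
    # Equation (8): ys* = pinv(K_ts (K_ss + lam I)^{-1}) yt.
    Kss = rbf_kernel(Xs, Xs)
    Kts = rbf_kernel(Xt, Xs)
    reg = lam * np.trace(Kss) / len(Xs)
    A = Kts @ np.linalg.inv(Kss + reg * np.eye(len(Xs)))
    return np.linalg.pinv(A) @ yt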
Experimental details can be found in the Appendix.\nWe focus on MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky et al., 2009) datasets for comparison to previous methods. For LS , we also use Fashion-MNIST. These classification tasks are recast as regression problems by using mean-centered one-hot labels during training and by making class predictions via assigning the class index with maximal predicted value during testing. All our kernel-based experiments use the Neural Tangents library (Novak et al., 2020), built on top of JAX (Bradbury et al., 2018). In what follows, we use FCm and Convm to denote a depth m fully-connected or fully-convolutional network. Whether we mean a finite-width neural network or else the corresponding neural tangent kernel (NTK) will be understood from the context. We will sometimes also use the neural network Gaussian process (NNGP) kernel associated to a neural network in various places. By default, a neural kernel refers to NTK unless otherwise stated. RBF denotes the radial-basis function kernel. Myrtle-N architecture follows that of Shankar et al. (2020), where an N -layer neural network consisting of a simple combination of N − 1 convolutional layers along with (2, 2) average pooling layers are inter-weaved to reduce internal patch-size.\nWe would have used deeper and more diverse architectures for KIP , but computational limits, which will be overcome in future work, placed restrictions, see the Experiment Details in Section D." }, { "heading": "4.1 SINGLE KERNEL RESULTS", "text": "We apply KIP to learn support datasets of various sizes for MNIST and CIFAR-10. The objective is to distill the entire training dataset down to datasets of various fixed, smaller sizes to achieve high compression ratio. We present these results against various baselines in Tables 1 and 2. These comparisons occur cross-architecturally, but aside from Myrtle LS results, all our results involve the simplest of kernels (RBF or FC1), whereas prior art use deeper architectures (LeNet, AlexNet, ConvNet).\nWe obtain state of the art results for KRR on MNIST and CIFAR-10, for the RBF and FC1 kernels, both in terms of accuracy and number of images required, see Tables 1 and 2. In particular, our method produces datasets such that RBF and FC1 kernels fit to them rival the performance of deep convolutional neural networks on MNIST (exceeding 99.2%). By comparing Tables 2 and A2, we see that, e.g. 10 or 100 KIP images for RBF and FC1 perform on par with tens or hundreds times more natural images, resulting in a compression ratio of one or two orders of magnitude.\nFor neural network trainings, for CIFAR-10, the second group of rows in Table 2 shows that FC1 trained on KIP images outperform prior art, all of which have deeper, more expressive architectures. On MNIST, we still outperform some prior baselines with deeper architectures. This, along with the state of the art KRR results, suggests that KIP , when scaled up to deeper architectures, should continue to yield strong neural network performance.\nFor LS , we use a mix of NNGP kernels4 and NTK kernels associated to FC1, Myrtle-5, Myrtle-10 to learn labels on various subsets of MNIST, Fashion-MNIST, and CIFAR-10. Our results comprise the bottom third of Tables 1 and 2 and Figure 2. As Figure 2 shows, the more targets are used, the better the performance. When all possible targets are used, we get an optimal compression ratio of roughly one order of magnitude at intermediate support sizes." 
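To spell out the evaluation protocol above (mean-centered one-hot targets and argmax class prediction), a minimal NumPy sketch is given below, mirroring the KRR computation in the earlier sketch; kernel_fn stands for any kernel, e.g. an RBF or a neural kernel, and the helper names are ours rather than part of the released code.

import numpy as np

def mean_centered_one_hot(labels, num_classes=10):
    # Regression targets: one-hot labels with the class mean 1/C subtracted.
    return np.eye(num_classes)[labels] - 1.0 / num_classes

def krr_classify(Xs, ys, X_test, kernel_fn, lam=1e-6):
    # Fit kernel ridge-regression on the (possibly distilled) support set and
    # predict the class with the maximal regressed value.
    Kss = kernel_fn(Xs, Xs)
    Kts = kernel_fn(X_test, Xs)
    reg = lam * np.trace(Kss) / len(Xs)
    preds = Kts @ np.linalg.solve(Kss + reg * np.eye(len(Xs)), ys)
    return np.argmax(preds, axis=1)

# Test accuracy is then np.mean(krr_classify(Xs, ys, X_test, kernel_fn) == test_labels).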
}, { "heading": "4.2 KERNEL TO KERNEL RESULTS", "text": "Here we investigate robustness of KIP and LS learned datasets when there is variation in the kernels used for training and testing. We draw kernels coming from FC and Conv layers of depths 1-3, since such components form the basic building blocks of neural networks. Figure A1 shows that KIP - datasets trained with random sampling of all six kernels do better on average than KIP -datasets trained using individual kernels.\n4For FC1, NNGP and NTK perform comparably whereas for Myrtle, NNGP outperforms NTK.\nFor LS , transferability between FC1 and Myrtle-10 kernels on CIFAR-10 is highly robust, see Figure A2. Namely, one can label solve using FC1 and train Myrtle-10 using those labels and vice versa. There is only a negligible difference in performance in nearly all instances between data with transferred learned labels and with natural labels." }, { "heading": "4.3 KERNEL TO NEURAL NETWORKS RESULTS", "text": "Significantly, KIP -learned datasets, even with heavy corruption, transfer remarkably well to the training of neural networks. Here, corruption refers to setting a random ρ fraction of the pixels of each image to uniform noise between −1 and 1 (for KIP , this is implemented via KIPρ )5. The deterioriation in test accuracy for KIP -images is limited as a function of the corruption fraction, especially when compared to natural images, and moreover, corrupted KIP -images typically outperform uncorrupted natural images. We verify these conclusions along the following dimensions:\nRobustness to dataset size: We perform two sets of experiments.\n(i) First, we consider small KIP datasets (10, 100, 200 images) optimized using multiple kernels (FC1-3, Conv1-2), see Tables A5, A6. We find that our in-distribution transfer (the downstream neural network has its neural kernel included among the kernels sampled by KIP ) performs re-\n5Our images are preprocessed so as to be mean-centered and unit-variance per pixel. This choice of corruption, which occurs post-processing, is therefore meant to (approximately) match the natural pixel distribution.\nmarkably well, with both uncorrupted and corrupted KIP images beating the uncorrupted natural images of corresponding size. Out of distribution networks (LeNet (LeCun et al., 1998) and Wide Resnet (Zagoruyko & Komodakis, 2016)) have less transferability: the uncorrupted images still outperform natural images, and corrupted KIP images still outperform corrupted natural images, but corrupted KIP images no longer outperform uncorrupted natural images.\n(ii) We consider larger KIP datasets (1K, 5K, 10K images) optimized using a single FC1 kernel for training of a corresponding FC1 neural network, where the KIP training uses augmentations (with and without label learning), see Tables A7-A10 and Figure A3. We find, as before, KIP images outperform natural images by an impressive margin: for instance, on CIFAR-10, 10K KIP -learned images with 90% corruption achieves 49.9% test accuracy, exceeding 10K natural images with no corruption (acc: 45.5%) and 90% corruption (acc: 33.8%). Interestingly enough, sometimes higher corruption leads to better test performance (this occurs for CIFAR-10 with cross entropy loss for both natural and KIP -learned images), a phenomenon to be explored in future work. 
We also find that KIPwith label-learning often tends to harm performance, perhaps because the labels are overfitting to KRR.\nRobustness to hyperparameters: For CIFAR-10, we took 100 images, both clean and 90% corrupted, and trained networks on a wide variety of hyperparameters for various neural architectures. We considered both neural networks whose corresponding neural kernels were sampled during KIP - training those that were not. We found that in both cases, the KIP -learned images almost always outperform 100 random natural images, with the optimal set of hyperparameters yielding a margin close to that predicted from the KRR setting, see Figure 3. This suggests that KIP -learned images can be useful in accelerating hyperparameter search." }, { "heading": "5 RELATED WORK", "text": "Coresets: A classical approach for compressing datasets is via subset selection, or some approximation thereof. One notable work is Borsos et al. (2020), utilizing KRR for dataset subselection. For an overview of notions of coresets based on pointwise approximatation of datasets, see Phillips (2016).\nNeural network approaches to dataset distillation: Maclaurin et al. (2015); Lorraine et al. (2020) approach dataset distillation through learning the input images from large-scale gradient-based metalearning of hyper-parameters. Properties of distilled input data was first analyzed in Wang et al. (2018). The works Sucholutsky & Schonlau (2019); Bohdal et al. (2020) build upon Wang et al.\n(2018) by distilling labels. More recently, Zhao et al. (2020) proposes condensing training set by gradient matching condition and shows improvement over Wang et al. (2018).\nInducing points: Our approach has as antecedant the inducing point method for Gaussian Processes (Snelson & Ghahramani, 2006; Titsias, 2009). However, whereas the latter requires a probabilistic framework that optimizes for marginal likelihood, in our method we only need to consider minimizing mean-square loss on validation data.\nLow-rank kernel approximations: Unlike common low-rank approximation methods (Williams & Seeger, 2001; Drineas & Mahoney, 2005), we obtain not only a low-rank support-support kernel matrix with KIP , but also a low-rank target-support kernel matrix. Note that the resulting matrices obtained from KIP need not approximate the original support-support or target-support matrices since KIP only optimizes for the loss function.\nNeural network kernels: Our work is motivated by the exact correspondence between infinitelywide neural networks and kernel methods (Neal, 1994; Lee et al., 2018; Matthews et al., 2018; Jacot et al., 2018; Novak et al., 2019; Garriga-Alonso et al., 2019; Arora et al., 2019a). These correspondences allow us to view both Bayesian inference and gradient descent training of wide neural networks with squared loss as yielding a Gaussian process or kernel ridge regression with neural kernels.\nInstance-Based Encryption: A related approach to corrupting datasets involves encrypting individual images via sign corruption (Huang et al. (2020))." }, { "heading": "6 CONCLUSION", "text": "We introduced novel algorithms KIP and LS for the meta-learning of datasets. We obtained a variety of compressed and corrupted datasets, achieving state of the art results for KRR and neural network dataset distillation methods. 
This was achieved even using the simplest of kernels and neural networks (shallow fully-connected networks and purely-convolutional networks without pooling), which notwithstanding their limited expressiveness, outperform most baselines that use deeper architectures. Follow-up work will involve scaling up KIP to deeper architectures with pooling (achievable with multi-device training) for which we expect to obtain even more highly performant datasets, both in terms of overall accuracy and architectural flexibility. Finally, we obtained highly corrupt datasets whose performance match or exceed natural images, which when developed at scale, could lead to practical applications for privacy-preserving machine learning." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Dumitru Erhan, Yang Li, Hossein Mobahi, Jeffrey Pennington, Si Si, Jascha Sohl-Dickstein, and Lechao Xiao for helpful discussions and references." }, { "heading": "A REMARKS ON DEFINITION OF -APPROXIMATION", "text": "Here, we provide insights into the formulation of Definition 3. One noticeable feature of our definition is that it allows for different algorithms A and à when comparing datasets D and D̃. On the one hand, such flexibility is required, since for instance, a mere preprocessing of the dataset (e.g. rescaling it), should be regarded as producing an equivalent (0-approximate) dataset. Yet such a rescaling may affect the hyperparameters needed to train an equivalent model (e.g. the learning rate). Thus, one must allow the relevant hyperparameters of an algorithm to vary when the datasets are also varying. On the other hand, it would be impossible to compare two datasets meaningfully if the learned algorithms used to train them differ too significantly. For instance, if D is a much larger dataset than D̃, but A is a much less expressive algorithm than Ã, then the two datasets may be -approximations of each other, but it would be strange to compare D and D̃ in this way. Thus, we treat the notion of what class of algorithms to consider informally, and leave its specification as a practical matter for each use case. In practice, the pair of algorithms we use to compare datasets should be drawn from a family in which some reasonable range of hyperparameters are varied, the ones typically tuned when learning on an unknown dataset. The main case for us with differing A and à is when we compare neural network training alongside kernel ridge-regression.\nAnother key feature of our definition is that datapoints of an -approximating dataset must have the same shape as those of the original dataset. This makes our notion of an -approximate dataset more restrictive than returning a specialized set of extracted features from some initial dataset.\nAnalogues of our -approximation definition have been formulated in the unsupervised setting, e.g. in the setting of clustering data (Phillips, 2016; Jubran et al., 2019).\nFinally, note that the loss function ` used for comparing datasets does not have to coincide with any loss functions optimized in the learning algorithms A and Ã. Indeed, for kernel ridge-regression, training mimimizes mean square loss while ` can be 0-1 loss.\nB TUNING KIP\nSampling: When optimizing for KRR performance with support dataset size N , it is best to learn a support set D̃ of size N and sample this entire set during KIP training. It is our observation that subsets of size M < N of D̃ will not perform as well as optimizing directly for a size M dataset through KIP . 
Conversely, sampling subsets of size M from a support dataset of size N during KIPwill not lead to a dataset that does as well as optimizing for all N points. This is sensible: optimizing for small support size requires a resourceful learning of coarse features at the cost of learning fine-grained features from many support datapoints. Conversely, optimizing a large support set means the learned support set has leveraged higher-order information, which will degrade when restricted to smaller subsets.\nFor sampling from the target set, which we always do in a class-balanced way, we found larger batch sizes typically perform better on the test set if the train and test kernels agree. If the train and test kernels differ, then smaller batch sizes lead to less overfitting to the train kernel.\nInitialization: We tried two sets of initializations. The first (“image init”) initializes (Xs, ys) to be a subset of (Xt, yt). The second (“noise init”) initializes Xs with uniform noise and ys with mean-centered, one-hot labels (in a class-balanced way). We found image initialization to perform better.\nRegularization: The regularization parameter λ in (7) can be replaced with 1nλ · tr(KXsXs), where n is the number of datapoints in Xs. This makes the loss function invariant with respect to rescaling of the kernel function K and also normalizes the regularization with respect to support size. In practice, we use this scale-invariant regularization with λ = 10−6.\nNumber of Training Iterations: Remarkably, KIP converges very quickly in all experimental settings we tried. After only on the order of a hundred iterations, independently of the support size, kernel, and corruption factor, the learned support set has already undergone the majority of its learning (test accuracy is within more than 90% of the final test accuracy). For the platforms available to us, using a single V100 GPU, one hundred training steps for the experiments we ran involving\ntarget batch sizes that were a few thousand takes on the order of about 10 minutes. When we add augmentations to our targets, performance continues to improve slowly over time before flattening out after several thousands of iterations." }, { "heading": "C THEORETICAL RESULTS", "text": "Here, we analyze convergence properties of KIP in returning an -approximate dataset. In what follows, we refer to gradient-descent KIP as the case when we sample from the entire support and train datasets for each update step to KIP . We also assume that the distribution P used to evaluate -approximation is supported on inputs x ∈ Rd with ‖x‖ ≤ 1 (merely to provide a convenient normalization when evaluating loss on regression algorithms).\nFor the case of a linear kernel, we prove the below convergence theorem: Theorem 1. Let D = (Xt, yt) ∈ Rnt×d × Rnt×C be an arbitrary dataset. Let wλ ∈ Rd×C be the coefficients obtained from training λ ridge-regression (λ-RR) on (Xt, yt), as given by (4).\n1. For generic6 initial conditions for the support set (Xs, ys) ⊂ Rns×d × Rns×C and sufficiently small λ > 0, gradient descent KIPwith target dataset D converges to a dataset D̃.\n2. The dataset D̃ is a strong -approximation to D with respect to algorithms (λ-RR, 0-RR) and loss function equal to mean-square loss, where\n≤ 1 2 ‖w̃ − w0‖22 (A1)\nand w̃ ∈ Rd×C are the coefficients of the linear classifier obtained from training λ-RR on D̃. If the size of D̃ is at leastC, then w̃ is also a least squares classifier forD. In particular, if D has a unique least squares classifier, then = 0.\nProof. 
We discuss the case where Xs is optimized, with the case where both (Xs, ys) are optimized proceeding similarly. In this case, by genericity, we can assume ys 6= 0, else the learning dynamics is trivial. Furthermore, to simplify notation for the time being, assume the dimensionality of the label space is C = 1 without loss of generality. First, we establish convergence. For a linear kernel, we can write our loss function as\nL(Xs) = 1\n2 ‖yt −XtXTs (XsXTs + λI)−1ys‖2, (A2)\ndefined on the space Mns×d of ns × d matrices. It is the pullback of the loss function\nLRd×ns (Φ) = 1\n2 ‖yt −XtΦys‖2, Φ∈ Rd×ns (A3)\nunder the map Xs 7→ Φλ(Xs) = XTs (XsXTs + λI)−1. The function (A3) is quadratic in Φ and all its local minima are global minima given by an affine subspaceM ⊂ Md×ns . Moreover, each point ofM has a stable manifold of maximal dimension equal to the codimension ofM. Thus, the functional L has global minima given by the inverse image Φ−1λ (M) (which will be nonempty for sufficiently small λ).\nNext, we claim that given a fixed initial (Xs, ys), then for sufficiently small λ, gradient-flow of (A2) starting from (Xs, ys) cannot converge to a non-global local minima. We proceed as follows. If X = UΣV T is a singular value decomposition of X , with Σ a ns × ns diagional matrix of singular values (and any additional zeros for padding), then Φ(X) = V φ(Σ)UT where φ(Σ) denotes the diagonal matrix with the map\nφ : R≥0 → R≥0 (A4)\nφ(µ) = µ\nµ2 + λ (A5)\n6A set can be generic by either being open and dense or else having probability one with respect to some measure absolutely continuous with respect to Lebesgue measure. In our particular case, generic refers to the complement of a submanifold of codimension at least one.\napplied to each singular value of Σ. The singular value decomposition depends analytically on X (Kato (1976)). Given that φ : R≥0 → R≥0 is a local diffeomorphism away from its maximum value at µ = µ∗ := λ1/2, it follows that Φλ : Mns×d → Md×ns is locally surjective, i.e. for every X , there exists a neighborhood U of X such that Φλ(U) contains a neighborhood of Φλ(X). Thus, away from the locus of matrices in Mns×d that have a singular value equaling µ∗, the function (A2) cannot have any non-global local minima, since the same would have to be true for (A3). We are left to consider those matrices with some singular values equaling µ∗. Note that as λ→ 0, we have φ(µ∗) → ∞. On the other hand, for any initial choice of Xs, the matrices Φλ(Xs) have uniformly bounded singular values as a function of λ. Moreover, as Xs = Xs(t) evolves, ‖Φλ(Xs(t))‖ never needs to be larger than some large constant times ‖Φλ(Xs(0))‖+ ‖yt‖µ+‖ys‖ , where µ+ is the smallest positive singular value of Xt. Consequently, Xs(t) never visits a matrix with singular value µ∗ for sufficiently small λ > 0; in particular, we never have to worry about convergence to a non-global local minimum.\nThus, a generic gradient trajectory γ of L will be such that Φλ ◦ γ is a gradient-like7 trajectory for LRd×ns that converges toM. We have to show that γ itself converges. It is convenient to extend φ to a map defined on the one-point compactification [0,∞] ⊃ R≥0, so as to make φ a two-to-one map away from µ∗. Applying this compactification to every singular value, we obtain a compactification Mns×d of Mns×d, and we can naturally extend Φλ to such a compactification. 
We have that γ converges to an element of M̃ := Φ−1λ (M) ⊂ M\nns×d, where we need the compactification to account for the fact that when Φλ ◦ γ converges to a matrix that has a zero singular value, γ may have one of its singular values growing to infinity. Let M0 denote the subset of M with a zero singular value. Then γ converges to an element of Mns×d precisely when γ does not converge to an element of M̃∞ := Φ−1λ (M0)∩ (M\nns×d \\Mns×d). However,M0 ⊂M has codimension one and hence so does M̃∞ ⊂ Φ−1λ (M0). Thus, the stable set to M̃∞ has codimension one in M\nns×d, and hence its complement is nongeneric. Hence, we have generic convergence of a gradient trajectory of L to a (finite) solution. This establishes the convergence result of Part 1.\nFor Part 2, the first statement is a general one: the difference of any two linear models, when evaluated on P , can be pointwise bounded by the spectral norm of the difference of the model coefficient matrices. Thus D̃ is a strong -approximation to D with respect to (λ-RR, 0-RR) where is given by (A1). For the second statement, observe that L is also the pullback of the loss function\nLRd×C (w) = 1\n2 ‖yt −Xtw‖2, w∈ Rd×C . (A6)\nunder the map Xs 7→ w(Xs) = Φλ(Xs)ys. The function LRd×C (w) is quadratic in w and has a unique minimum value, with the space of global minima being an affine subspaceW ∗ of Rd given by the least squares classifiers for the dataset (Xt, yt). Thus, the global minima of L are the preimage of W ∗ under the map w(Xs). For generic initial (Xs, ys), we have ys ∈ Rns×C is full rank. This implies, for ns ≥ C, that the range of all possible w(Xs) for varying Xs is all of Rd×C , so that the minima of (A6) and (A3) coincide. This implies the final parts of Part 2.\nWe also have the following result about -approximation using the label solve algorithm: Theorem 2. Let D = (Xt, yt) ∈ Rnt×d × Rnt×C be an arbitrary dataset. Let wλ ∈ Rd×C be the coefficients obtained from training λ ridge-regression (λ-RR) on (Xt, yt), as given by (4). Let Xs ∈ Rns×d be an arbitrary initial support set and let λ ≥ 0. Define y∗s = y∗s (λ) via (8). Then (Xs, y∗s ) yields a strong (λ)-approximation of (Xt, yt) with respect to algorithms (λ-RR, 0-RR) and mean-square loss, where\n(λ) = 1\n2 ‖w∗(λ)− w0‖22 (A7)\nand w∗(λ) is the solution to w∗(λ) = argminw∈W ‖yt −Xtw‖2, W = im ( Φλ(Xs) : ker ( XtΦλ(Xs) )⊥ → Rd×C ) .\n(A8) 7A vector field v is gradient-like for a function f if v · grad(f) ≥ 0 everywhere.\nMoreover, for λ = 0, if rank(Xs) = rank(Xt) = d, then w∗(λ) = w0. This implies y∗s = Xsw0, i.e. y∗s coincides with the predictions of the 0-RR classifier trained on (Xt, yt) evaluated on Xs.\nProof. By definition, y∗s is the minimizer of\nL(ys) = 1\n2 ‖yt −XtΦλ(Xs)ys‖2,\nwith minimum norm. This implies y∗s ∈ ker ( XtΦλ(Xs) )⊥ and that w∗(λ) = Φλ(Xs)y∗s satisfies (A8). At the same time, w∗(λ) = Φλ(Xs)y∗s are the coefficients of the λ-RR classifier trained on (Xs, y ∗ s ). If rank(Xs) = rank(Xt) = d, then Φ0(Xs) is surjective and Xt is injective, in which case\nω∗(0) = Φ0(Xs)y ∗ s\n= Φ0(Xs)Φ0(XtΦ0(Xs))yt\n= Φ0(Xs)XsΦ0(Xt)yt\n= ω0.\nThe results follow.\nFor general kernels, we make the following simple observation concerning the optimal output of KIP .\nTheorem 3. Fix a target dataset (Xt, yt). Consider the family of all subspaces S of Rnt given by {imKXtXs : Xs ∈ Rns×d}, i.e. all possible column spaces of KXtXs . 
Then the infimum of the loss (7) over all possible (Xs, ys) is equal to infS∈S 12‖Π ⊥ S yt‖2 where Π⊥S is orthogonal projection onto the orthogonal complement of S (acting identically on each label component).\nProof. Since ys is trainable, (KXsXs + λ) −1ys is an arbitrary vector in Rns×C . Thus, minimizing the training objective corresponds to maximizing the range of the linear mapKXtXs over all possible Xs. The result follows." }, { "heading": "D EXPERIMENT DETAILS", "text": "In all KIP trainings, we used the Adam optimizer. All our labels are mean-centered 1-hot labels. We used learning rates 0.01 and 0.04 for the MNIST and CIFAR-10 datasets, respectively. When sampling target batches, we always do so in a class-balanced way. When augmenting data, we used the ImageGenerator class from Keras, which enables us to add horizontal flips, height/width shift, rotatations (up to 10 degrees), and channel shift (for CIFAR-10). All datasets are preprocessed using channel-wise standardization (i.e. mean subtraction and division by standard-deviation). For neural (tangent) kernels, we always use weight and bias variance σ2w = 2 and σ 2 b = 10\n−4, respectively. For both neural kernels and neural networks, we always use ReLU activation. Convolutional layers all use a (3, 3) filter with stride 1 and same padding.\nCompute Limitations: Our neural kernel computations, implemented using Neural Tangents libraray (Novak et al., 2020) are such that computation scales (i) linearly with depth; (ii) quadratically in the number of pixels for convolutional kernels; (iii) quartically in the number of pixels for pooling layers. Such costs mean that, using a single V100 GPU with 16GB of RAM, we were (i) only able to sample shallow kernels; (ii) for convolutional kernels, limited to small support sets and small target batch sizes; (iii) unable to use pooling if learning more than just a few images. Scaling up KIP to deeper, more expensive architectures, achievable using multi-device training, will be the subject of future exploration.\nKernel Parameterization: Neural tangent kernels, or more precisely each neural network layer of such kernels, as implemented in Novak et al. (2020) can be parameterized in either the “NTK” parameterization or “standard” parameterization Sohl-Dickstein et al. (2020). The latter depends on the width of a corresponding finite-width neural network while the former does not. Our experiments mix both these parameterizations for variety. However, because we use a scale-invariant\nregularization for KRR (Section B), the choice of parameterization has a limited effect compared to other more significant hyperparameters (e.g. the support dataset size, learning rate, etc.)8.\nSingle kernel results: (Tables 1 and 2) For FC, we used kernels with NTK parametrization. For RBF, our rbf kernel is given by rbf(x1, x2) = exp(−γ‖x1 − x2‖2/d) (A9) where d is the dimension of the inputs and γ = 1. We found that treating γ as a learnable parameter during KIP had mixed results9 and so keep it fixed for simplicity.\nFor MNIST, we found target batch size equal to 6K sufficient. For CIFAR-10, it helped to sample the entire training dataset of 50K images per step (hence, along with sampling the full support set, we are doing full gradient descent training). When support dataset size is small or if augmentations are employed, there is no overfitting (i.e. the train and test loss/accuracy stay positively correlated). 
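As a concrete reference for the RBF kernel (A9) and the scale-invariant regularization of Section B, the following NumPy sketch shows the resulting kernel ridge-regression predictor; the function names and array shapes are illustrative assumptions (our experiments use the Neural Tangents library rather than this code).

import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    # x1: (n1, d), x2: (n2, d); kernel (A9): exp(-gamma * ||a - b||^2 / d)
    d = x1.shape[1]
    sq_dists = np.sum(x1 ** 2, 1)[:, None] + np.sum(x2 ** 2, 1)[None, :] - 2.0 * x1 @ x2.T
    return np.exp(-gamma * sq_dists / d)

def krr_predict(x_support, y_support, x_target, lam=1e-6):
    # Kernel ridge regression with the scale-invariant regularizer
    # (lam / n) * tr(K_ss) used in place of a fixed lambda.
    n = x_support.shape[0]
    k_ss = rbf_kernel(x_support, x_support)
    k_ts = rbf_kernel(x_target, x_support)
    reg = (lam / n) * np.trace(k_ss)
    alpha = np.linalg.solve(k_ss + reg * np.eye(n), y_support)
    return k_ts @ alpha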
If the support dataset size is large (5K or larger), sometimes there is overfitting when the target batch size is too large (e.g. for the RBF kernel on CIFAR10, which is why we exclude in Table 2 the entries for 5K and 10K). We could have used a validation dataset for a stopping criterion, but that would have required reducing the target dataset from the entire training dataset.\nWe train KIP for 10-20k iterations and took 5 random subsets of images for initializations. For each such training, we took 5 checkpoints with lowest train and loss and computed the test accuracy. This gives 25 evaluations, for which we can compute the mean and standard deviation for our test accuracy numbers in Tables 1 and 2.\nKernel transfer results: For transfering of KIP images, both to other kernels and to neural networks, we found it useful to use smaller target batch sizes (either several hundred or several thousand), else the images overfit to their source kernel. For random sampling of kernels used in Figure A1 and producing datasets for training of neural networks, we used FC kernels with width 1024 and Conv kernels with width 128, all with standard parametrization.\nNeural network results: Neural network trainings on natural data with mean-square loss use meancentered one-hot labels for consistency with KIP trainings. For cross entropy loss, we use one-hot labels. For neural network trainings on KIP -learned images with label learning, we transfer over the labels directly (as with the images), whatever they may be.\nFor neural network transfer experiments occurring in Table 1, Table 2, Figure 3, Table A5, and Table A6, we did the following. First, the images were learned using kernels FC1-3, Conv1-2. Second, we trained for a few hundred iterations, after which optimal test performance was achieved. On MNIST images, we trained the networks with constant learning rate and Adam optimizer with cross entropy loss. Learning rate was tuned over small grid search space. For the FC kernels and networks, we use width of 1024. On CIFAR-10 images, we trained the networks with constant learning rate, momentum optimizer with momentum 0.9. Learning rate, L2 regularization, parameterization (standard vs NTK) and loss type (mean square, softmax-cross-entropy) was tuned over small grid search space. Vanilla networks use constant width at each layer: for FC we use width of 1024, for Conv2 we use 512 channels, and for Conv8 we use 128 channels. No pooling layers are used except for the WideResNet architecture, where we follow the original architecture of Zagoruyko & Komodakis (2016) except that our batch normalization layer is stateless (i.e. no exponential moving average of batch statistics are recorded).\nFor neural network transfer experiments in Figure A3, Tables A7-A10, we did the following. Our KIP -learned images were trained using only an FC1 kernel. The neural network FC1 has an increased width 4096, which helps with the larger number of images. We used learning rate 4× 10−4 and the Adam optimizer. The KIP learned images with only augmentations used target batch size equal to half the training dataset size and were trained for 10k iterations, since the use of augmentations allows for continued gains after longer training. The KIP learned images with augmentations\n8All our final readout layers use the fixed NTK parameterization and all our statements about which parameterization we are using should be interpreted accordingly. 
This has no effect on the training of our neural networks while for kernel results, this affects the recursive formula for the NTK at the final layer if using standard parameterization (by the changing the relative scales of the terms involved). Since the train/test kernels are consistently parameterized and KIP can adapt to the scale of the kernel, the difference between our hybrid parameterization and fully standard parameterization has a limited affect.\n9On MNIST it led to very slight improvement. For CIFAR10, for small support sets, the effect was a small improvement on the test set, whereas for large support sets, we got worse performance.\nTable A1: Accuracy on random subsets of MNIST. Standard deviations over 20 resamplings.\n# Images \\ Kernel Linear RBF FC1 10 44.6±3.7 45.3±3.9 45.8±3.9 20 51.9±3.1 54.6±2.9 54.7±2.8 40 59.4±2.4 66.9±2.0 66.0±1.9 80 62.6±2.7 75.6±1.6 74.3±1.7 160 62.2±2.1 82.7±1.4 81.1±1.6 320 52.3±1.9 88.1±0.8 86.9±0.9 640 41.9±1.4 91.8±0.5 91.1±0.5 1280 71.0±0.9 94.2±0.3 93.6±0.3 2560 79.7±0.5 95.7±0.2 95.3±0.2 5000 83.2±0.4 96.8±0.2 96.4±0.2 10000 84.9±0.4 97.5±0.2 97.2±0.2\nand label learning used target batch size equal to a tenth of the training dataset size and were trained for 2k iterations (the learned data were observed to overfit to the kernel and have less transferability if larger batch size were used or if trainings were carried out longer).\nAll neural network trainings were run with 5 random initializations to compute mean and standard deviation of test accuracies.\nIn Table 2, regularized ZCA preprocessing was used for a Myrtle-10 kernel (denoted with ZCA) on CIFAR-10 dataset. Shankar et al. (2020) and Lee et al. (2020) noticed that for neural (convolutional) kernels on image classification tasks, regularized ZCA preprocessing can improve performance significantly compared to standard preprocessing. We follow the prepossessing scheme used in Shankar et al. (2020), with regularization strength of 10−5 without augmentation." }, { "heading": "E TABLES AND FIGURES", "text": "E.1 KERNEL BASELINES\nWe report various baselines of KRR trained on natural images. Tables A1 and A2 shows how various kernels vary in performance with respect to random subsets of MNIST and CIFAR-10. Linear denotes a linear kernel, RBF denotes the rbf kernel (A9) with γ = 1, and FC1 uses standard parametrization and width 1024. Interestingly enough, we observe non-monotonicity for the linear kernel, owing to double descent phenomenon Hastie et al. (2019). We include additional columns for deeper kernel architectures in Table A2, taken from Shankar et al. (2020) for reference.\nComparing Tables 1, 2 with Tables A1, A2, we see that 10 KIP -learned images, for both RBF and FC1, has comparable performance to several thousand natural images, thereby achieving a compression ratio of over 100. This compression ratio narrows as the support size increases towards the size of the training data.\nNext, Table A3 compares FC1, RBF, and other kernels trained on all of MNIST to FC1 and RBF trained on KIP -learned images. We see that our KIP approach, even with 10K images (which fits into memory), leads to RBF and FC1 matching the performance of convolutional kernels on the original 60K images. Table A4 shows state of the art of FC kernels on CIFAR-10. The prior state of the art used kernel ensembling on batches of augmented data in Lee et al. (2020) to obtain test accuracy of 61.5% (32 ensembles each of size 45K images). 
By distilling augmented images using KIP , we are able to obtain 64.7% test accuracy using only 10K images.\nE.2 KIP AND LS TRANSFER ACROSS KERNELS\nFigure A1 plots how KIP (with only images learned) performs across kernels. There are seven training scenarios: training individually on FC1, FC2, FC3, Conv1, Conv2, Conv3 NTK kernels and random sampling from among all six kernels uniformly (Avg All). Datasets of size 10, 100, 200\nTable A2: Accuracy on random subsets of CIFAR-10. Standard deviations over 20 resamplings.\n# Images \\ Kernel Linear RBF FC1 CNTK† Myrtle10-G‡" }, { "heading": "10 16.2±1.3 15.7±2.1 16.4±1.8 15.33 ± 2.43 19.15 ± 1.94", "text": "" }, { "heading": "20 17.1±1.6 17.1±1.7 18.0±1.9 18.79 ± 2.13 21.65 ± 2.97", "text": "" }, { "heading": "40 17.8±1.6 19.7±1.8 20.6±1.8 21.34 ± 1.91 27.20 ± 1.90", "text": "" }, { "heading": "80 18.6±1.5 23.0±1.5 23.9±1.6 25.48 ± 1.91 34.22 ± 1.08", "text": "" }, { "heading": "160 18.5±1.4 25.8±1.4 26.5±1.4 30.48 ± 1.17 41.89 ± 1.34", "text": "" }, { "heading": "320 18.1±1.1 29.2±1.2 29.9±1.1 36.57 ± 0.88 50.06 ± 1.06", "text": "" }, { "heading": "640 16.8±0.8 32.8±0.9 33.4±0.8 42.63 ± 0.68 57.60 ± 0.48", "text": "" }, { "heading": "1280 15.1±0.5 35.9±0.7 36.7±0.6 48.86 ± 0.68 64.40 ± 0.48", "text": "2560 13.0±0.5 39.1±0.7 40.2±0.7 - - 5000 17.8±0.4 42.1±0.5 43.7±0.6 - - 10000 24.9±0.6 45.3±0.6 47.7±0.6 - - † Conv14 kernel with global average pooling (Arora et al., 2019b) ‡ Myrtle10-Gaussian kernel (Shankar et al., 2020)\nTable A3: Classification performance on MNIST. Our KIP -datasets, fit to FC1 or RBF kernels, outperform non-convolutional kernels trained on all training images.\nKernel Method Accuracy\nFC1 Base1 98.6 ArcCosine Kernel2 Base 98.8 Gaussian Kernel Base 98.8 FC1 KIP (a+l)3, 10K images 99.2 LeNet-5 (LeCun et al., 1998) Base 99.2 RBF KIP (a+l), 10K images 99.3 Myrtle5 Kernel (Shankar et al., 2020) Base 99.5 CKN (Mairal et al., 2014) Base 99.6 1 Base refers to training on entire training dataset of natural images. 2 Non RBF/FC numbers taken from (Shankar et al., 2020) 3 (a + l) denotes KIPwith augmentations and label learning during training.\nare thereby trained then evaluated by averaging over all of FC1-3, Conv1-3, both with the NTK and NNGP kernels for good measure. Moreover, the FC and Conv train kernel widths (1024 and 128) were swapped at test time (FC width 128 and Conv width 1024), as an additional test of robustness. The average performance is recorded along the y-axis. AvgAll leads to overall boost in performance across kernels. Another observation is that Conv kernels alone tend to do a bit better, averaged over the kernels considered, than FC kernels alone.\nAvg All FC1 FC2 FC3 Conv1 Conv2 Conv3 0.0\n0.1\n0.2\n0.3\n0.4\n0.5\nAv er\nag e\nTe st\nAc cu\nra cy\n0.39\n0.28 0.34\n0.31 0.32 0.35 0.35\n0.46\n0.39 0.36 0.35 0.37 0.40 0.39\n0.48 0.41 0.38 0.37 0.40\n0.41 0.40\nKIP Transfer to the Other Kernels Support Set Size\n10 100 200\nFigure A1: Studying transfer between kernels.\nTable A4: CIFAR-10 test accuracy for FC/RBF kernels. Our KIP -datasets, fit to RBF/FC1, outperform baselines with many more images. 
Notation same as in Table A3.\nKernel Method Accuracy FC1 Base 57.6 FC3 Ensembling (Lee et al., 2020) 61.5 FC1 KIP (a+l), 10k images 64.7 RBF Base 52.7 RBF KIP (a+l), 10k images 66.3\n100 102 104 Support Set Size\n0.2\n0.4\nTe st\nAc cu\nra cy Myrtle-10 -> FC: 1k target LS Raw\n100 102 104 Support Set Size\nMyrtle-10 -> FC: 2k target LS Raw\n100 102 104 Support Set Size\nMyrtle-10 -> FC: 5k target LS Raw\n100 102 104 Support Set Size\nMyrtle-10 -> FC: 10k target LS Raw\n100 102 104 Support Set Size\nMyrtle-10 -> FC: 50k target LS Raw\n100 102 104 Support Set Size\n0.2\n0.4\n0.6\n0.8\nTe st\nAc cu\nra cy FC -> Myrtle-10: 1k target LS Raw\n100 102 104 Support Set Size\nFC -> Myrtle-10: 5k target LS Raw\n100 102 104 Support Set Size\nFC -> Myrtle-10: 10k target LS Raw\n100 102 104 Support Set Size\nFC -> Myrtle-10: 20k target LS Raw\n100 102 104 Support Set Size\nFC -> Myrtle-10: 50k target LS Raw\nFigure A2: Label Solve transfer between Myrtle-10 and FC for CIFAR10. Top row: LS labels using Myrtle-10 applied to FC1. Bottom row: LS labels using FC1 applied to Myrtle-10. Results averaged over 3 samples per support set size. In all these plots, NNGP kernels were used and Myrtle-10 used regularized ZCA preprocessing.\nIn Figure A2, we plot how LS learned labels using Myrtle-10 kernel transfer to the FC1 kernel and vice versa. We vary the number of targets and support size. We find remarkable stability across all these dimensions in the sense that while the gains from LSmay be kernel-specific, LS -labels do not perform meaningfully different from natural labels when switching the train and evaluation kernels.\nE.3 KIP TRANSFER TO NEURAL NETWORKS AND CORRUPTION EXPERIMENTS\nTable A5: KIP transfer to NN vs NN baselines on MNIST. For each group of four experiments, the best number is marked boldface, while the second best number is in italics. Corruption refers to 90% noise corruption. KIP images used FC1-3, Conv1-2 kernel during training.\nMethod 10 uncrpt 10 crpt 100 uncrpt 100 crpt 200 uncrpt 200 crpt FC1, KIP 73.57±1.51 44.95±1.23 86.84±1.65 79.73±1.10 89.55±0.94 83.38±1.37 FC1, Natural 42.28±1.59 35.00±2.33 72.65±1.17 45.39±2.25 81.70±1.03 54.20±2.61 LeNet, KIP 59.69±8.98 38.25±6.42 87.85±1.46 69.45±3.99 91.08±1.65 70.52±4.39 LeNet, Natural 48.69±4.10 30.56±4.35 80.32±1.26 59.99±0.95 89.03±1.13 62.00±0.94\nTable A6: KIP transfer to NN vs NN baselines on CIFAR-10. Notation same as in Table A5.\nMethod 100 uncrpt 100 crpt FC3, KIP 43.09±0.20 37.71±0.38 FC3, Natural 24.48±0.15 18.92±0.61 Conv2, KIP 43.68±0.46 37.08±0.48 Conv2, Natural 26.23 ±0.69 17.10±1.33 WideResNet, KIP 33.29±1.14 23.89±1.30 WideResNet, Natural 27.93±0.75 19.00±1.01\nTable A7: MNIST. KIP and natural images on FC1. MSE Loss. Test accuracy of image datasets of size 1K, 5K, 10K, trained using FC1 neural network using mean-square loss. Dataset size, noise corruption percent, and dataset type are varied: natural refers to natural images, KIP refers to KIP - learned images with either augmentations only (a) or both augmentations with label learning (a + l). Only FC1 kernel was used for KIP . For each KIP row, we place a * next to the most corrupt entry whose performance exceeds the corresponding 0% corrupt natural images. 
For each dataset size, we boldface the best performing entry.\nDataset 0% crpt 50% crpt 75% crpt 90% crpt Natural 1000 92.8±0.4 87.3±0.5 82.3±0.9 74.3±1.4 KIP (a) 1000 94.5±0.4 95.9±0.1 94.4±0.2* 92.0±0.3 KIP (a+l) 1000 96.3±0.2 95.9±0.3 95.1±0.3 94.6±1.9* Natural 5000 96.4±0.1 92.8±0.2 88.5±0.5 80.0±0.9 KIP (a) 5000 97.0±0.6 97.1±0.6 96.3±0.2 96.6±0.4* KIP (a+l) 5000 97.6±0.0* 95.8±0.0 94.5±0.4 91.4±2.3 Natural 10000 97.3±0.1 93.9±0.1 90.2±0.1 81.3±1.0 KIP (a) 10000 97.8±0.1* 96.1±0.2 95.8±0.2 96.0±0.2 KIP (a+l) 10000 97.9±0.1* 95.8±0.1 94.7±0.2 88.1±3.5\nTable A8: MNIST. KIP and natural images on FC1. Cross Entropy Loss. Test accuracy of image datasets trained using FC1 neural network using cross entropy loss. Notation same as in Table A7.\nDataset 0% crpt 50% crpt 75% crpt 90% crpt Natural 1000 91.3±0.4 86.3±0.3 81.9±0.5 75.0±1.3 KIP (a) 1000 95.9±0.1 95.0±0.1 93.5±0.3* 90.9±0.3 Natural 5000 95.8±0.1 91.9±0.2 87.3±0.3 80.4±0.5 KIP (a) 5000 98.3±0.0 96.8±0.8* 95.5±0.3 95.1±0.2 Natural 10000 96.9±0.1 93.8±0.1 89.6±0.2 81.3±0.5 KIP (a) 10000 98.8±0.0 97.0±0.0* 95.2±0.2 94.7±0.3\nTable A9: CIFAR-10. KIP and natural images on FC1. MSE Loss. Test accuracy of image datasets trained using FC1 neural network using mean-square loss. Notation same as in Table A7.\nDataset 0% crpt 50% crpt 75% crpt 90% crpt Natural 1000 34.1±0.5 34.3±0.4 31.7±0.5 27.7±0.8 KIP (a) 1000 48.0±0.5 46.7±0.2 45.7±0.5 44.3±0.5* KIP (a+l) 1000 47.5±0.3 46.7±0.8 44.3±0.4 41.6±0.5* Natural 5000 41.4±0.6 41.3±0.4 37.2±0.2 32.5±0.7 KIP (a) 5000 51.4±0.4 50.0±0.4 48.8±0.6 47.5±0.3* KIP (a+l) 5000 50.6±0.5 48.5±0.9 44.7±0.6 43.4±0.5* Natural 10000 44.5±0.3 43.2±0.2 39.5±0.2 34.3±0.2 KIP (a) 10000 53.3±0.8 50.5±1.3 49.4±0.2 48.2±0.6* KIP (a+l) 10000 51.9±0.4 50.0±0.5 46.5±1.0 43.8±1.3*\nTable A10: CIFAR-10. KIP and natural images on FC1. Cross Entropy Loss. Test accuracy of image datasets trained using FC1 neural network using cross entropy loss. Notation same as in Table A7.\nDataset 0% crpt 50% crpt 75% crpt 90% crpt Natural 1000 35.4±0.3 35.4±0.3 31.7±0.9 27.2±0.8 KIP (a) 1000 49.2±0.8 47.6±0.4 47.4±0.4 45.0±0.3* Natural 5000 43.1±0.8 42.0±0.2 38.0±0.4 31.7±0.6 KIP (a) 5000 44.5±1.0 51.5±0.3 51.0±0.4 48.9±0.4* Natural 10000 45.3±0.2 44.8±0.1 40.6±0.3 33.8±0.2 KIP (a) 10000 46.9±0.4 54.0±0.3 52.1±0.3 49.9±0.2*\n0 0.5 0.75 0.9 corruption fraction\n0.75\n0.80\n0.85\n0.90\n0.95\nte st\na cc\nur ac\ny\nMNIST. KIP vs Natural Images (MSE Loss)\n1000,natural 5000,natural 10000,natural 1000,kip+aug 5000,kip+aug 10000,kip+aug 1000,kip+aug+label 5000,kip+aug+label 10000,kip+aug+label\n0 0.5 0.75 0.9 corruption fraction\n0.75\n0.80\n0.85\n0.90\n0.95\n1.00\nte st\na cc\nur ac\ny\nMNIST. KIP vs Natural Images (XENT Loss)\n0 0.5 0.75 0.9 corruption fraction\n0.30\n0.35\n0.40\n0.45\n0.50\n0.55\nte st\na cc\nur ac\ny\nCIFAR-10. KIP vs Natural Images (MSE Loss)\n0 0.5 0.75 0.9 corruption fraction\n0.25\n0.30\n0.35\n0.40\n0.45\n0.50\n0.55\nte st\na cc\nur ac\ny\nCIFAR-10. KIP vs Natural Images (XENT Loss)\nFigure A3: KIP vs natural images, FC1. Data plotted from Tables A7-A10, showing natural images vs. KIP images for FC1 neural networks across dataset size, corruption type, dataset type, and loss type. For instance, the upper right figure shows that on MNIST using cross entropy loss, 1k KIP+ aug learned images with 90% corruption achieves 90.9% test accuracy, comparable to 1k natural images (acc: 91.3%) and far exceeding 1k natural images with 90% corruption (acc: 75.0%). 
Similarly, the lower right figure shows on CIFAR10 using cross entropy loss, 10k KIP+ aug learned images with 90% corruption achieves 49.9%, exceeding 10k natural images (acc: 45.3%) and 10k natural images with 90% corruption (acc: 33.8%).\nF EXAMPLES OF KIP LEARNED SAMPLES\nFigure A4: KIP learned images (left) vs natural MNIST images (right). Samples from 100 learned images. Top row: 0% corruption. Bottom row: 90% noise corruption.\nAirplane Automobile Bird Cat Deer Dog Frog Horse Ship Truck Airplane Automobile Bird Cat Deer Dog Frog Horse Ship Truck\nAirplane Automobile Bird Cat Deer Dog Frog Horse Ship Truck Airplane Automobile Bird Cat Deer Dog Frog Horse Ship Truck\nFigure A5: KIP learned images (left) vs natural CIFAR-10 images (right). Samples from 100 learned images. Top row: 0% corruption. Bottom row: 90% noise corruption." } ]
2021
null
SP:c06539b9986064977dec933dcce4b81d42f47cc2
[ "This paper focuses on the problem of multi-agent cooperation in social dilemmas, in which mutual defection is individually rational but collectively suboptimal. The authors use the bias toward status-quo in human psychology to motivate a new training method, called SQLoss: 1) for repeated matrix games, each agent is trained with additional imagined episodes in which the actions taken by both agents are repeated for a random number of steps; 2) for settings where cooperation and defection are associated with a sequence of actions, the authors provide a procedure called GameDistill based on trajectory encoding, clustering, and action prediction to arive at oracles for \"cooperative action\" and \"defection action\" at each state, which can then be used for the imagination episodes. Experiments show that SQL achieve better social welfare than LOLA and standard independent RL in classic iterated matrix games, as well as in the Coin Game with higher dimensional image observations." ]
Individual rationality, which involves maximizing expected individual return, does not always lead to optimal individual or group outcomes in multi-agent problems. For instance, in social dilemma situations, Reinforcement Learning (RL) agents trained to maximize individual rewards converge to mutual defection that is individually and socially sub-optimal. In contrast, humans evolve individual and socially optimal strategies in such social dilemmas. Inspired by ideas from human psychology that attribute this behavior in humans to the status-quo bias, we present a status-quo loss (SQLoss) and the corresponding policy gradient algorithm that incorporates this bias in an RL agent. We demonstrate that agents trained with SQLoss evolve individually as well as socially optimal behavior in several social dilemma matrix games. To apply SQLoss to games where cooperation and defection are determined by a sequence of non-trivial actions, we present GameDistill, an algorithm that reduces a multi-step game with visual input to a matrix game. We empirically show how agents trained with SQLoss on a GameDistill reduced version of the Coin Game evolve optimal policies.
[]
[ { "authors": [ "Dilip Abreu", "David Pearce", "Ennio Stacchetti" ], "title": "Toward a theory of discounted repeated games with imperfect monitoring", "venue": "URL http://www.jstor.org/stable/2938299", "year": 1990 }, { "authors": [ "Robert Axelrod" ], "title": "The Evolution of Cooperation", "venue": "Axelrod’s", "year": 1984 }, { "authors": [ "Dipyaman Banerjee", "Sandip Sen" ], "title": "Reaching pareto-optimality in prisoner’s dilemma using conditional joint action learning", "venue": "Autonomous Agents and Multi-Agent Systems,", "year": 2007 }, { "authors": [ "Michael Bowling", "Manuela Veloso" ], "title": "Multiagent learning using a variable learning rate", "venue": "Artificial Intelligence,", "year": 2002 }, { "authors": [ "Steven Damer", "Maria Gini" ], "title": "Achieving cooperation in a minimally constrained environment", "venue": null, "year": 2008 }, { "authors": [ "Enrique Munoz de Cote", "Alessandro Lazaric", "Marcello Restelli" ], "title": "Learning to cooperate in multiagent social dilemmas", "venue": "In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS", "year": 2006 }, { "authors": [ "Thomas Dietz", "Elinor Ostrom", "Paul C. Stern" ], "title": "The struggle to govern the commons", "venue": "doi: 10.1126/science.1091015", "year": 1907 }, { "authors": [ "Christina Fang", "Steven Orla Kimbrough", "Stefano Pace", "Annapurna Valluri", "Zhiqiang Zheng" ], "title": "On adaptive emergence of trust behavior in the game of stag hunt", "venue": "Group Decision and Negotiation,", "year": 2002 }, { "authors": [ "Jakob Foerster", "Richard Y Chen", "Maruan Al-Shedivat", "Shimon Whiteson", "Pieter Abbeel", "Igor Mordatch" ], "title": "Learning with opponent-learning awareness", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 122–130. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Jerome Friedman", "Trevor Hastie", "Robert Tibshirani" ], "title": "The elements of statistical learning, volume 1. Springer series in statistics", "venue": "New York,", "year": 2001 }, { "authors": [ "Drew Fudenberg", "Eric Maskin" ], "title": "The folk theorem in repeated games with discounting or with incomplete information", "venue": null, "year": 1986 }, { "authors": [ "Drew Fudenberg", "David Levine", "Eric Maskin" ], "title": "The folk theorem with imperfect public", "venue": "information. Econometrica,", "year": 1994 }, { "authors": [ "Edward J Green", "Robert H Porter" ], "title": "Noncooperative Collusion under Imperfect Price Information", "venue": null, "year": 1984 }, { "authors": [ "Begum Guney", "Michael Richter" ], "title": "Costly switching from a status quo", "venue": "Journal of Economic Behavior & Organization,", "year": 2018 }, { "authors": [ "Garrett Hardin" ], "title": "The tragedy of the commons", "venue": "Science, 162(3859):1243–1248,", "year": 1968 }, { "authors": [ "Edward Hughes", "Joel Z. 
Leibo", "Matthew Phillips", "Karl Tuyls", "Edgar Dueñez Guzman", "Antonio Garcı́a Castañeda", "Iain Dunning", "Tina Zhu", "Kevin McKee", "Raphael Koster", "Heather Roff", "Thore Graepel" ], "title": "Inequity aversion improves cooperation in intertemporal social dilemmas", "venue": "In Proceedings of the 32Nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Pérolat Julien", "JZ Leibo", "V Zambaldi", "C Beattie", "Karl Tuyls", "Thore Graepel" ], "title": "A multi-agent reinforcement learning model of common-pool resource appropriation", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17,", "year": 2017 }, { "authors": [ "Daniel Kahneman", "Jack L Knetsch", "Richard H Thaler" ], "title": "Anomalies: The endowment effect, loss aversion, and status quo bias", "venue": "Journal of Economic perspectives,", "year": 1991 }, { "authors": [ "Yuichiro Kamada", "Scott Kominers" ], "title": "Information can wreck cooperation: A counterpoint to kandori", "venue": "Economics Letters, 107:112–114,", "year": 1992 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Max Kleiman-Weiner", "Mark K Ho", "Joseph L Austerweil", "Michael L Littman", "Joshua B Tenenbaum" ], "title": "Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction", "venue": "CogSci,", "year": 2016 }, { "authors": [ "King Lee", "K Louis" ], "title": "The Application of Decision Theory and Dynamic Programming to Adaptive Control Systems", "venue": "PhD thesis,", "year": 1967 }, { "authors": [ "Joel Z. Leibo", "Vinicius Zambaldi", "Marc Lanctot", "Janusz Marecki", "Thore Graepel" ], "title": "Multi-agent reinforcement learning in sequential social dilemmas", "venue": "In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’17. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2017 }, { "authors": [ "Adam Lerer", "Alexander Peysakhovich" ], "title": "Maintaining cooperation in complex social dilemmas using deep reinforcement learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "R Duncan Luce", "Howard Raiffa" ], "title": "Games and decisions: Introduction and critical survey", "venue": "Courier Corporation,", "year": 1989 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Dr Macy", "Andreas Flache" ], "title": "Learning dynamics in social dilemmas", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 2002 }, { "authors": [ "Martin Nowak", "Karl Sigmund" ], "title": "A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoner’s dilemma game", "venue": "Nature, 364:56–8,", "year": 1993 }, { "authors": [ "Martin A. Nowak", "Karl Sigmund" ], "title": "Tit for tat in heterogeneous populations", "venue": null, "year": 1992 }, { "authors": [ "Martin A. Nowak", "Karl Sigmund" ], "title": "Evolution of indirect reciprocity by image scoring", "venue": "Nature, 393(6685):573–577,", "year": 1998 }, { "authors": [ "Hisashi Ohtsuki", "Christoph Hauert", "Erez Lieberman", "Martin A. Nowak" ], "title": "A simple rule for the evolution of cooperation on graphs and social", "venue": "networks. 
Nature,", "year": 2006 }, { "authors": [ "E. Ostrom" ], "title": "Governing the commons-The evolution of institutions for collective actions", "venue": "Political economy of institutions and decisions,", "year": 1990 }, { "authors": [ "Elinor Ostrom", "Joanna Burger", "Christopher B. Field", "Richard B. Norgaard", "David Policansky" ], "title": "Revisiting the commons: Local lessons, global challenges", "venue": "Science, 284(5412):278–282,", "year": 1999 }, { "authors": [ "Alexander Peysakhovich", "Adam Lerer" ], "title": "Consequentialist conditional cooperation in social dilemmas with imperfect information", "venue": "In International Conference on Learning Representations, ICLR 2018,Vancouver, BC,", "year": 2018 }, { "authors": [ "William H Press", "Freeman J Dyson" ], "title": "Iterated prisoner’s dilemma contains strategies that dominate any evolutionary opponent", "venue": "Proceedings of the National Academy of Sciences,", "year": 2012 }, { "authors": [ "William Samuelson", "Richard Zeckhauser" ], "title": "Status quo bias in decision making", "venue": "Journal of risk and uncertainty,", "year": 1988 }, { "authors": [ "Tuomas W. Sandholm", "Robert H. Crites" ], "title": "Multiagent reinforcement learning in the iterated prisoner’s dilemma", "venue": "Bio Systems,", "year": 1996 }, { "authors": [ "Felipe Santos", "J Pacheco" ], "title": "A new route to the evolution of cooperation", "venue": "Journal of evolutionary biology, 19:726–33,", "year": 2006 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": null, "year": 2011 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Richard H Thaler", "Cass R" ], "title": "Sunstein. Nudge: Improving decisions about health, wealth, and happiness", "venue": null, "year": 2009 }, { "authors": [ "Robert Trivers" ], "title": "The evolution of reciprocal altruism", "venue": "Quarterly Review of Biology, 46:35–57.,", "year": 1971 }, { "authors": [ "Jane X. Wang", "Edward Hughes", "Chrisantha Fernando", "Wojciech M. Czarnecki", "Edgar A. Duéñez Guzmán", "Joel Z. Leibo" ], "title": "Evolving intrinsic motivations for altruistic behavior", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Michael Wunder", "Michael Littman", "Monica Babes" ], "title": "Classes of multiagent q-learning dynamics with -greedy exploration", "venue": "In Proceedings of the 27th International Conference on International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "C. Yu", "M. Zhang", "F. Ren", "G. Tan" ], "title": "Emotional multiagent reinforcement learning in spatial social dilemmas", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "In sequential social dilemmas, individually rational behavior leads to outcomes that are sub-optimal for each individual in the group (Hardin, 1968; Ostrom, 1990; Ostrom et al., 1999; Dietz et al., 2003). Current state-of-the-art Multi-Agent Deep Reinforcement Learning (MARL) methods that train agents independently can lead to agents that play selfishly and do not converge to optimal policies, even in simple social dilemmas (Foerster et al., 2018; Lerer & Peysakhovich, 2017).\nTo illustrate why it is challenging to evolve optimal policies in such dilemmas, we consider the Coin Game (Foerster et al., 2018). Each agent can play either selfishly (pick all coins) or cooperatively (pick only coins of its color). Regardless of the other agent’s behavior, the individually rational choice for an agent is to play selfishly, either to minimize losses (avoid being exploited) or to maximize gains (exploit the other agent). However, when both agents behave rationally, they try to pick all coins and achieve an average long term reward of −0.5. In contrast, if both play cooperatively, then the average long term reward for each agent is 0.5. Therefore, when agents cooperate, they are both better off. Training Deep RL agents independently in the Coin Game using state-of-the-art methods leads to mutually harmful selfish behavior (Section 2.2).\nThe problem of how independently learning agents evolve optimal behavior in social dilemmas has been studied by researchers through human studies and simulation models (Fudenberg & Maskin, 1986; Green & Porter, 1984; Fudenberg et al., 1994; Kamada & Kominers, 2010; Abreu et al., 1990). A large body of work has looked at the mechanism of evolution of cooperation through reciprocal behaviour and indirect reciprocity (Trivers, 1971; Axelrod, 1984; Nowak & Sigmund, 1992; 1993; 1998), through variants of reinforcement using aspiration (Macy & Flache, 2002), attitude (Damer & Gini, 2008) or multi-agent reinforcement learning (Sandholm & Crites, 1996; Wunder et al., 2010), and under specific conditions (Banerjee & Sen, 2007) using different learning rates (de Cote et al., 2006) similar to WoLF (Bowling & Veloso, 2002) as well as using embedded emotion (Yu et al., 2015), social networks (Ohtsuki et al., 2006; Santos & Pacheco, 2006).\nHowever, these approaches do not directly apply to Deep RL agents (Leibo et al., 2017). Recent work in this direction (Kleiman-Weiner et al., 2016; Julien et al., 2017; Peysakhovich & Lerer, 2018) focuses on letting agents learn strategies in multi-agent settings through interactions with\nother agents. Leibo et al. (2017) defines the problem of social dilemmas in the Deep RL framework and analyzes the outcomes of a fruit-gathering game (Julien et al., 2017). They vary the abundance of resources and the cost of conflict in the fruit environment to generate degrees of cooperation between agents. Hughes et al. (2018) defines an intrinsic reward (inequality aversion) that attempts to reduce the difference in obtained rewards between agents. The agents are designed to have an aversion to both advantageous (guilt) and disadvantageous (unfairness) reward allocation. This handcrafting of loss with mutual fairness evolves cooperation, but it leaves the agent vulnerable to exploitation. LOLA (Foerster et al., 2018) uses opponent awareness to achieve high cooperation levels in the Coin Game and the Iterated Prisoner’s Dilemma game. 
However, the LOLA agent assumes access to the other agent’s network architecture, observations, and learning algorithms. This access level is analogous to getting complete access to the other agent’s private information and therefore devising a strategy with full knowledge of how they are going to play. Wang et al. (2019) proposes an evolutionary Deep RL setup to evolve cooperation. They define an intrinsic reward that is based on features generated from the agent’s past and future rewards, and this reward is shared with other agents. They use evolution to maximize the sum of rewards among the agents and thus evolve cooperative behavior. However, sharing rewards in this indirect way enforces cooperation rather than evolving it through independently learning agents.\nInterestingly, humans evolve individual and socially optimal strategies in such social dilemmas without sharing rewards or having access to private information. Inspired by ideas from human psychology (Samuelson & Zeckhauser, 1988; Kahneman et al., 1991; Kahneman, 2011; Thaler & Sunstein, 2009) that attribute this behavior in humans to the status-quo bias (Guney & Richter, 2018), we present the SQLoss and the corresponding status-quo policy gradient formulation for RL. Agents trained with SQLoss evolve optimal policies in multi-agent social dilemmas without sharing rewards, gradients, or using a communication channel. Intuitively, SQLoss encourages an agent to stick to the action taken previously, with the encouragement proportional to the reward received previously. Therefore, mutually cooperating agents stick to cooperation since the status-quo yields higher individual reward, while unilateral defection by any agent leads to the other agent also switching to defection due to the status-quo loss. Subsequently, the short-term reward of exploitation is overcome by the long-term cost of mutual defection, and agents gradually switch to cooperation.\nTo apply SQLoss to games where a sequence of non-trivial actions determines cooperation and defection, we present GameDistill, an algorithm that reduces a dynamic game with visual input to a matrix game. GameDistill uses self-supervision and clustering to extract distinct policies from a sequential social dilemma game automatically.\nOur key contributions can be summarised as:\n1. We introduce a Status-Quo loss (SQLoss, Section 2.3) and an associated policy gradientbased algorithm to evolve optimal behavior for agents playing matrix games that can act in either a cooperative or a selfish manner, by choosing between a cooperative and selfish policy. We empirically demonstrate that agents trained with the SQLoss evolve optimal behavior in several social dilemmas iterated matrix games (Section 4).\n2. We propose GameDistill (Section 2.4), an algorithm that reduces a social dilemma game with visual observations to an iterated matrix game by extracting policies that implement cooperative and selfish behavior. We empirically demonstrate that GameDistill extracts cooperative and selfish policies for the Coin Game (Section 4.2).\n3. We demonstrate that when agents run GameDistill followed by MARL game-play using SQLoss, they converge to individually as well as socially desirable cooperative behavior in a social dilemma game with visual observations (Section 4.2)." }, { "heading": "2 APPROACH", "text": "" }, { "heading": "2.1 SOCIAL DILEMMAS MODELED AS ITERATED MATRIX GAMES", "text": "To remain consistent with previous work, we adopt the notations from Foerster et al. (2018). 
We model social dilemmas as general-sum Markov (simultaneous move) games. A multi-agent Markov game is specified byG = 〈S,A, U , P , r, n, γ〉. S denotes the state space of the game. n denotes the\nnumber of agents playing the game. At each step of the game, each agent a ∈ A, selects an action ua ∈ U . ~u denotes the joint action vector that represents the simultaneous actions of all agents. The joint action ~u changes the state of the game from s to s′ according to the state transition function P (s′|~u, s) : S × U × S → [0, 1]. At the end of each step, each agent a gets a reward according to the reward function ra(s, ~u) : S × U → R. The reward obtained by an agent at each step is a function of the actions played by all agents. For an agent a, the discounted future return from time t is defined as Rat = ∑∞ l=0 γ\nlrat+l, where γ ∈ [0, 1) is the discount factor. Each agent independently attempts to maximize its expected discounted return.\nMatrix games are the special case of two-player perfectly observable Markov games (Foerster et al., 2018). Table 1 shows examples of matrix games that represent social dilemmas. Consider the Prisoner’s Dilemma game in Table 1a. Each agent can either cooperate (C) or defect (D). Playing D is the rational choice for an agent, regardless of whether the other agent plays C or D. Therefore, if both agents play rationally, they each receive a reward of −2. However, if each agent plays C, then it will obtain a reward of−1. This fact that individually rational behavior leads to a sub-optimal group (and individual) outcome highlights the dilemma.\nIn Infinitely Iterated Matrix Games, agents repeatedly play a particular matrix game against each other. In each iteration of the game, each agent has access to the actions played by both agents in the previous iteration. Therefore, the state input to an RL agent consists of both agents’ actions in the previous iteration of the game. We adopt this state formulation as is typically done in such games (Press & Dyson, 2012; Foerster et al., 2018). The infinitely iterated variations of the matrix games in Table 1 represent sequential social dilemmas. We refer to infinitely iterated matrix games as iterated matrix games in subsequent sections for ease of presentation." }, { "heading": "2.2 LEARNING POLICIES IN ITERATED MATRIX GAMES: THE SELFISH LEARNER", "text": "The standard method to model agents in iterated matrix games is to model each agent as an RL agent that independently attempts to maximize its expected total discounted reward. Several approaches to model agents in this way use policy gradient-based methods (Sutton et al., 2000; Williams, 1992). Policy gradient methods update an agent’s policy, parameterized by θa, by performing gradient ascent on the expected total discounted reward E[Ra0 ]. Formally, let θa denote the parameterized version of an agent’s policy πa and V aθ1,θ2 denote the total expected discounted reward for agent a. Here, V a is a function of the policy parameters (θ1, θ2) of both agents. In the ith iteration of the game, each agent updates θai to θ a i+1, such that it maximizes it’s total expected discounted reward. θai+1 is computed as follows:\nθ1i+1 = argmaxθ1V 1(θ1, θ2i ) and θ 2 i+1 = argmaxθ2V 2(θ1i , θ 2) (1)\nFor agents trained using reinforcement learning, the gradient ascent rule to update θ1i+1 is,\nf1nl = ∇θi1V 1(θ1i , θ 2 i ) · δ and θ1i+1 = θ1i + f1nl(θ1i , θ2i ) (2)\nwhere δ is the step size of the updates. 
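As an illustration of this update rule, the following sketch implements a REINFORCE-style gradient ascent step for a single Selfish Learner in an iterated matrix game; the tabular softmax parameterization, discount factor, and learning rate are illustrative assumptions rather than the exact training configuration used in our experiments.

import numpy as np

GAMMA, LR = 0.96, 0.1
NUM_STATES, NUM_ACTIONS = 5, 2  # state: start state or previous joint action (CC, CD, DC, DD)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def selfish_policy_gradient_update(theta, states, actions, rewards):
    # theta: (NUM_STATES, NUM_ACTIONS) logits of a tabular softmax policy.
    # states, actions, rewards: one episode of this agent's own experience.
    T = len(rewards)
    returns = np.zeros(T)  # discounted future return from each step t
    running = 0.0
    for t in reversed(range(T)):
        running = rewards[t] + GAMMA * running
        returns[t] = running
    grad = np.zeros_like(theta)
    for t in range(T):
        probs = softmax(theta[states[t]])
        one_hot = np.eye(NUM_ACTIONS)[actions[t]]
        # grad of log pi(a_t | s_t) for a softmax policy is one_hot - probs
        grad[states[t]] += (one_hot - probs) * (GAMMA ** t) * returns[t]
    return theta + LR * grad  # gradient ascent on the expected discounted return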
In the Iterated Prisoner’s Dilemma (IPD) game, agents trained with the policy gradient update method converge to a sub-optimal mutual defection equilibrium (Figure 3a, Lerer & Peysakhovich (2017)). This sub-optimal equilibrium attained by Selfish Learners motivates us to explore alternative methods that could lead to a desirable cooperative equilibrium. We denote the agent trained using policy gradient updates as a Selfish Learner (SL)." }, { "heading": "2.3 LEARNING POLICIES IN ITERATED MATRIX GAMES: THE STATUS-QUO AWARE", "text": "LEARNER (SQLoss)\nFigure 1 shows the high-level architecture of our approach.\n2.3.1 SQLoss: INTUITION\nWhy do independent, selfish learners converge to mutually harmful behavior in the IPD? To understand this, consider the payoff matrix for a single iteration of the IPD in Table 1a. In each iteration, an agent can play either C or D. Mutual defection (DD) is worse for each agent than mutual cooperation (CC). However, one-sided exploitation (DC or CD) is better than mutual cooperation for the exploiter and far worse for the exploited. Therefore, as long as an agent perceives the possibility\nof exploitation, it is drawn to defect, both to maximize the reward (through exploitation) and minimize its loss (through being exploited). To increase the likelihood of cooperation, it is important to reduce instances of exploitation between agents. We posit that, if agents either mutually cooperate (CC) or mutually defect (DD), then they will learn to prefer C overD and achieve a socially desirable equilibrium. (for a detailed illustration of the evolution of cooperation, see Appendix C, which is part of the Supplementary Material)\nMotivated by ideas from human psychology that attribute cooperation in humans to the status-quo bias (Guney & Richter, 2018), we introduce a status-quo loss (SQLoss) for each agent, derived from the idea of imaginary game-play (Figure 2). Intuitively, the loss encourages an agent to imagine an episode where the status-quo (current situation) is repeated for several steps. This imagined episode causes the exploited agent (in DC) to perceive a continued risk of exploitation and, therefore, quickly move to (DD). Hence, for the exploiting agent, the short-term gain from exploitation (DC) is overcome by the long-term loss from mutual defection (DD). Therefore, agents move towards mutual cooperation (CC) or mutual defection (DD). With exploitation (and subsequently, the fear of being exploited) being discouraged, agents move towards cooperation.\n2.3.2 SQLoss: FORMULATION\nWe describe below the formulation of SQLoss with respect to agent 1. The formulation for agent 2 is identical to that of agent 1. Let τa = (s0, u10, u 2 0, r 1 0, · · · sT , u1T , u2T , r1T ) denote the collection\nof an agent’s experiences after T time steps. Let R1t (τ1) = ∑T l=t γ\nl−tr1l denote the discounted future return for agent 1 starting at st in actual game-play. Let τ̂1 denote the collection of an agent’s imagined experiences. For a state st, where t ∈ [0, T ], an agent imagines an episode by starting at st and repeating u1t−1, u 2 t−1 for κt steps. This is equivalent to imagining a κt step repetition of already played actions. We sample κt from a Discrete Uniform distribution U{1, z} where z is a hyper-parameter ≥ 1. To simplify notation, let φt(st, κt) denote the ordered set of state, actions, and rewards starting at time t and repeated κt times for imagined game-play. 
Let R̂1t (τ̂1) denote the discounted future return starting at st in imagined status-quo game-play.\nφt(st, κt) = [ (st, u 1 t−1, u 2 t−1, r 1 t−1)0, (st, u 1 t−1, u 2 t−1, r 1 t−1)1, · · · , (st, u1t−1, u2t−1, r1t−1)κt−1 ] (3)\nτ̂1 = ( φt(st, κt), (st+1, u 1 t+1, u 2 t+1, r 1 t+1)κt+1, · · · , (sT , u1T , u2T , r1T )T+κt−t ) (4)\nR̂1t (τ̂1) = (1− γκ 1− γ ) r1t−1 + γ κR1t (τ1) = (1− γκ 1− γ ) r1t−1 + γ κ T∑ l=t γl−tr1l (5)\nV 1θ1,θ2 and V̂ 1 θ1,θ2 are approximated by E[R 1 0(τ1)] and E[R̂10(τ̂1)] respectively. These V values are the expected rewards conditioned on both agents’ policies (π1, π2). For agent 1, the regular gradients and the Status-Quo gradients, ∇θ1E[R10(τ1)] and ∇θ1E[R̂10(τ̂1)], can be derived from the policy gradient formulation as\n∇θ1E[R10(τ1)] = E[R10(τ1)∇θ1 logπ1(τ1)] = E [ T∑ t=1 ∇θ1 logπ1(u1t |st) · T∑ l=t γlr1l ] = E\n[ T∑ t=1 ∇θ1 logπ1(u1t |st)γt ( R1t (τ1)− b(st) )] (6)\n∇θ1E[R̂10(τ̂1)] = E [R̂10(τ̂1)∇θ1 logπ1(τ̂1)]\n= E [ T∑ t=1 ∇θ1 logπ1(u1t−1|st)× ( t+κ∑ l=t γlr1t−1 + T∑ l=t γl+κr1l )]\n= E [ T∑ t=1 ∇θ1 logπ1(u1t−1|st)× ((1− γκ 1− γ ) γtr1t−1 + γ κ T∑ l=t γlr1l )]\n= E [ T∑ t=1 ∇θ1 logπ1(u1t−1|st)γt ( R̂1t (τ̂1)− b(st)\n)] (7)\nwhere b(st) is a baseline for variance reduction.\nThen the update rule fsql,pg for the policy gradient-based Status-Quo Learner (SQL-PG) is, f1sql,pg = ( α · ∇θ1E[R10(τ1)] + β · ∇θ1E[R̂10(τ1)] ) · δ (8)\nwhere α, β denote the loss scaling factor for REINFORCE, imaginative game-play respectively.\n2.4 LEARNING POLICIES IN DYNAMIC NON-MATRIX GAMES USING SQLoss AND GameDistill\nThe previous section focused on evolving optimal policies in iterated matrix games that represent sequential social dilemmas. In such games, an agent can take one of a discrete set of policies at each step. For instance, in IPD, an agent can either cooperate or defect at each step. However, in social dilemmas such as the Coin Game (Appendix A), cooperation and defection policies are composed of a sequence of state-dependent actions. To apply the Status-Quo policy gradient to these games, we present GameDistill, a self-supervised algorithm that reduces a dynamic game with visual input to a matrix game. GameDistill takes as input game-play episodes between agents with random policies and learns oracles (or policies) that lead to distinct outcomes. GameDistill (Figure 1) works as follows.\n1. We initialize agents with random weights and play them against each other in the game. In these random game-play episodes, whenever an agent receives a reward, we store the sequence of states along with the rewards for both agents.\n2. This collection of state sequences is used to train the GameDistill network, which is a self-supervised trajectory encoder. It takes as input a sequence of states and predicts the rewards of both agents during training.\n3. We then extract the embeddings from the penultimate layer of the trained GameDistill network for each state sequence. Each embedding is a finite-dimensional representation of the corresponding state sequence. We cluster these embeddings using Agglomerative Clustering (Friedman et al., 2001). Each cluster represents a collection of state sequences that lead to a consistent outcome (with respect to rewards). 
For the Coin Game, when we" }, { "heading": "H (+1, -1) (-1, +1)", "text": "" }, { "heading": "T (-1, +1) (+1, -1)", "text": "use the number of clusters as 2, we find that one cluster consists of state sequences that represent cooperative behavior (cooperation cluster) while the other cluster represents state sequences that lead to defection (defection cluster).\n4. Using the state sequences in each cluster, we train an oracle to predict the next action given the current state. For the Coin Game, the oracle trained on state sequences from the cooperation cluster predicts the cooperative action for a given state. Similarly, the oracle trained on the defection cluster predicts the defection action for a given state. Each agent uses GameDistill independently to extract a cooperation and a defection oracle. Figure 8 (Appendix D.4) illustrates the cooperation and defection oracles extracted by the Red agent using GameDistill.\nDuring game-play, an agent can consult either oracle at each step. In the Coin Game, this is equivalent to either cooperating (consulting the cooperation oracle) or defecting (consulting the defection oracle). In this way, an agent reduces a dynamic game to its matrix equivalent using GameDistill. We then apply the Status-Quo policy gradient to evolve optimal policies in this matrix game. For the Coin Game, this leads to agents who cooperate by only picking coins of their color (Figure 4a). It is important to note that for games such as the Coin Game, we could have also learned cooperation and defection oracles by training agents using the sum of rewards for both agents and individual reward, respectively (Lerer & Peysakhovich, 2017). However, GameDistill learns these distinct policies without using hand-crafted reward functions.\nAppendix B provides additional details about the architecture and pseudo-code for GameDistill." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "In order to compare our results to previous work, we use the Normalized Discounted Reward or NDR = (1 − γ) ∑T t=0 γ\ntrt. A higher NDR implies that an agent obtains a higher reward in the environment. We compare our approach (Status-Quo Aware Learner or SQLearner) to Learning with Opponent-Learning Awareness (Lola-PG) (Foerster et al., 2018) and the Selfish Learner (SL) agents. For all experiments, we perform 20 runs and report average NDR, along with variance across runs. The bold line in all the figures is the mean, and the shaded region is the one standard deviation region around the mean. All of our code is available at https://github.com/user12423/MARL-with-SQLoss/." }, { "heading": "3.1 ITERATED MATRIX GAME SOCIAL DILEMMAS", "text": "For our experiments with social dilemma matrix games, we use the Iterated Prisoners Dilemma (IPD) (Luce & Raiffa, 1989), Iterated Matching Pennies (IMP) (Lee & Louis, 1967), and the Iterated Stag Hunt (ISH) (Fang et al., 2002). Each matrix game in Table 1 represents a different dilemma. In the Prisoner’s Dilemma, the rational policy for each agent is to defect, regardless of the other agent’s policy. However, when each agent plays rationally, each is worse off. In Matching Pennies, if an agent plays predictably, it is prone to exploitation by the other agent. Therefore, the optimal policy is to randomize between H and T , obtaining an average NDR of 0. The Stag Hunt game represents a coordination dilemma. In the game, given that the other agent will cooperate, an agent’s optimal action is to cooperate as well. 
However, each agent has an attractive alternative at each step, that of defecting and obtaining a guaranteed reward of −1. Therefore, the promise of a safer alternative\nand the fear that the other agent might select the safer choice could drive an agent to select the safer alternative, thereby sacrificing the higher reward of mutual cooperation.\nIn iterated matrix games, at each iteration, agents take an action according to a policy and receive the rewards in Table 1. To simulate an infinitely iterated game, we let the agents play 200 iterations of the game against each other, and do not provide an agent with any information about the number of remaining iterations. In an iteration, the state for an agent is the actions played by both agents in the previous iteration." }, { "heading": "3.2 ITERATED DYNAMIC GAME SOCIAL DILEMMAS", "text": "For our experiments on a social dilemma with extended actions, we use the Coin Game (Figure 5a) (Foerster et al., 2018) and the non-matrix variant of the Stag Hunt (Figure 5b). We provide details of these games in Appendix A due to space considerations." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 LEARNING OPTIMAL POLICIES IN ITERATED MATRIX DILEMMAS", "text": "Iterated Prisoner’s Dilemma (IPD): We train different learners to play the IPD game. Figure 3a shows the results. For all learners, agents initially defect and move towards an NDR of −2.0. This initial bias towards defection is expected, since, for agents trained with random game-play episodes, the benefits of exploitation outweigh the costs of mutual defection. For Selfish Learner (SL) agents, the bias intensifies, and the agents converge to mutually harmful selfish behavior (NDR of −2.0). Lola-PG agents learn to predict each other’s behavior and realize that defection is more likely to lead to mutual harm. They subsequently move towards cooperation, but occasionally defect (NDR of −1.2). In contrast, SQLearner agents quickly realize the costs of defection, indicated by the small initial dip in the NDR curves. They subsequently move towards close to 100% cooperation, with an NDR of −1.0. Finally, it is important to note that SQLearner agents have close to zero variance, unlike other methods where the variance in NDR across runs is significant.\nIterated Matching Pennies (IMP): We train different learners to play the IMP game. Figure 3b shows the results. SQLearner agents learn to play optimally and obtain an NDR close to 0. Interestingly, Selfish Learner and Lola-PG agents converge to an exploiter-exploited equilibrium where one agent consistently exploits the other agent. This asymmetric exploitation equilibrium is more pronounced for Selfish Learner agents than for Lola-PG agents. As before, we observe that SQLearner agents have close to zero variance across runs, unlike other methods where the variance in NDR across runs is significant.\nIterated Stag Hunt (ISH): Appendix D.5 shows additional results for the ISH game.\n4.2 LEARNING OPTIMAL POLICIES IN ITERATED DYNAMIC DILEMMAS\nGameDistill: To evaluate the Agglomerative clustering step in GameDistill, we make two tSNE (Maaten & Hinton, 2008) plots of the 100-dimensional feature vectors extracted from the penultimate layer of the trained GameDistill network in Figure 4b. In the first plot, we color each point (or state sequence) by the rewards obtained by both agents in the format r1|r2. In the second, we color each point by the cluster label output by the clustering technique. 
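As a concrete illustration of this clustering and visualization step, the short sketch below applies Agglomerative Clustering and t-SNE to trajectory embeddings. Using scikit-learn and matplotlib is our assumption, and the randomly generated arrays are stand-ins for the 100-dimensional penultimate-layer features and the stored (r1, r2) reward pairs produced by GameDistill.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import AgglomerativeClustering
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-ins for the penultimate-layer embeddings and the rewards attached to
# each stored state sequence; in practice these come from the trained network.
embeddings = rng.normal(size=(500, 100))
rewards = rng.choice([-2, 0, 1], size=(500, 2))

# Two clusters: one expected to gather cooperative outcomes, one selfish ones.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(embeddings)
points = TSNE(n_components=2).fit_transform(embeddings)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(points[:, 0], points[:, 1], c=rewards[:, 0])  # colored by reward r1|r2
ax1.set_title('colored by rewards')
ax2.scatter(points[:, 0], points[:, 1], c=labels)         # colored by cluster label
ax2.set_title('colored by cluster')
plt.show()
```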
GameDistill correctly learns two clusters, one for state sequences that represent cooperation (Red cluster) and the other for state sequences that represent defection (Blue cluster). We experiment with different values for feature vector dimensions and obtain similar results (see Appendix B for details). Results on Stag Hunt using GameDistill are presented in Appendix D.3. To evaluate the trained oracles that represent cooperation and a defection policy, we alter the Coin Game environment to contain only a single agent (the Red agent). We then play two variations of the game. In the first variation, the Red agent is forced to play the action suggested by the first oracle. In this variation, we find that the Red agent picks only 8.4% of Blue coins, indicating a high cooperation rate. Therefore, the first oracle represents a cooperation policy. In the second variation, the Red agent is forced to play the action suggested by the second oracle. We find that the Red agent picks 99.4% of Blue coins, indicating a high defection rate, and the second oracle represents a defection policy.\nSQ Loss: During game-play, at each step, an agent follows either the action suggested by its cooperation oracle or the action suggested by its defection oracle. We compare approaches using the degree of cooperation between agents, measured by the probability that an agent will pick the coin of its color (Foerster et al., 2018). Figure 4a shows the results. The probability that an SQLearner agent will pick the coin of its color is close to 1. This high probability indicates that the other SQLearner agent is cooperating with this agent and only picking coins of its color. In contrast, the probability that a Lola-PG agent will pick a coin of its color is close to 0.8, indicating higher defection rates. As expected, the probability of an agent picking its own coin is the smallest for the selfish learner (SL)." }, { "heading": "5 CONCLUSION", "text": "We presented a status-quo policy gradient inspired by human psychology that encourages an agent to imagine the counterfactual of sticking to the status quo. We demonstrated how agents trained with SQLoss evolve optimal policies in several social dilemmas without sharing rewards, gradients, or using a communication channel. To work with dynamic games, we proposedGameDistill, an algorithm that reduces a dynamic game with visual input to a matrix game. We combined GameDistill and SQLoss to demonstrate how agents evolve optimal policies in dynamic social dilemmas with visual observations." }, { "heading": "Appendix for", "text": "STATUS-QUO POLICY GRADIENT IN MULTI-AGENT REINFORCEMENT LEARNING" }, { "heading": "A DESCRIPTION OF ENVIRONMENTS USED FOR DYNAMIC SOCIAL DILEMMAS", "text": "A.1 COIN GAME\nFigure 5a illustrates the agents playing the Coin Game. The agents, along with a Blue or Red coin, appear at random positions in a 3× 3 grid. An agent observes the complete 3× 3 grid as input and can move either left, right, up, or down. When an agent moves into a cell with a coin, it picks the coin, and a new instance of the game begins where the agent remains at their current positions, but a Red/Blue coin randomly appears in one of the empty cells. If the Red agent picks the Red coin, it gets a reward of +1, and the Blue agent gets no reward. If the Red agent picks the Blue coin, it gets a reward of +1, and the Blue agent gets a reward of −2. 
The Blue agent’s reward structure is symmetric to that of the Red agent.\nA.2 STAG-HUNT\nFigure 5b shows the illustration of two agents (Red and Blue) playing the visual Stag Hunt game. The STAG represents the maximum reward the agents can achieve with HARE in the center of the figure. An agent observes the full 7 × 7 grid as input and can freely move across the grid in only the empty cells, denoted by white (yellow cells denote walls that restrict the movement). Each agent can either pick the STAG individually to obtain a reward of +4, or coordinate with the other agent to capture the HARE and obtain a better reward of +25.\nB GameDistill: ARCHITECTURE AND PSEUDO-CODE\nB.1 GameDistill: ARCHITECTURE DETAILS\nGameDistill consists of two components.\nThe first component is the state sequence encoder that takes as input a sequence of states (input size is 4× 4× 3× 3, where 4× 3× 3 is the dimension of the game state, and the first index in the state input represents the data channel where each channel encodes data from both all the different colored agents and coins) and outputs a fixed dimension feature representation. We encode each state in the sequence using a common trunk of 3 convolution layers with relu activations and kernel-size 3 × 3, followed by a fully-connected layer with 100 neurons to obtain a finite-dimensional feature representation. This unified feature vector, called the trajectory embedding, is then given as input to\nthe different prediction branches of the network. We also experiment with different dimensions of this embedding and provide results in Figure 6.\nThe two branches, which predict the self-reward and the opponent-reward (as shown in Figure 1), independently use this trajectory embedding as input to compute appropriate output. These branches take as input the trajectory embedding and use a dense hidden layer (with 100 neurons) with linear activation to predict the output. We use the mean-squared error (MSE) loss for the regression tasks in the prediction branches. Linear activation allows us to cluster the trajectory embeddings using a linear clustering algorithm, such as Agglomerative Clustering (Friedman et al., 2001). In general, we can choose the number of clusters based on our desired level of granularity in differentiating outcomes. In the games considered in this paper, agents broadly have two types of policies. Therefore, we fix the number of clusters to two.\nWe use the Adam (Kingma & Ba, 2014) optimizer with learning-rate of 3e− 3. We also experiment with K-Means clustering in addition to Agglomerative Clustering, and it also gives similar results. We provide additional results of the clusters obtained using GameDistill in Appendix D.\nThe second component is the oracle network that outputs an action given a state. For each oracle network, we encode the input state using 3 convolution layers with kernel-size 2×2 and relu activation. To predict the action, we use 3 fully-connected layers with relu activation and the cross-entropy loss. 
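The two GameDistill components described above can be sketched as follows. This is an illustrative PyTorch reconstruction rather than the authors' code: the convolution widths (32 filters), the hidden sizes of the oracle's fully-connected layers, and the exact input layout (states stacked along the channel axis of a 3 × 3 grid) are assumptions, while the 3 × 3 and 2 × 2 kernels, the 100-dimensional embedding, the two linear reward branches, and the cross-entropy action head follow the description in Appendix B.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """GameDistill state-sequence encoder sketch: a small conv trunk, a 100-dim
    trajectory embedding, and two linear branches predicting each agent's reward."""
    def __init__(self, in_channels=16, embed_dim=100):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),
        )
        # Linear activation keeps embeddings suitable for a linear clustering step.
        self.self_reward = nn.Sequential(nn.Linear(embed_dim, 100), nn.Linear(100, 1))
        self.opponent_reward = nn.Sequential(nn.Linear(embed_dim, 100), nn.Linear(100, 1))

    def forward(self, x):
        z = self.trunk(x)                  # trajectory embedding, clustered later
        return z, self.self_reward(z), self.opponent_reward(z)

class Oracle(nn.Module):
    """Per-cluster oracle sketch: maps a single game state to an action (4 moves)."""
    def __init__(self, in_channels=4, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),    # logits, trained with cross-entropy
        )

    def forward(self, state):
        return self.net(state)
```

The encoder would be trained with MSE on the two reward branches using Adam at learning rate 3e-3, and each oracle with cross-entropy at 1e-3, matching the optimizer settings given in Appendix B.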
We use L2 regularization, and Gradient Descent with the Adam optimizer (learning rate 1e−3) for all our experiments.\nB.2 GameDistill: PSEUDO-CODE\nAlgorithm 1: Pseduo-code for GameDistill 1 Collect list of episodes with (r1, r2) > 0 from random game play; 2 for agents do 3 Create dataset: {listEpisodes,myRewards, opponentRewards} ← {[ ], [ ], [ ]}; 4 for episode in episodes do 5 for (s,a,r,s’) in episode do 6 if r > 0 then 7 add sequence of last three states leading up to s′ to listEpisodes ; 8 add respective rewards to myRewards and opponentRewards 9 end\n10 end 11 end 12 Train Sequence Encoding Network; 13 Train with NetLoss; 14 Cluster embeddings using Agglomerative Clustering; 15 Map episode to clusters from Step 14; 16 Train oracle for each cluster. 17 end\nC SQLoss: EVOLUTION OF COOPERATION\nEquation 6 (Section 2.3.2) describes the gradient for standard policy gradient. It has two terms. The logπ1(u1t |st) term maximises the likelihood of reproducing the training trajectories [(st−1, ut−1, rt−1), (st, ut, rt), (st+1, ut+1, rt+1), . . . ]. The return term pulls down trajectories that have poor return. The overall effect is to reproduce trajectories that have high returns. We refer to this standard loss as Loss for the following discussion.\nLemma 1. For agents trained with random exploration in the IPD, Qπ(D|st) > Qπ(C|st) for all st.\nLet Qπ(at|st) denote the expected return of taking at in st. Let Vπ(st) denote the expected return from state st.\nQπ(C|CC) = 0.5[(−1) + Vπ(CC)] + 0.5[(−3) + Vπ(CD)] Qπ(C|CC) = −2 + 0.5[Vπ(CC) + Vπ(CD)] Qπ(D|CC) = −1 + 0.5[Vπ(DC) + Vπ(DD)] Qπ(C|CD) = −2 + 0.5[Vπ(CC) + Vπ(CD)] Qπ(D|CD) = −1 + 0.5[Vπ(DC) + Vπ(DD)] Qπ(C|DC) = −2 + 0.5[Vπ(CC) + Vπ(CD)] Qπ(D|DC) = −1 + 0.5[Vπ(DC) + Vπ(DD)] Qπ(C|DD) = −2 + 0.5[Vπ(CC) + Vπ(CD)] Qπ(D|DD) = −1 + 0.5[Vπ(DC) + Vπ(DD)]\n(9)\nSince Vπ(CC) = Vπ(CD) = Vπ(DC) = Vπ(DD) for randomly playing agents, Qπ(D|st) > Qπ(C|st) for all st.\nLemma 2. Agents trained to only maximize the expected reward in IPD will converge to mutual defection.\nThis lemma follows from Lemma 1. Agents initially collect trajectories from random exploration. They use these trajectories to learn a policy that optimizes for a long-term return. These learned policies always play D as described in Lemma 1.\nEquation 7 describes the gradient for SQLoss. The logπ1(u1t−1|st) term maximises the likelihood of taking ut−1 in st. The imagined episode return term pulls down trajectories that have poor imagined return.\nLemma 3. Agents trained on random trajectories using only SQLoss oscillate between CC and DD.\nFor IPD, st = (u1t−1, u 2 t−1). The SQLoss maximises the likelihood of taking ut−1 in st when the return of the imagined trajectory R̂t(τ̂1) is high.\nConsider state CC, with u1t−1 = C. π 1(D|CC) is randomly initialised. The SQLoss term reduces the likelihood of π1(C|CC) because R̂t(τ̂1) < 0. Therefore, π1(D|CC) > π1(C|CC). Similarly, for CD, the SQLoss term reduces the likelihood of π1(C|CD). Therefore, π1(D|CD) > π1(C|CD). For DC, R̂t(τ̂1) = 0, therefore π1(D|DC) > π1(C|DC). Interestingly, for DD, the SQLoss term reduces the likelihood of π1(D|DD) and therefore π1(C|DD) > π1(D|DD). Now, if st is CC or DD, then st+1 is DD or CC and these states oscillate. If st is CD or DC, then st+1 is DD, st+2 is CC and again CC and DD oscillate. This oscillation is key to the emergence of cooperation as explained in section 2.3.1.\nLemma 4. 
For agents trained using both standard loss and SQLoss, π(C|CC) > π1(D|CC).\nFor CD, DC, both the standard loss and SQLoss push the policy towards D. For DD, with sufficiently high κ, the SQLoss term overcomes the standard loss and pushes the agent towards C. For CC, initially, both the standard loss and SQLoss push the policy towards D. However, as training progresses, the incidence of CD and DC diminish because of SQLoss as described in Lemma 3. Therefore, Vπ(CD) ≈ Vπ(DC) since agents immediately move from both states to DD. Intuitively, agents lose the opportunity to exploit the other agent. In equation 9, with Vπ(CD) ≈ Vπ(DC), Qπ(C|CC) > Qπ(D|CC) and the standard loss pushes the policy so that π(C|CC) > π(D|CC). This depends on the value of κ. For very low values, the standard loss overcomes SQLoss and agents defect. For very high values, SQLoss overcomes standard loss, and agents oscillate between cooperation and defection. For moderate values of κ (as shown in our experiments), the two loss terms work together so that π(C|CC) > π(D|CC)." }, { "heading": "D EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS", "text": "D.1 INFRASTRUCTURE FOR EXPERIMENTS\nWe performed all our experiments on an AWS instance with the following specifications. We use a 64-bit machine with Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz installed with Ubuntu 16.04LTS operating system. It had a RAM of 189GB and 96 CPU cores with two threads per core. We use the TensorFlow framework for our implementation.\nD.2 SQLOSS\nFor our experiments with the Selfish and Status-Quo Aware Learner (SQLearner), we use policy gradient-based learning to train an agent with the Actor-Critic method (Sutton & Barto, 2011). Each agent is parameterized with a policy actor and critic for variance reduction in policy updates. During training, we use α = 1.0 for the REINFORCE and β = 0.5 for the imaginative game-play. We use gradient descent with step size, δ = 0.005 for the actor and δ = 1 for the critic. We use a batch size of 4000 for Lola-PG (Foerster et al., 2018) and use the results from the original paper. We use a batch size of 200 for SQLearner for roll-outs and an episode length of 200 for all iterated matrix games. We use a discount rate (γ) of 0.96 for the Iterated Prisoners’ Dilemma, Iterated Stag Hunt, and Coin Game. For the Iterated Matching Pennies, we use γ = 0.9 to be consistent with earlier works. The high value of γ allows for long time horizons, thereby incentivizing long-term rewards. Each agent randomly samples κ from U ∈ (1, z) (z = 10, discussed in Appendix D.7) at each step.\nD.3 GameDistill CLUSTERING\nFigures 6 and 7 show the clusters obtained for the state sequence embedding for the Coin Game and the dynamic variant of Stag Hunt respectively. In the figures, each point is a t-SNE projection of the\nfeature vector (in different dimensions) output by the GameDistill network for an input sequence\nof states. For each of the sub-figures, the figure on the left is colored based on actual rewards obtained by each agent (r1|r2). The figure on the right is colored based on clusters, as learned by GameDistill. GameDistill correctly identifies two types of trajectories, one for cooperation and the other for defection for both the games Coin Game and Stag-Hunt.\nFigure 6 also shows the clustering results for different dimensions of the state sequence embedding for the Coin Game. 
We observe that changing the size of the embedding does not have any effect on the results.\nD.4 ILLUSTRATIONS OF TRAINED ORACLE NETWORKS FOR THE COIN GAME\nFigure 8 shows the predictions of the oracle networks learned by the Red agent using GameDistill in the Coin Game. We see that the cooperation oracle suggests an action that avoids picking the coin of the other agent (the Blue coin). Analogously, the defection oracle suggests a selfish action that picks the coin of the other agent.\nD.5 RESULTS FOR THE ITERATED STAG HUNT (ISH) USING SQLOSS\nWe provide the results of training two SQLearner agents on the Iterated Stag Hunt game in Figure 9. In this game also, SQLearner agents coordinate successfully to obtain a near-optimal NDR value (0) for this game.\nD.6 RESULTS FOR THE CHICKEN GAME USING SQLOSS\nWe provide the results of training two SQLearner agents on the Iterated Chicken game in Figure 10. The payoff matrix for the game is shown in the Table 2. From the payoff, it is clear that the agents may defect out of greed. In this game also, SQLearner agents coordinate successfully to" }, { "heading": "C (-1, -1) (-3, 0)", "text": "" }, { "heading": "D (0, -3) (-4, -4)", "text": "obtain a near-optimal NDR value (0) for this game.\nD.7 SQLoss: EFFECT OF z ON CONVERGENCE TO COOPERATION\nWe explore the effect of the hyper-parameter z (Section 2) on convergence to cooperation, we also experiment with varying values of z. In the experiment, to imagine the consequences of maintaining the status quo, each agent samples κt from the Discrete Uniform distribution U{1, z}. A larger value of z thus implies a larger value of κt and longer imaginary episodes. We find that larger z (and hence κ) leads to faster cooperation between agents in the IPD and Coin Game. This effect plateaus for z > 10. However varying and changing κt across time also increases the variance in the gradients and thus affects the learning. We thus use κ = 10 for all our experiments.\nD.8 SQLEARNER: EXPLOITABILITY AND ADAPTABILITY\nGiven that an agent does not have any prior information about the other agent, it must evolve its strategy based on its opponent’s strategy. To evaluate an SQLearner agent’s ability to avoid exploitation by a selfish agent, we train one SQLearner agent against an agent that always defects in the Coin Game. We find that the SQLearner agent also learns to always defect. This persistent defection is important since given that the other agent is selfish, the SQLearner agent can do no better than also be selfish. To evaluate an SQLearner agent’s ability to exploit a cooperative agent, we train one SQLearner agent with an agent that always cooperates in the Coin Game. In this case, we find that the SQLearner agent learns to always defect. This persistent defection is important since given that the other agent is cooperative, the SQLearner agent obtains maximum reward by behaving selfishly. Hence, the SQLearner agent is both resistant to exploitation and able to exploit, depending on the other agent’s strategy." } ]
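To make the status-quo return of Equation 5 (Section 2.3.2) concrete, here is a small numerical sketch. The toy reward list is invented, the function names are ours, and the values z = 10, γ = 0.96 mirror Appendices D.2 and D.7.

```python
import numpy as np

def discounted_return(rewards, t, gamma=0.96):
    """Actual return R_t = sum_{l >= t} gamma^(l - t) * r_l."""
    r = np.asarray(rewards, dtype=float)[t:]
    return float(np.sum(r * gamma ** np.arange(len(r))))

def status_quo_return(rewards, t, kappa, gamma=0.96):
    """Imagined return of Eq. 5: the previous reward r_{t-1} is repeated for
    kappa imagined steps before the real episode continues (assumes t >= 1)."""
    repeat_term = (1.0 - gamma ** kappa) / (1.0 - gamma) * rewards[t - 1]
    return repeat_term + gamma ** kappa * discounted_return(rewards, t, gamma)

# kappa is resampled each step from a discrete uniform distribution U{1, z} with z = 10.
rng = np.random.default_rng(0)
rewards = [-1, -1, 0, -2, -2]            # toy per-step rewards for one agent
kappa = int(rng.integers(1, 10 + 1))
print(status_quo_return(rewards, t=3, kappa=kappa))
```

In the combined update of Equation 8, the gradient built from this imagined return is weighted by β = 0.5 and added to the ordinary REINFORCE gradient weighted by α = 1.0, the values reported in Appendix D.2.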
2020
null
SP:72f379cefb57913386cbd76978943bdc8d0545a7
[ "The work uses diffusion probabilistic models for conditional speech synthesis tasks, specifically to convert mel-spectrogram to the raw audio waveform. Results from the proposed approach match the state-of-the-art WaveRNN model. The paper is very well-written and it is quite easy to follow. The study of the total number of diffusion steps and two different ways (continuous and discrete) ways to feed it in the network is very interesting. It is quite relevant and important for speech synthesis tasks. Using this, authors are able to find a 6-step inference procedure that yields very competitive performance to WaveRNN while still being computationally feasible." ]
This paper introduces WaveGrad, a conditional model for waveform generation which estimates gradients of the data density. The model is built on prior work on score matching and diffusion probabilistic models. It starts from a Gaussian white noise signal and iteratively refines the signal via a gradient-based sampler conditioned on the mel-spectrogram. WaveGrad offers a natural way to trade inference speed for sample quality by adjusting the number of refinement steps, and bridges the gap between non-autoregressive and autoregressive models in terms of audio quality. We find that it can generate high fidelity audio samples using as few as six iterations. Experiments reveal WaveGrad to generate high fidelity audio, outperforming adversarial non-autoregressive baselines and matching a strong likelihood-based autoregressive baseline using fewer sequential operations. Audio samples are available at https://wavegrad.github.io/.
[ { "affiliations": [], "name": "Nanxin Chen" }, { "affiliations": [], "name": "Yu Zhang" }, { "affiliations": [], "name": "Heiga Zen" }, { "affiliations": [], "name": "Ron J. Weiss" }, { "affiliations": [], "name": "Mohammad Norouzi" }, { "affiliations": [], "name": "William Chan" } ]
[ { "authors": [ "Yang Ai", "Zhen-Hua Ling" ], "title": "Knowledge-and-Data-Driven Amplitude Spectrum Prediction for Hierarchical Neural Vocoders", "venue": "arXiv preprint arXiv:2004.07832,", "year": 2020 }, { "authors": [ "Eric Battenberg", "RJ Skerry-Ryan", "Soroosh Mariooryad", "Daisy Stanton", "David Kao", "Matt Shannon", "Tom Bagby" ], "title": "Location-relative Attention Mechanisms for Robust Long-form Speech Synthesis", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Fadi Biadsy", "Ron J. Weiss", "Pedro J. Moreno", "Dimitri Kanevsky", "Ye Jia" ], "title": "Parrotron: An End-toEnd Speech-to-Speech Conversion Model and its Applications to Hearing-Impaired Speech and Speech Separation", "venue": null, "year": 2019 }, { "authors": [ "Mikołaj Bińkowski", "Jeff Donahue", "Sander Dieleman", "Aidan Clark", "Erich Elsen", "Norman Casagrande", "Luis C. Cobo", "Karen Simonyan" ], "title": "High Fidelity Speech Synthesis with Adversarial Networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Ruojin Cai", "Guandao Yang", "Hadar Averbuch-Elor", "Zekun Hao", "Serge Belongie", "Noah Snavely", "Bharath Hariharan" ], "title": "Learning Gradient Fields for Shape Generation", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Harris Chan", "Jamie Kiros", "William Chan" ], "title": "Multilingual KERMIT: It’s Not Easy Being Generative", "venue": "In NeurIPS: Workshop on Perception as Generative Reasoning,", "year": 2019 }, { "authors": [ "William Chan", "Nikita Kitaev", "Kelvin Guu", "Mitchell Stern", "Jakob Uszkoreit" ], "title": "KERMIT: Generative Insertion-Based Modeling for Sequences", "venue": "arXiv preprint arXiv:1906.01604,", "year": 2019 }, { "authors": [ "William Chan", "Mitchell Stern", "Jamie Kiros", "Jakob Uszkoreit" ], "title": "An Empirical Study of Generation Order for Machine Translation", "venue": "arXiv preprint arXiv:1910.13437,", "year": 2019 }, { "authors": [ "William Chan", "Chitwan Saharia", "Geoffrey Hinton", "Mohammad Norouzi", "Navdeep Jaitly" ], "title": "Imputer: Sequence Modelling via Imputation and Dynamic Programming", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Wei Chu", "Abeer Alwan" ], "title": "Reducing F0 Frame Error of F0 Tracking Algorithms under Noisy Conditions with an Unvoiced/Voiced Classification Frontend", "venue": "In ICASSP,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Chris Donahue", "Julian McAuley", "Miller Puckette" ], "title": "Adversarial Audio Synthesis", "venue": "arXiv preprint arXiv:1802.04208,", "year": 2018 }, { "authors": [ "Vincent Dumoulin", "Ethan Perez", "Nathan Schucher", "Florian Strub", "Harm de Vries", "Aaron Courville", "Yoshua Bengio" ], "title": "Feature-wise Transformations. 
Distill, 2018", "venue": "doi: 10.23915/distill.00011", "year": 2018 }, { "authors": [ "Jesse Engel", "Kumar Krishna Agrawal", "Shuo Chen", "Ishaan Gulrajani", "Chris Donahue", "Adam Roberts" ], "title": "GANSynth: Adversarial Neural Audio Synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Jesse Engel", "Lamtharn Hantrakul", "Chenjie Gu", "Adam Roberts" ], "title": "DDSP: Differentiable Digital Signal Processing", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Marjan Ghazvininejad", "Omer Levy", "Yinhan Liu", "Luke Zettlemoyer" ], "title": "Mask-Predict: Parallel Decoding of Conditional Masked Language Models", "venue": "In EMNLP,", "year": 2019 }, { "authors": [ "Alexey A. Gritsenko", "Tim Salimans", "Rianne van den Berg", "Jasper Snoek", "Nal Kalchbrenner" ], "title": "A Spectral Energy Distance for Parallel Speech Synthesis", "venue": "arXiv preprint arXiv:2008.01160,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": null, "year": 2016 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising Diffusion Probabilistic Models", "venue": "arXiv preprint arXiv:2006.11239,", "year": 2020 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of Non-Normalized Statistical Models by Score Matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Keith Ito", "Linda Johnson" ], "title": "https://keithito.com/ LJ-Speech-Dataset", "venue": "The LJ Speech Dataset,", "year": 2017 }, { "authors": [ "Ye Jia", "Ron J. 
Weiss", "Fadi Biadsy", "Wolfgang Macherey", "Melvin Johnson", "Zhifeng Chen", "Yonghui Wu" ], "title": "Direct Speech-to-Speech Translation with a Sequence-to-Sequence", "venue": null, "year": 2019 }, { "authors": [ "Lauri Juvela", "Bajibabu Bollepalli", "Vassilis Tsiaras", "Paavo Alku" ], "title": "GlotNet—A Raw Waveform Model for the Glottal Excitation in Statistical Parametric Speech Synthesis", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2019 }, { "authors": [ "Nal Kalchbrenner", "Erich Elsen", "Karen Simonyan", "Seb Noury", "Norman Casagrande", "Edward Lockhart", "Florian Stimberg", "Aäron van den Oord", "Sander Dieleman", "Koray Kavukcuoglu" ], "title": "Efficient Neural Audio Synthesis", "venue": null, "year": 2018 }, { "authors": [ "Hyeongju Kim", "Hyeonseung Lee", "Woo Hyun Kang", "Sung Jun Cheon", "Byoung Jin Choi", "Nam Soo Kim" ], "title": "WaveNODE: A Continuous Normalizing Flow for Speech Synthesis", "venue": "In ICML: Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models,", "year": 2020 }, { "authors": [ "Sungwon Kim", "Sang-Gil Lee", "Jongyoon Song", "Jaehyeon Kim", "Sungroh Yoon" ], "title": "FloWaveNet: A Generative Flow for Raw Audio", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Zhifeng Kong", "Wei Ping", "Jiaji Huang", "Kexin Zhao", "Bryan Catanzaro" ], "title": "DiffWave: A Versatile Diffusion Model for Audio Synthesis", "venue": "arXiv preprint arXiv:2009.09761,", "year": 2020 }, { "authors": [ "Robert Kubichek" ], "title": "Mel-Cepstral Distance Measure for Objective Speech Quality Assessment", "venue": "In IEEE PACRIM,", "year": 1993 }, { "authors": [ "Kundan Kumar", "Rithesh Kumar", "Thibault de Boissiere", "Lucas Gestin", "Wei Zhen Teoh", "Jose Sotelo", "Alexandre de Brebisson", "Yoshua Bengio", "Aaron Courville" ], "title": "MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Lala Li", "William Chan" ], "title": "Big Bidirectional Insertion Representations for Documents", "venue": "In EMNLP: Workshop of Neural Generation and Translation,", "year": 2019 }, { "authors": [ "Ollie McCarthy", "Zohaib Ahmed" ], "title": "HooliGAN: Robust, High Quality Neural Vocoding", "venue": "arXiv preprint arXiv:2008.02493,", "year": 2020 }, { "authors": [ "Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio" ], "title": "SampleRNN: An Unconditional End-to-End Neural Audio Generation", "venue": null, "year": 2017 }, { "authors": [ "Aäron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "WaveNet: A Generative Model for Raw Audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Igor Babuschkin", "Karen Simonyan", "Oriol Vinyals", "Koray Kavukcuoglu", "George van den Driessche", "Edward Lockhart", "Luis C. 
Cobo", "Florian Stimberg", "Norman Casagrande", "Dominik Grewe", "Seb Noury", "Sander Dieleman", "Erich Elsen", "Nal Kalchbrenner", "Heiga Zen", "Alex Graves", "Helen King", "Tom Walters", "Dan Belov", "Demis Hassabis" ], "title": "Parallel WaveNet: Fast High-Fidelity Speech Synthesis", "venue": null, "year": 2018 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic Image Synthesis with Spatially-Adaptive Normalization", "venue": null, "year": 2019 }, { "authors": [ "Kainan Peng", "Wei Ping", "Zhao Song", "Kexin Zhao" ], "title": "Non-Autoregressive Neural Text-to-Speech", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Wei Ping", "Kainan Peng", "Jitong Chen" ], "title": "ClariNet: Parallel Wave Generation in End-to-End Textto-Speech", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "WaveGlow: A Flow-based Generative Network for Speech Synthesis", "venue": "In ICASSP,", "year": 2019 }, { "authors": [ "Laura Ruis", "Mitchell Stern", "Julia Proskurnia", "William Chan" ], "title": "Insertion-Deletion Transformer", "venue": "In EMNLP: Workshop of Neural Generation and Translation,", "year": 2019 }, { "authors": [ "Sara Sabour", "William Chan", "Mohammad Norouzi" ], "title": "Optimal Completion Distillation for Sequence Learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Chitwan Saharia", "William Chan", "Saurabh Saxena", "Mohammad Norouzi" ], "title": "Non-Autoregressive Machine Translation with Latent Alignments", "venue": "arXiv preprint arXiv:2004.07437,", "year": 2020 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P. Kingma" ], "title": "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Saeed Saremi", "Arash Mehrjou", "Bernhard Schölkopf", "Aapo Hyvärinen" ], "title": "Deep Energy Estimator Networks", "venue": "arXiv preprint arXiv:1805.08306,", "year": 2018 }, { "authors": [ "Andrew M. Saxe", "James L. McClelland", "Surya Ganguli" ], "title": "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Jonathan Shen", "Ruoming Pang", "Ron J. Weiss", "Mike Schuster", "Navdeep Jaitly", "Zongheng Yang", "Zhifeng Chen", "Yu Zhang", "Yuxuan Wang", "RJ Skerrv-Ryan", "Rif A. Saurous", "Yannis Agiomyrgiannakis", "Yonghui Wu" ], "title": "Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions", "venue": null, "year": 2018 }, { "authors": [ "Jascha Sohl-Dickstein", "Eric A. 
Weiss", "Niru Maheswaranathan", "Surya Ganguli" ], "title": "Deep Unsupervised Learning using Nonequilibrium Thermodynamics", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Eunwoo Song", "Kyungguen Byun", "Hong-Goo Kang" ], "title": "ExcitNet Vocoder: A Neural Excitation Model for Parametric Speech Synthesis Systems", "venue": "In EUSIPCO,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative Modeling by Estimating Gradients of the Data Distribution", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Improved Techniques for Training Score-Based Generative Models", "venue": "arXiv preprint arXiv:2006.09011,", "year": 2020 }, { "authors": [ "Yang Song", "Sahaj Garg", "Jiaxin Shi", "Stefano Ermon" ], "title": "Sliced Score Matching: A Scalable Approach to Density and Score Estimation", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Jose Sotelo", "Soroush Mehri", "Kundan Kumar", "Joao Felipe Santos", "Kyle Kastner", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Char2Wav: End-to-End Speech Synthesis", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Mitchell Stern", "William Chan", "Jamie Kiros", "Jakob Uszkoreit" ], "title": "Insertion Transformer: Flexible Sequence Generation via Insertion Operations", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Jean-Marc Valin", "Jan Skoglund" ], "title": "LPCNet: Improving Neural Speech Synthesis through Linear Prediction", "venue": "In ICASSP,", "year": 2019 }, { "authors": [ "Sean Vasquez", "Mike Lewis" ], "title": "MelNet: A Generative Model for Audio in the Frequency Domain", "venue": "arXiv preprint arXiv:1906.01083,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention Is All You Need", "venue": null, "year": 2017 }, { "authors": [ "Pascal Vincent" ], "title": "A Connection Between Score Matching and Denoising Autoencoders", "venue": "Neural Computation,", "year": 2011 }, { "authors": [ "Xin Wang", "Shinji Takaki", "Junichi Yamagishi" ], "title": "Neural Source-Filter Waveform Models for Statistical Parametric Speech Synthesis", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2020 }, { "authors": [ "Yuxuan Wang", "RJ Skerry-Ryan", "Daisy Stanton", "Yonghui Wu", "Ron J. Weiss", "Navdeep Jaitly", "Zongheng Yang", "Ying Xiao", "Zhifeng Chen", "Samy Bengio", "Quoc Le", "Yannis Agiomyrgiannakis", "Rob Clark", "Rif A. 
Saurous" ], "title": "Tacotron: Towards End-to-End Speech Synthesis", "venue": null, "year": 2017 }, { "authors": [ "Ning-Qian Wu", "Zhen-Hua Ling" ], "title": "WaveFFJORD: FFJORD-Based Vocoder for Statistical Parametric Speech Synthesis", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Ryuichi Yamamoto", "Eunwoo Song", "Jae-Min Kim" ], "title": "Parallel WaveGAN: A Fast Waveform Generation Model Based on Generative Adversarial Networks with Multi-Resolution Spectrogram", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Geng Yang", "Shan Yang", "Kai Liu", "Peng Fang", "Wei Chen", "Lei Xie" ], "title": "Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech", "venue": "arXiv preprint arXiv:2005.05106,", "year": 2020 }, { "authors": [ "Jinhyeok Yang", "Junmo Lee", "Youngik Kim", "Hoonyoung Cho", "Injung Kim" ], "title": "VocGAN: A HighFidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network", "venue": "arXiv preprint arXiv:2007.15256,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep generative models have revolutionized speech synthesis (Oord et al., 2016; Sotelo et al., 2017; Wang et al., 2017; Biadsy et al., 2019; Jia et al., 2019; Vasquez & Lewis, 2019). Autoregressive models, in particular, have been popular for raw audio generation thanks to their tractable likelihoods, simple inference procedures, and high fidelity samples (Oord et al., 2016; Mehri et al., 2017; Kalchbrenner et al., 2018; Song et al., 2019; Valin & Skoglund, 2019). However, autoregressive models require a large number of sequential computations to generate an audio sample. This makes it challenging to deploy them in real-world applications where faster than real time generation is essential, such as digital voice assistants on smart speakers, even using specialized hardware.\nThere has been a plethora of research into non-autoregressive models for audio generation, including normalizing flows such as inverse autoregressive flows (Oord et al., 2018; Ping et al., 2019), generative flows (Prenger et al., 2019; Kim et al., 2019), and continuous normalizing flows (Kim et al., 2020; Wu & Ling, 2020), implicit generative models such as generative adversarial networks (GAN) (Donahue et al., 2018; Engel et al., 2019; Kumar et al., 2019; Yamamoto et al., 2020; Bińkowski et al., 2020; Yang et al., 2020a;b; McCarthy & Ahmed, 2020) and energy score (Gritsenko et al., 2020), variational auto-encoder models (Peng et al., 2020), as well as models inspired by digital signal processing (Ai & Ling, 2020; Engel et al., 2020), and the speech production mechanism (Juvela et al., 2019; Wang et al., 2020). Although such models improve inference speed by requiring fewer sequential operations, they often yield lower quality samples than autoregressive models.\nThis paper introduces WaveGrad, a conditional generative model of waveform samples that estimates the gradients of the data log-density as opposed to the density itself. WaveGrad is simple to train, and implicitly optimizes for the weighted variational lower-bound of the log-likelihood.\n∗Work done during an internship at Google Brain. †Equal contribution.\nWaveGrad is non-autoregressive, and requires only a constant number of generation steps during inference. Figure 1 visualizes the inference process of WaveGrad.\nWaveGrad builds on a class of generative models that emerges through learning the gradient of the data log-density, also known as the Stein score function (Hyvärinen, 2005; Vincent, 2011). During inference, one can rely on the gradient estimate of the data log-density and use gradient-based samplers (e.g., Langevin dynamics) to sample from the model (Song & Ermon, 2019). Promising results have been achieved on image synthesis (Song & Ermon, 2019; 2020) and shape generation (Cai et al., 2020). Closely related are diffusion probabilistic models (Sohl-Dickstein et al., 2015), which capture the output distribution through a Markov chain of latent variables. Although these models do not offer tractable likelihoods, one can optimize a (weighted) variational lower-bound on the log-likelihood. The training objective can be reparameterized to resemble deonising score matching (Vincent, 2011), and can be interpreted as estimating the data log-density gradients. 
The model is non-autoregressive during inference, requiring only a constant number of generation steps, using a Langevin dynamics-like sampler to generate the output beginning from Gaussian noise.\nThe key contributions of this paper are summarized as follows:\n• WaveGrad combines recent techniques from score matching (Song et al., 2020; Song & Ermon, 2020) and diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) to address conditional speech synthesis.\n• We build and compare two variants of the WaveGrad model: (1) WaveGrad conditioned on a discrete refinement step index following Ho et al. (2020), (2) WaveGrad conditioned on a continuous scalar indicating the noise level. We find this novel continuous variant is more effective, especially because once the model is trained, different number of refinement steps can be used for inference. The proposed continuous noise schedule enables our model to use fewer inference iterations while maintaining the same quality (e.g., 6 vs. 50).\n• We demonstrate that WaveGrad is capable of generating high fidelity audio samples, outperforming adversarial non-autoregressive models (Yamamoto et al., 2020; Kumar et al., 2019; Yang et al., 2020a; Bińkowski et al., 2020) and matching one of the best autoregressive models (Kalchbrenner et al., 2018) in terms of subjective naturalness. WaveGrad is capable of generating high fidelity samples using as few as six refinement steps." }, { "heading": "2 ESTIMATING GRADIENTS FOR WAVEFORM GENERATION", "text": "We begin with a brief review of the Stein score function, Langevin dynamics, and score matching. The Stein score function (Hyvärinen, 2005) is the gradient of the data log-density log p(y) with respect to the datapoint y:\ns(y) = ∇y log p(y). (1)\nGiven the Stein score function s(·), one can draw samples from the corresponding density, ỹ ∼ p(y), via Langevin dynamics, which can be interpreted as stochastic gradient ascent in the data space:\nỹi+1 = ỹi + η\n2 s(ỹi) +\n√ η zi, (2)\nwhere η > 0 is the step size, zi ∼ N (0, I), and I denotes an identity matrix. A variant (Ho et al., 2020) is used as our inference procedure.\nA generative model can be built by training a neural network to learn the Stein score function directly, using Langevin dynamics for inference. This approach, known as score matching (Hyvärinen, 2005; Vincent, 2011), has seen success in image (Song & Ermon, 2019; 2020) and shape (Cai et al., 2020) generation. The denoising score matching objective (Vincent, 2011) takes the form:\nEy∼p(y) Eỹ∼q(ỹ|y) [∥∥∥sθ(ỹ)−∇ỹ log q(ỹ | y)∥∥∥2\n2\n] , (3)\nwhere p(·) is the data distribution, and q(·) is a noise distribution. Recently, Song & Ermon (2019) proposed a weighted denoising score matching objective, in which data is perturbed with different levels of Gaussian noise, and the score function sθ(ỹ, σ) is conditioned on σ, the standard deviation of the noise used:∑\nσ∈S λ(σ)Ey∼p(y) Eỹ∼N (y,σ)\n[∥∥∥∥sθ(ỹ, σ) + ỹ − yσ2 ∥∥∥∥2\n2\n] , (4)\nwhere S is a set of standard deviation values that are used to perturb the data, and λ(σ) is a weighting function for different σ. WaveGrad is a variant of this approach applied to learning conditional generative models of the form p(y | x). WaveGrad adopts a similar objective which combines the idea of Vincent (2011); Ho et al. (2020); Song & Ermon (2019). 
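As a concrete reading of Equation 2, the few lines below run Langevin dynamics against a score function. The step size, number of steps, and the unit-Gaussian example with analytic score s(y) = -y are placeholders standing in for a trained score network, not settings taken from the paper.

```python
import numpy as np

def langevin_sample(score_fn, shape, n_steps=500, eta=1e-2, rng=None):
    """Stochastic gradient ascent in data space (Eq. 2):
    y_{i+1} = y_i + (eta / 2) * s(y_i) + sqrt(eta) * z_i,  z_i ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    y = rng.standard_normal(shape)
    for _ in range(n_steps):
        z = rng.standard_normal(shape)
        y = y + 0.5 * eta * score_fn(y) + np.sqrt(eta) * z
    return y

# Example with a known density: a unit Gaussian has score s(y) = -y.
samples = langevin_sample(lambda y: -y, shape=(1000,))
```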
WaveGrad learns the gradient of the data density, and uses a sampler similar to Langevin dynamics for inference.\nThe denoising score matching framework relies on a noise distribution to provide support for learning the gradient of the data log density (i.e., q in Equation 3, andN (·, σ) in Equation 4). The choice of the noise distribution is critical for achieving high quality samples (Song & Ermon, 2020). As shown in Figure 2, WaveGrad relies on the diffusion model framework (Sohl-Dickstein et al., 2015; Ho et al., 2020) to generate the noise distribution used to learn the score function." }, { "heading": "2.1 WAVEGRAD AS A DIFFUSION PROBABILISTIC MODEL", "text": "Ho et al. (2020) observed that diffusion probabilistic models (Sohl-Dickstein et al., 2015) and score matching objectives (Song & Ermon, 2019; Vincent, 2011; Song & Ermon, 2020) are closely related. As such, we will first introduce WaveGrad as a diffusion probabilistic model.\nWe adapt the diffusion model setup in Ho et al. (2020), from unconditional image generation to conditional raw audio waveform generation. WaveGrad models the conditional distribution pθ(y0 |\nAlgorithm 1 Training. WaveGrad directly conditions on the continuous noise level √ ᾱ. l is from a predefined noise schedule.\n1: repeat 2: y0 ∼ q(y0) 3: s ∼ Uniform({1, . . . , S}) 4: √ ᾱ ∼ Uniform(ls−1, ls) 5: ∼ N (0, I) 6: Take gradient descent step on\n∇θ ∥∥ − θ(√ᾱ y0 +√1− ᾱ , x,√ᾱ)∥∥1\n7: until converged\nAlgorithm 2 Sampling. WaveGrad generates samples following a gradient-based sampler similar to Langevin dynamics.\n1: yN ∼ N (0, I) 2: for n = N, . . . , 1 do 3: z ∼ N (0, I)\n4: yn−1 =\n( yn− 1−αn√1−ᾱn θ(yn,x, √ ᾱn) ) √ αn\n5: if n > 1, yn−1 = yn−1 + σnz 6: end for 7: return y0\nx) where y0 is the waveform and x contains the conditioning features corresponding to y0, such as linguistic features derived from the corresponding text, mel-spectrogram features extracted from y0, or acoustic features predicted by a Tacotron-style text-to-speech synthesis model (Shen et al., 2018):\npθ(y0 | x) := ∫ pθ(y0:N | x) dy1:N , (5)\nwhere y1, . . . , yN is a series of latent variables, each of which are of the same dimension as the data y0, and N is the number of latent variables (iterations). The posterior q(y1:N | y0) is called the diffusion process (or forward process), and is defined through the Markov chain:\nq(y1:N | y0) := N∏ n=1 q(yn | yn−1), (6)\nwhere each iteration adds Gaussian noise: q(yn | yn−1) := N ( yn; √ (1− βn) yn−1, βnI ) , (7)\nunder some (fixed constant) noise schedule β1, . . . , βN . We emphasize the property observed by Ho et al. (2020), the diffusion process can be computed for any step n in a closed form:\nyn = √ ᾱn y0 + √ (1− ᾱn) (8)\nwhere ∼ N (0, I), αn := 1− βn and ᾱn := ∏n s=1 αs. The gradient of this noise distribution is\n∇yn log q(yn | y0) = − √ 1− ᾱn . (9)\nHo et al. (2020) proposed to train on pairs (y0, yn), and to reparameterize the neural network to model θ. This objective resembles denoising score matching as in Equation 3 (Vincent, 2011):\nEn, [ Cn ∥∥ θ (√ᾱn y0 +√1− ᾱn , x, n)− ∥∥22] , (10)\nwhere Cn is a constant related to βn. In practice Ho et al. (2020) found it beneficial to drop the Cn term, resulting in a weighted variational lower bound of the log-likelihood. Additionally in Ho et al. (2020), θ conditions on the discrete index n, as we will discuss further below. We also found that substituting the original L2 distance metric with L1 offers better training stability." 
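A compact sketch of the closed-form forward process in Equation 8 and the reparameterized L1 training step discussed above, written for the discrete-index variant. Here wavegrad_model stands in for the network of Figure 3, and the linear beta schedule values are only an example rather than the schedule used in the experiments.

```python
import torch
import torch.nn.functional as F

# Example noise schedule: N steps of linearly increasing beta (illustrative values).
N = 1000
betas = torch.linspace(1e-4, 5e-3, N)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)        # bar{alpha}_n = prod_{s<=n} alpha_s

def training_step(wavegrad_model, y0, x, optimizer):
    """One step on E || eps_theta(sqrt(ab) y0 + sqrt(1 - ab) eps, x, n) - eps ||_1."""
    n = torch.randint(0, N, (y0.shape[0],))                  # iteration index per example
    ab = alpha_bars[n].view(-1, *([1] * (y0.dim() - 1)))     # broadcast over waveform dims
    eps = torch.randn_like(y0)
    yn = torch.sqrt(ab) * y0 + torch.sqrt(1.0 - ab) * eps    # closed-form diffusion (Eq. 8)
    eps_pred = wavegrad_model(yn, x, n)                      # network predicts the noise
    loss = F.l1_loss(eps_pred, eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the continuous-noise-level variant of Algorithm 1, the integer index n passed to the model is replaced by sqrt(alpha_bar) sampled uniformly from a segment (l_{s-1}, l_s) of a reference schedule.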
}, { "heading": "2.2 NOISE SCHEDULE AND CONDITIONING ON NOISE LEVEL", "text": "In the score matching setup, Song & Ermon (2019; 2020) noted the importance of the choice of noise distribution used during training, since it provides support for modelling the gradient distribution. The diffusion framework can be viewed as a specific approach to providing support to score matching, where the noise schedule is parameterized by β1, . . . , βN , as described in the previous section. This is typically determined via some hyperparameter heuristic, e.g., a linear decay schedule (Ho et al., 2020). We found the choice of the noise schedule to be critical towards achieving high fidelity audio in our experiments, especially when trying to minimize the number of inference iterations N to make inference efficient. A schedule with superfluous noise may result in a model\nunable to recover the low amplitude detail of the waveform, while a schedule with too little noise may result in a model that converges poorly during inference. Song & Ermon (2020) provide some insights around tuning the noise schedule under the score matching framework. We will connect some of these insights and apply them to WaveGrad under the diffusion framework.\nAnother closely related problem is determining N , the number of diffusion/denoising steps. A large N equips the model with more computational capacity, and may improve sample quality. However using a small N results in faster inference and lower computational costs. Song & Ermon (2019) used N = 10 to generate 32 × 32 images, while Ho et al. (2020) used 1,000 iterations to generate high resolution 256× 256 images. In our case, WaveGrad generates audio sampled at 24 kHz. We found that tuning both the noise schedule and N in conjunction was critical to attaining high fidelity audio, especially when N is small. If these hyperparameters are poorly tuned, the training sampling procedure may provide deficient support for the distribution. Consequently, during inference, the sampler may converge poorly when the sampling trajectory encounters regions that deviate from the conditions seen during training. However, tuning these hyperparameters can be costly due to the large search space, as a large number of models need to be trained and evaluated. We make empirical observations and discuss this in more details in Section 4.4.\nWe address some of the issues above in our WaveGrad implementation. First, compared to the diffusion probabilistic model from Ho et al. (2020), we reparameterize the model to condition on the continuous noise level ᾱ instead of the discrete iteration index n. The loss becomes\nEᾱ, [∥∥∥ θ (√ᾱ y0 +√1− ᾱ , x,√ᾱ)− ∥∥∥\n1\n] , (11)\nA similar approach was also used in the score matching framework (Song & Ermon, 2019; 2020), wherein they conditioned on the noise variance.\nThere is one minor technical issue we must resolve in this approach. In the diffusion probabilistic model training procedure conditioned on the discrete iteration index (Equation 10), we would sample n ∼ Uniform({1, . . . , N}), and then compute its corresponding αn. When directly conditioning on the continuous noise level, we need to define a sampling procedure that can directly sample ᾱ. Recall that ᾱn := ∏n s (1 − βs) ∈ [0, 1]. While we could simply sample from the uniform distribution ᾱ ∼ Uniform(0, 1), we found this to give poor empirical results. Instead, we use a simple hierarchical sampling method that mimics the discrete sampling strategy. 
We first define a noise schedule with S iterations and compute all of its corresponding values of √ᾱ:\nl0 = 1, ls = √(∏_{i=1}^{s} (1 − βi)). (12)\nWe then sample a segment s ∼ U({1, . . . , S}), which provides the interval (ls, ls−1), and sample uniformly from this interval to give √ᾱ. The full WaveGrad training algorithm using this sampling procedure is illustrated in Algorithm 1.\nFigure 3: WaveGrad network architecture. The inputs consist of the mel-spectrogram conditioning signal x, the noisy waveform generated from the previous iteration yn, and the noise level √ᾱ. The model produces εn at each iteration, which can be interpreted as the direction in which to update yn.\nOne benefit of this variant is that the model needs to be trained only once, yet inference can be run over a large space of trajectories without the need to retrain. Specifically, once we train a model, we can use a different number of iterations N during inference, making it possible to explicitly trade off inference computation against output quality with a single model. This also makes fast hyperparameter search possible, as we will illustrate in Section 4.4. The full inference algorithm is given in Algorithm 2, the full WaveGrad architecture is visualized in Figure 3, and details are included in Appendix A.\n3 RELATED WORK\nThis work is inspired in part by Sohl-Dickstein et al. (2015), which applies diffusion probabilistic models to unconditional image synthesis, whereas we apply diffusion probabilistic models to conditional generation of waveforms. The objective we use also resembles the Noise Conditional Score Networks (NCSN) objective of Song & Ermon (2019). Similar to Song & Ermon (2019; 2020), our models condition on a continuous scalar indicating the noise level. Denoising score matching (Vincent, 2011) and sliced score matching (Song et al., 2020) also use similar objective functions; however, they do not condition on the noise level. The work of Saremi et al. (2018) on score matching is also related, in that their objective accounts for a noise hyperparameter. Finally, Cai et al. (2020) applied NCSN to model conditional distributions for shape generation, while our focus is waveform generation.\nWaveGrad also closely relates to the mask-based generative models (Devlin et al., 2019; Lee et al., 2018; Ghazvininejad et al., 2019; Chan et al., 2020; Saharia et al., 2020), insertion-based generative models (Stern et al., 2019; Chan et al., 2019b;a;c; Li & Chan, 2019), and edit-based generative models (Sabour et al., 2019; Gu et al., 2019; Ruis et al., 2019) found in the semi-autoregressive sequence generation literature. These approaches model discrete tokens and use edit operations (e.g., insertion, substitution, deletion), whereas in our work we model the (continuous) gradients in a continuous output space. Edit-based models can also iteratively refine the outputs during inference (Lee et al., 2018; Ghazvininejad et al., 2019; Chan et al., 2020), but they rely on a (discrete) edit-based sampler rather than a (continuous) gradient-based one. 
The noise distribution play a key role, token masking based on Bernoulli (Devlin et al., 2019), uniform (Saharia et al., 2020), or hand-crafted (Chan et al., 2020) distributions have been used to enable learning the edit distribution. We rely on a Markov chain from the diffusion framework (Ho et al., 2020) to sample perturbations.\nWe note that the concurrent work of Kong et al. (2020) also applies the diffusion framework of Ho et al. (2020) to waveform generation. Their model conditions on a discrete iteration index whereas we find that conditioning on a continuous noise level offers improved flexibility and enables generating high fidelity audio as few as six refinement steps. By contrast, Kong et al. (2020) report performance using 20 refinement steps and evaluate their models when conditioned on ground truth mel-spectrogram. We evaluate WaveGrad when conditioned on Tacotron 2 mel-spectrogram predictions, which corresponds to a more realistic TTS setting.\nThe neural network architecture of WaveGrad is heavily inspired by GAN-TTS (Bińkowski et al., 2020). The upsampling block (UBlock) of WaveGrad follows the GAN-TTS generator, with a minor difference that no BatchNorm is used." }, { "heading": "4 EXPERIMENTS", "text": "We compare WaveGrad with other neural vocoders and carry out ablations using different noise schedules. We find that WaveGrad achieves the same sample quality as the fully autoregressive state-of-the-art model of Kalchbrenner et al. (2018) (WaveRNN) on both internal datasets (Table 1) and LJ Speech (Ito & Johnson, 2017) (Table C.1) with less sequential operations." }, { "heading": "4.1 MODEL AND TRAINING SETUP", "text": "We trained models using a proprietary speech dataset consisted of 385 hours of high quality English speech from 84 professional voice talents. For evaluation, we chose a female speaker in the training dataset. Speech signals were downsampled to 24 kHz then 128-dimensional mel-spectrogram features (50 ms Hanning window, 12.5 ms frame shift, 2048-point FFT, 20 Hz & 12 kHz lower & upper frequency cutoffs) were extracted. During training, mel-spectrograms computed from ground truth audio were used as the conditioning features x. However, during inference, we used predicted mel-spectrograms generated by a Tacotron 2 model (Shen et al., 2018) as the conditioning signal. Although there was a mismatch in the conditioning signals between training and inference, unlike Shen et al. (2018), preliminary experiments demonstrated that training using ground truth mel-spectrograms as conditioning had no regression compared to training using predicted features. This property is highly beneficial as it significantly simplifies the training process of text-to-speech\nmodels: the WaveGrad vocoder model can be trained separately on a large corpus without relying on a pretrained text-to-spectrogram model.\nModel Size: Two network size variations were compared: Base and Large. The WaveGrad Base model took 24 frames corresponding to 0.3 seconds of audio (7,200 samples) as input during training. We set the batch size to 256. Models were trained on using 32 Tensor Processing Unit (TPU) v2 cores. The WaveGrad Base model contained 15M parameters. For the WaveGrad Large model, we repeated each UBlock/DBlock twice, one with upsampling/downsampling and another without. Each training sample included 60 frames corresponding to a 0.75 second of audio (18,000 samples). We used the same batch size and trained the model using 128 TPU v3 cores. 
The WaveGrad Large model contained 23M parameters. Both Base and Large models were trained for about 1M steps. The network architecture is fully convolutional and non-autoregressive thus it is highly parallelizable at both training and inference.\nNoise Schedule: All noise schedules we used can be found in Appendix B." }, { "heading": "4.2 EVALUATION", "text": "The following models were used as baselines in this experiment: (1) WaveRNN (Kalchbrenner et al., 2018) conditioned on mel-spectrograms predicted by a Tacotron 2 model in teacher-forcing mode following Shen et al. (2018); The model used a single long short-term memory (LSTM) layer with 1,024 hidden units, 5 convolutional layers with 512 channels as the conditioning stack to process the mel-spectrogram features, and a 10-component mixture of logistic distributions (Salimans et al., 2017) as its output layer, generating 16-bit samples at 24 kHz. It had 18M parameters and was trained for 1M steps. Preliminary experiments indicated that further reducing the number of units in the LSTM layer hurts performance. (2) Parallel WaveGAN (Yamamoto et al., 2020) with 1.57M parameters, trained for 1M steps. (3) MelGAN (Kumar et al., 2019) with 3.22M parameters, trained for 4M steps. (4) Multi-band MelGAN (Yang et al., 2020a) with 2.27M parameters, trained for 1M steps. (5) GAN-TTS (Bińkowski et al., 2020) with 21.4M parameters, trained for 1M steps.\nAll models were trained using the same training set as the WaveGrad models. Following the original papers, Parallel WaveGAN, MelGAN, and Multi-band MelGAN were conditioned on the melspectrograms computed from ground truth audio during training. They were trained using a publicly available implementation at https://github.com/kan-bayashi/ParallelWaveGAN. Note that hyper-parameters of these baseline models were not fully optimized for this dataset.\nTo compare these models, we report subjective listening test results rating speech naturalness on a 5-point Mean Opinion Score (MOS) scale, following the protocol described in Appendix D. Conditioning mel-spectrograms for the test set were predicted using a Tacotron 2 model, which were passed to these models to synthesize audio signals. Note that the Tacotron 2 model was identical to the one used to predict mel-spectrograms for training WaveRNN and GAN-TTS models." }, { "heading": "4.3 RESULTS", "text": "Subjective evaluation results are summarized in Table 1. Models conditioned on discrete indices followed the formulation from Section 2.1, and models conditioned on continuous noise level followed the formulation from Section 2.2. WaveGrad models matched the performance of the autoregressive WaveRNN baseline and outperformed the non-autoregressive baselines. Although increasing the model size slightly improved naturalness, the difference was not statistically significant. The WaveGrad Base model using six iterations achieved a real time factor (RTF) of 0.2 on an NVIDIA V100 GPU, while still achieving an MOS above 4.4. As a comparison, the WaveRNN model achieved a RTF of 20.1 on the same GPU, 100 times slower. More detailed discussion is in Section 4.4. Appendix C contains results on a public dataset using the same model architecture and noise schedule." }, { "heading": "4.4 DISCUSSION", "text": "To understand the impact of different noise schedules and to reduce the number of iterations in the noise schedule from 1,000, we explored different noise schedules using fewer iterations. 
We found that a well-behaved inference schedule should satisfy two conditions:\nIn this section, all the experiments were conducted with the WaveGrad Base model. Both objective and subjective evaluation results are reported. The objective evaluation metrics include\n1. Log-mel spectrogram mean squared error metrics (LS-MSE), computed using 50 ms window length and 6.25 ms frame shift; 2. Mel cepstral distance (MCD) (Kubichek, 1993), a similar MSE metric computed using 13- dimensional mel frequency cepstral coefficient features; 3. F0 Frame Error (FFE) (Chu & Alwan, 2009), combining Gross Pitch Error and Voicing Decision metrics to measure the signal proportion whose estimated pitch differs from ground truth.\nSince the ground truth waveform is required to compute objective evaluation metrics, we report results using ground truth mel-spectrograms as conditioning features. We used a validation set of 50 utterances for objective evaluation, including audio samples from multiple speakers. Note that for MOS evaluation, we used the same subjective evaluation protocol described in Appendix D. We experimented with different noise schedules and number of iterations. These models were trained with conditioning on the discrete index. Subjective and quantitative evaluation results are in Table 2.\nWe also performed a detailed study on the the WaveGrad model conditioned on the continuous noise level in the bottom part of Table 2. Compared to the model conditioned on the discrete index with a fixed training schedule (top of Table 2), conditioning on the continuous noise level generalized better, especially if the number of iterations was small. It can be seen from Table 2 that degradation with the model with six iterations was not significant. The model with six iterations achieved real time factor (RTF) = 0.2 on an NVIDIA V100 GPU and RTF = 1.5 on an Intel Xeon CPU (16 cores, 2.3GHz). As we did not optimize the inference code, further speed ups are likely possible." }, { "heading": "5 CONCLUSION", "text": "In this paper, we presented WaveGrad, a novel conditional model for waveform generation which estimates the gradients of the data density, following the diffusion probabilistic model (Ho et al., 2020) and score matching framework (Song et al., 2020; Song & Ermon, 2020). WaveGrad starts from Gaussian white noise and iteratively updates the signal via a gradient-based sampler conditioned on the mel-spectrogram. WaveGrad is non-autoregressive, and requires only a constant number of generation steps during inference. We find that the model can generate high fidelity audio samples using as few as six iterations. WaveGrad is simple to train, and implicitly optimizes for the\nweighted variational lower-bound of the log-likelihood. The empirical experiments demonstrated WaveGrad to generate high fidelity audio samples matching a strong autoregressive baseline.\nAUTHOR CONTRIBUTIONS\nNanxin Chen wrote code, proposed the idea, ran all experiments and wrote the paper. Yu Zhang recruited collaborators, co-managed/advised the project, conducted evaluation, debugging model and editing paper. Heiga Zen helped conducting text-to-speech experiments and advised the project. Ron Weiss implemented the objective evaluation metrics and advised the project. Mohammad Norouzi suggested the use of denoising diffusion models for audio generation and helped with writing and advising the project. William Chan conceived the project, wrote code, wrote paper, and co-managed/advised the project." 
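As a concrete illustration of the LS-MSE metric used in Section 4.4, the following is a minimal sketch assuming librosa for feature extraction; only the 50 ms window and 6.25 ms frame shift are specified in the text, so the remaining analysis parameters below are assumptions.

```python
import numpy as np
import librosa

def ls_mse(reference, generated, sr=24000):
    """Log-mel spectrogram mean squared error between two waveforms.

    Uses a 50 ms window and 6.25 ms frame shift as in Section 4.4; the number of
    mel bands and frequency cutoffs below are assumptions, not the paper's exact values.
    """
    win = int(0.050 * sr)    # 50 ms  -> 1200 samples at 24 kHz
    hop = int(0.00625 * sr)  # 6.25 ms -> 150 samples at 24 kHz

    def log_mel(y):
        mel = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=2048, win_length=win, hop_length=hop,
            n_mels=128, fmin=20, fmax=12000)
        return np.log(mel + 1e-5)

    ref, gen = log_mel(reference), log_mel(generated)
    frames = min(ref.shape[1], gen.shape[1])  # guard against off-by-one frame counts
    return float(np.mean((ref[:, :frames] - gen[:, :frames]) ** 2))
```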
}, { "heading": "ACKNOWLEDGMENTS", "text": "The authors would like to thank Durk Kingma, Yang Song, Kevin Swersky and Yonghui Wu for providing insightful research discussions and feedback. We also would like to thank Norman Casagrande for helping us to include the GAN-TTS baseline." }, { "heading": "A NEURAL NETWORK ARCHITECTURE", "text": "To convert the mel-spectrogram signal (80 Hz) into raw audio (24 kHz), five upsampling blocks (UBlock) are applied to gradually upsample the temporal dimension by factors of 5, 5, 3, 2, 2, with the number of channels of 512, 512, 256, 128, 128 respectively. Additionally, one convolutional layer is added before and after these blocks.\nThe UBlock is illustrated in Figure 4. Each UBlock includes two residual blocks (He et al., 2016). Neural audio generation models often use large receptive field (Oord et al., 2016; Bińkowski et al., 2020; Yamamoto et al., 2020). The dilation factors of four convolutional layers are 1, 2, 4, 8 for the first three UBlocks and 1, 2, 1, 2 for the rest. Upsampling is carried out by repeating the nearest input. For the large model, we use 1, 2, 4, 8 for all UBlocks. As an iterative approach, the network prediction is also conditioned on noisy waveform √ ᾱn y0 +√\n1− ᾱn . Downsampling blocks (DBlock), illustrated in Figure 5, are introduced to downsample the temporal dimension of the noisy waveform. The DBlock is similar to UBlock except that only one residual block is included. The dilation factors are 1, 2, 4 in the main branch. Downsampling is carried out by convolution with strides. Orthogonal initialization (Saxe et al., 2014) is used for all UBlocks and DBlocks.\nThe feature-wise linear modulation (FiLM) (Dumoulin et al., 2018) module combines information from both noisy waveform and input mel-spectrogram. We also represent the iteration index n, which indicates the noise level of the input waveform, using Transformer-style sinusoidal positional embeddings (Vaswani et al., 2017). To condition on the noise level directly, we also utilize the sinusoidal embeddings where 5000 √ ᾱ instead of n is used. The FiLM module produces both scale and bias vectors given inputs, which are used in a UBlock for feature-wise affine transformation as\nγ(D, √ ᾱ) U + ξ(D, √ ᾱ), (13)\nwhere γ and ξ correspond to the scaling and shift vectors from the FiLM module, D is the output from corresponding DBlock, U is an intermediate output in the UBlock, and denotes the Hadamard product.\nAn overview of the FiLM module is illustrated in Figure 6. The structure is inspired by spatiallyadaptive denormalization (Park et al., 2019). However batch normalization (Ioffe & Szegedy, 2015) is not applied in our work since each minibatch contains samples with different levels of noise. Batch statistics are not accurate since they are heavily dependent on sampled noise level. Experiment\nresults also verified our assumption that models trained with batch normalization generate lowquality audio." }, { "heading": "B NOISE SCHEDULE", "text": "For the WaveGrad Base model, we tested different noise schedules during training. For 1000 and 50 iterations, we set the forward process variances to constants increasing linearly from β1 to βN , defined as Linear(β1, βN , N ). We used Linear(1× 10−4, 0.005, 1000) for 1000 iterations and Linear(1× 10−4, 0.05, 50) for 50 iterations. For 25 iteration, a different Fibonacci-based schedule was adopted (referred to as Fibonacci(N )):\nβ0 = 1× 10−6 β1 = 2× 10−6\nβn = βn−1 + βn−2 ∀n ≥ 2. 
(14)\nWhen a fixed schedule was used during training, the same schedule was used during inference. We found that a mismatch in the noise schedule degraded performance. To sample the noise level √ ᾱ, we set the maximal iteration S to 1000 and precompute l1 to lS from Linear(1× 10−6, 0.01, 1000). Unlike the base fixed schedule, WaveGrad support using a different schedule during inference thus “Manual” schedule was also explored to demonstrate the possibilities with WaveGrad. For example, the 6-iteration inference schedule was explored by sweeping the βs over following possibilities:\n{1, 2, 3, 4, 5, 6, 7, 8, 9} × 10−6, 10−5, 10−4, 10−3, 10−2, 10−1 (15)\nAgain, we did not need to train individual models for such hyper-parameter tuning. Here we used LS-MSE as a metric for tuning. All noise schedules and corresponding √ ᾱ are plotted in Figure 7." }, { "heading": "C RESULTS FOR LJ SPEECH", "text": "We ran experiments using the LJ Speech dataset (Ito & Johnson, 2017), a publicly available dataset consisting of audiobook recordings that were segmented into utterances of up to 10 seconds. We trained on a 12,764-utterance subset (23 hours) and evaluated on a held-out 130-utterance subset, following Battenberg et al. (2020). During training, mel-spectrograms computed from ground truth audio was used as the conditioning features. We used the held-out subset for evaluating synthesized speech with ground truth features. Results are presented in Table C.1. For this dataset, larger network size is beneficial and WaveGrad also matches the performance of the autoregressive baseline." }, { "heading": "D SUBJECTIVE LISTENING TEST PROTOCOL", "text": "The test set included 1,000 sentences. Subjects were asked to rate the naturalness of each stimulus after listening to it. Following previous studies, a five-point Likert scale score (1: Bad, 2: Poor, 3:\nTable E.2: Reported mean opinion scores (MOS) of various models and their confidence intervals. “Linguistic” and “Mel” in the “Features” column indicate that linguistic features and melspectrogram were used as conditioning, respectively.\nModel Features Sample Rate MOS Autoregressive\nWaveNet (Oord et al., 2016) Linguistic 16 kHz 4.21± 0.08 WaveNet (Oord et al., 2018) Linguistic 24 kHz 4.41± 0.07 WaveNet (Shen et al., 2018) Mel 24 kHz 4.53± 0.07 WaveRNN (Kalchbrenner et al., 2018) Linguistic 24 kHz 4.46± 0.07 Non-autoregressive Parallel WaveNet (Oord et al., 2018) Linguistic 24 kHz 4.41± 0.08 GAN-TTS (Bińkowski et al., 2020) Linguistic 24 kHz 4.21± 0.05 GED (Gritsenko et al., 2020) Linguistic 24 kHz 4.25± 0.06\nFair, 4: Good, 5: Excellent) was adopted with rating increments of 0.5. Each subject was allowed evaluate up to six stimuli. Test stimuli were randomly chosen and presented for each subject. Each stimulus was presented to a subject in isolation and was evaluated by one subject. The subjects were paid and native speakers of English living in United States. They were requested to use headphones in a quiet room." }, { "heading": "E SUBJECTIVE SCORES REPORTED IN THE PRIOR WORK", "text": "Table E.2 shows the reported mean opinion scores of the prior work which used the same speaker. Although different papers listed here used the same female speaker, their results are not directly comparable due to differences in the training dataset, sampling rates, conditioning features, and sentences used for evaluation." } ]
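To make the schedules of Appendix B and the noise-level sampling of Equation 12 concrete, a minimal sketch is given below; the Linear(β1, βN, N) and Fibonacci(N) definitions follow the text above, while the random generator and the example setting are illustrative.

```python
import numpy as np

def linear_schedule(beta_1, beta_n, n):
    """Linear(beta_1, beta_N, N): variances increase linearly from beta_1 to beta_N."""
    return np.linspace(beta_1, beta_n, n)

def fibonacci_schedule(n):
    """Fibonacci(N) schedule from Appendix B (Eq. 14)."""
    betas = [1e-6, 2e-6]
    while len(betas) < n:
        betas.append(betas[-1] + betas[-2])
    return np.array(betas[:n])

def segment_boundaries(betas):
    """l_0 = 1, l_s = sqrt(prod_{i<=s} (1 - beta_i))  (Eq. 12)."""
    return np.concatenate([[1.0], np.sqrt(np.cumprod(1.0 - betas))])

def sample_sqrt_alpha_bar(boundaries, rng):
    """Hierarchical sampling of sqrt(a_bar): pick a segment, then sample uniformly in it."""
    s = rng.integers(1, len(boundaries))       # s ~ Uniform({1, ..., S})
    lo, hi = boundaries[s], boundaries[s - 1]  # l_s <= l_{s-1} since noise accumulates
    return rng.uniform(lo, hi)

# Example: the training-time setting described in Appendix B.
rng = np.random.default_rng(0)
l = segment_boundaries(linear_schedule(1e-6, 0.01, 1000))
sqrt_a_bar = sample_sqrt_alpha_bar(l, rng)
```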
2021
WAVEGRAD: ESTIMATING GRADIENTS FOR WAVEFORM GENERATION
SP:11cd869cd8c6dc657c136545fd2029f0c49843ba
[ "The paper presents a benchmark / dataset, HW-NAS-Bench, for evaluating various neural architecture search algorithms. The benchmark is based on extensive measurements on real hardware. An important goal with the proposal is to support neural architecture searches for non-hardware experts. Further, the paper provides a good overview of related work in the domain. " ]
HardWare-aware Neural Architecture Search (HW-NAS) has recently gained tremendous attention by automating the design of deep neural networks deployed in more resource-constrained daily life devices. Despite its promising performance, developing optimal HW-NAS solutions can be prohibitively challenging as it requires cross-disciplinary knowledge in the algorithm, micro-architecture, and device-specific compilation. First, to determine the hardware-cost to be incorporated into the NAS process, existing works mostly adopt either pre-collected hardware-cost look-up tables or device-specific hardware-cost models. The former can be time-consuming due to the required knowledge of the device’s compilation method and how to set up the measurement pipeline, while building the latter is often a barrier for non-hardware experts like NAS researchers. Both of them limit the development of HW-NAS innovations and impose a barrier-to-entry to non-hardware experts. Second, similar to generic NAS, it can be notoriously difficult to benchmark HW-NAS algorithms due to their significant required computational resources and the differences in adopted search spaces, hyperparameters, and hardware devices. To this end, we develop HW-NAS-Bench, the first public dataset for HW-NAS research which aims to democratize HW-NAS research to non-hardware experts and make HW-NAS research more reproducible and accessible. To design HW-NAS-Bench, we carefully collected the measured/estimated hardware performance (e.g., energy cost and latency) of all the networks in the search spaces of both NAS-Bench-201 and FBNet, on six hardware devices that fall into three categories (i.e., commercial edge devices, FPGA, and ASIC). Furthermore, we provide a comprehensive analysis of the collected measurements in HW-NAS-Bench to provide insights for HW-NAS research. Finally, we demonstrate exemplary user cases to (1) show that HW-NAS-Bench allows non-hardware experts to perform HW-NAS by simply querying our premeasured dataset and (2) verify that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost trade-offs. The codes and all collected data are available at https://github.com/RICE-EIC/HW-NAS-Bench.
[ { "affiliations": [], "name": "SEARCH BENCHMARK" }, { "affiliations": [], "name": "Chaojian Li" }, { "affiliations": [], "name": "Zhongzhi Yu" }, { "affiliations": [], "name": "Yonggan Fu" }, { "affiliations": [], "name": "Yongan Zhang" }, { "affiliations": [], "name": "Yang Zhao" }, { "affiliations": [], "name": "Haoran You" }, { "affiliations": [], "name": "Qixuan Yu" }, { "affiliations": [], "name": "Yue Wang" }, { "affiliations": [], "name": "Yingyan Lin" } ]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Hervé Abdi" ], "title": "The kendall rank correlation coefficient. Encyclopedia of Measurement and Statistics", "venue": null, "year": 2007 }, { "authors": [ "Junjie Bai", "Fang Lu", "Ke Zhang" ], "title": "Onnx: Open neural network exchange", "venue": "https://github. com/onnx/onnx,", "year": 2020 }, { "authors": [ "Samik Basu", "Mahasweta Ghosh", "Soma Barman" ], "title": "Raspberry pi 3b+ based smart remote health monitoring system using iot platform", "venue": "In Proceedings of the 2nd International Conference on Communication, Devices and Computing,", "year": 2020 }, { "authors": [ "Jacob Benesty", "Jingdong Chen", "Yiteng Huang", "Israel Cohen" ], "title": "Pearson correlation coefficient. In Noise reduction in speech", "venue": null, "year": 2009 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Thomas Chau", "Łukasz Dudziak", "Mohamed S Abdelfattah", "Royson Lee", "Hyeji Kim", "Nicholas D Lane" ], "title": "Brp-nas: Prediction-based nas using gcns", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Y. Chen", "T. Krishna", "J. Emer", "V. Sze" ], "title": "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks", "venue": "JSSC 2017,", "year": 2017 }, { "authors": [ "Yu-Hsin Chen", "Joel Emer", "Vivienne Sze" ], "title": "Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks", "venue": "ACM SIGARCH Computer Architecture News,", "year": 2016 }, { "authors": [ "Yu-Hsin Chen", "Tushar Krishna", "Joel Emer", "Vivienne Sze" ], "title": "Eyeriss: An EnergyEfficient Reconfigurable Accelerator for Deep Convolutional Neural Networks", "venue": "In IEEE International Solid-State Circuits Conference,", "year": 2016 }, { "authors": [ "Patryk Chrabaszcz", "Ilya Loshchilov", "Frank Hutter" ], "title": "A downsampled variant of imagenet as an alternative to the cifar datasets", "venue": "arXiv preprint arXiv:1707.08819,", "year": 2017 }, { "authors": [ "Grace Chu", "Okan Arikan", "Gabriel Bender", "Weijun Wang", "Achille Brighton", "Pieter-Jan Kindermans", "Hanxiao Liu", "Berkin Akin", "Suyog Gupta", "Andrew Howard" ], "title": "Discovering multi-hardware mobile models via architecture", "venue": null, "year": 2008 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Xuanyi Dong", "Lu Liu", "Katarzyna Musial", "Bogdan Gabrys" ], "title": "Nats-bench: Benchmarking nas algorithms for architecture topology and size", "venue": "arXiv preprint arXiv:2009.00437,", "year": 2020 }, { "authors": [ "Yonggan Fu", "Wuyang Chen", "Haotao Wang", "Haoran Li", "Yingyan Lin", "Zhangyang Wang" ], "title": "Autogan-distiller: Searching to compress generative adversarial networks", "venue": "arXiv preprint arXiv:2006.08198,", "year": 2020 }, { "authors": [ "Yonggan 
Fu", "Zhongzhi Yu", "Yongan Zhang", "Yingyan Lin" ], "title": "Auto-agent-distiller: Towards efficient deep reinforcement learning agents via neural architecture search, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Lukas Geiger", "Plumerai Team" ], "title": "Larq: An open-source library for training binarized neural networks", "venue": "Journal of Open Source Software,", "year": 2020 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan" ], "title": "Searching for mobilenetv3", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Nikita Klyuchnikov", "Ilya Trofimov", "Ekaterina Artemova", "Mikhail Salnikov", "Maxim Fedorov", "Evgeny Burnaev" ], "title": "Nas-bench-nlp: Neural architecture search benchmark for natural language processing, 2020", "venue": null, "year": 2020 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Jaeseong Lee", "Duseok Kang", "Soonhoi Ha" ], "title": "S3nas: Fast npu-aware neural architecture search methodology", "venue": "arXiv preprint arXiv:2009.02009,", "year": 2020 }, { "authors": [ "Chaojian Li", "Tianlong Chen", "Haoran You", "Zhangyang Wang", "Yingyan Lin" ], "title": "Halo: Hardwareaware learning to optimize", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Y. Lin", "S. Zhang", "N.R. Shanbhag" ], "title": "Variation-tolerant architectures for convolutional neural networks in the near threshold voltage regime", "venue": "In 2016 IEEE International Workshop on Signal Processing Systems (SiPS),", "year": 2016 }, { "authors": [ "Y. Lin", "C. Sakr", "Y. Kim", "N. 
Shanbhag" ], "title": "Predictivenet: An energy-efficient convolutional neural network via zero prediction", "venue": "IEEE International Symposium on Circuits and Systems (ISCAS),", "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Sicong Liu", "Yingyan Lin", "Zimu Zhou", "Kaiming Nan", "Hui Liu", "Junzhao Du" ], "title": "On-demand deep model compression for mobile devices: A usage-driven model selection framework", "venue": "MobiSys ’18,", "year": 2018 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Alberto Marchisio", "Andrea Massa", "Vojtech Mrazek", "Beatrice Bussolino", "Maurizio Martina", "Muhammad Shafique" ], "title": "Nascaps: A framework for neural architecture search to optimize the accuracy and hardware efficiency of convolutional capsule", "venue": null, "year": 2008 }, { "authors": [ "Angshuman Parashar", "Priyanka Raina", "Yakun Sophia Shao", "Yu-Hsin Chen", "Victor A Ying", "Anurag Mukkara", "Rangharajan Venkatesan", "Brucek Khailany", "Stephen W Keckler", "Joel Emer" ], "title": "Timeloop: A systematic approach to dnn accelerator evaluation", "venue": "IEEE international symposium on performance analysis of systems and software (ISPASS),", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Patrick Mochel", "Mike Murphy" ], "title": "sysfs - The filesystem for exporting kernel objects", "venue": "https://www.kernel.org/doc/Documentation/filesystems/sysfs. 
txt,", "year": 2019 }, { "authors": [ "Benjamin Ross" ], "title": "AI at the Edge Enabling a New Generation of Apps, Smart Devices, March 2020", "venue": "URL https://www.aitrends.com/edge-computing/ ai-at-the-edge-enabling-a-new-generation-of-apps-smart-devices/", "year": 2020 }, { "authors": [ "Jianghao Shen", "Yue Wang", "Pengfei Xu", "Yonggan Fu", "Zhangyang Wang", "Yingyan Lin" ], "title": "Fractional skipping: Towards finer-grained dynamic cnn inference", "venue": null, "year": 2020 }, { "authors": [ "Yongming Shen", "Michael Ferdman", "Peter Milder" ], "title": "Maximizing cnn accelerator efficiency through resource partitioning", "venue": "In Proceedings of the 44th Annual International Symposium on Computer Architecture,", "year": 2017 }, { "authors": [ "Mennatullah Siam", "Mostafa Gamal", "Moemen Abdel-Razek", "Senthil Yogamani", "Martin Jagersand", "Hong Zhang" ], "title": "A comparative study of real-time semantic segmentation for autonomous driving", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition workshops,", "year": 2018 }, { "authors": [ "Julien Siems", "Lucas Zimmer", "Arber Zela", "Jovita Lukasik", "Margret Keuper", "Frank Hutter" ], "title": "Nasbench-301 and the case for surrogate benchmarks for neural architecture", "venue": null, "year": 2008 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alvin Wan", "Xiaoliang Dai", "Peizhao Zhang", "Zijian He", "Yuandong Tian", "Saining Xie", "Bichen Wu", "Matthew Yu", "Tao Xu", "Kan Chen" ], "title": "Fbnetv2: Differentiable neural architecture search for spatial and channel dimensions", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Chien-Yao Wang", "Hong-Yuan Mark Liao", "Ping-Yang Chen", "Jun-Wei Hsieh" ], "title": "Enriching variety of layer-wise learning information by gradient combination", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops,", "year": 2019 }, { "authors": [ "Y. Wang", "J. Shen", "T.K. Hu", "P. Xu", "T. Nguyen", "R. Baraniuk", "Z. Wang", "Y. Lin" ], "title": "Dual dynamic inference: Enabling more efficient, adaptive, and controllable deep inference", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2020 }, { "authors": [ "Y. Wang", "J. Shen", "T.K. Hu", "P. Xu", "T. Nguyen", "R. Baraniuk", "Z. Wang", "Y. 
Lin" ], "title": "Dual dynamic inference: Enabling more efficient, adaptive, and controllable deep inference", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2020 }, { "authors": [ "Yue Wang", "Ziyu Jiang", "Xiaohan Chen", "Pengfei Xu", "Yang Zhao", "Yingyan Lin", "Zhangyang Wang" ], "title": "E2-Train: Training state-of-the-art cnns with over 80% energy savings", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Diana Wofk", "Fangchang Ma", "Tien-Ju Yang", "Sertac Karaman", "Vivienne Sze" ], "title": "Fastdepth: Fast monocular depth estimation on embedded systems", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Junru Wu", "Yue Wang", "Zhenyu Wu", "Zhangyang Wang", "Ashok Veeraraghavan", "Yingyan Lin" ], "title": "Deep k-means: Re-training and parameter sharing with harder cluster assignments for compressing deep convolutions", "venue": "arXiv preprint arXiv:1806.09228,", "year": 2018 }, { "authors": [ "Qingcheng Xiao", "Yun Liang", "Liqiang Lu", "Shengen Yan", "Yu-Wing Tai" ], "title": "Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on fpgas", "venue": "In Proceedings of the 54th Annual Design Automation Conference", "year": 2017 }, { "authors": [ "Yunyang Xiong", "Hanxiao Liu", "Suyog Gupta", "Berkin Akin", "Gabriel Bender", "Pieter-Jan Kindermans", "Mingxing Tan", "Vikas Singh", "Bo Chen" ], "title": "Mobiledets: Searching for object detection architectures for mobile accelerators", "venue": null, "year": 2004 }, { "authors": [ "Antoine Yang", "Pedro M. Esperança", "Fabio M. 
Carlucci" ], "title": "Nas evaluation is frustratingly hard", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xuan Yang", "Jing Pu", "Blaine Burton Rister", "Nikhil Bhagdikar", "Stephen Richardson", "Shahar Kvatinsky", "Jonathan Ragan-Kelley", "Ardavan Pedram", "Mark Horowitz" ], "title": "A systematic approach to blocking convolutional neural networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "Nasbench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Haoran You", "Xiaohan Chen", "Yongan Zhang", "Chaojian Li", "Sicheng Li", "Zihao Liu", "Zhangyang Wang", "Yingyan Lin" ], "title": "Shiftaddnet: A hardware-inspired deep network", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Shan You", "Tao Huang", "Mingmin Yang", "Fei Wang", "Chen Qian", "Changshui Zhang" ], "title": "Greedynas: Towards fast one-shot nas with greedy supernet", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Chen Zhang", "Peng Li", "Guangyu Sun", "Yijin Guan", "Bingjun Xiao", "Jason Cong" ], "title": "Optimizing fpga-based accelerator design for deep convolutional neural networks", "venue": "In Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA", "year": 2015 }, { "authors": [ "Jianhao Zhang", "Yingwei Pan", "Ting Yao", "He Zhao", "Tao Mei" ], "title": "dabnn: A super fast inference framework for binary neural networks on arm devices", "venue": "In Proceedings of the 27th ACM International Conference on Multimedia,", "year": 2019 }, { "authors": [ "Xiaofan Zhang", "Junsong Wang", "Chao Zhu", "Yonghua Lin", "Jinjun Xiong", "Wen-mei Hwu", "Deming Chen" ], "title": "Dnnbuilder: An automated tool for building high-performance dnn hardware accelerators for fpgas", "venue": "In Proceedings of the International Conference on Computer-Aided Design, ICCAD ’18,", "year": 2018 }, { "authors": [ "Yongan Zhang", "Yonggan Fu", "Weiwen Jiang", "Chaojian Li", "Haoran You", "Meng Li", "Vikas Chandra", "Yingyan Lin" ], "title": "Dna: Differentiable network-accelerator co-search, 2020", "venue": null, "year": 2020 }, { "authors": [ "Cheah Wai Zhao", "Jayanand Jegatheesan", "Son Chee Loon" ], "title": "Exploring iot application using raspberry pi", "venue": "International Journal of Computer Networks and Applications,", "year": 2015 }, { "authors": [ "Y. Zhao", "X. Chen", "Y. Wang", "C. Li", "H. You", "Y. Fu", "Y. Xie", "Z. Wang", "Y. Lin" ], "title": "Smartexchange: Trading higher-cost memory storage/access for lower-cost computation", "venue": "In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA),", "year": 2020 }, { "authors": [ "Y. Zhao", "C. Li", "Y. Wang", "P. Xu", "Y. Zhang", "Y. 
Lin" ], "title": "Dnn-chip predictor: An analytical performance predictor for dnn accelerators with various dataflows and hardware architectures", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Zhang et al", "Yang" ], "title": "2016). We then compile all the architectures using the standard Vivado HLS toolflow (Xilinx Inc., a) and obtain the bottleneck latency, the maximum latency across all sub-accelerators (chunks) of the architectures on a Xilinx ZC706 development board with Zynq XC7045", "venue": "FPGA accelerators (Chen et al.,", "year": 2016 }, { "authors": [ "Xiao" ], "title": "2017) given the same architecture and dataset as shown in Table 8. We can see that our implementation achieves SOTA performance and thus provides insightful and trusted hardware-cost estimation for the HW-NAS-Bench", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The recent performance breakthroughs of deep neural networks (DNNs) have attracted an explosion of research in designing efficient DNNs, aiming to bring powerful yet power-hungry DNNs into more resource-constrained daily life devices for enabling various DNN-powered intelligent functions (Ross, 2020; Liu et al., 2018b; Shen et al., 2020; You et al., 2020a). Among them, HardWareaware Neural Architecture Search (HW-NAS) has emerged as one of the most promising techniques as it can automate the process of designing optimal DNN structures for the target applications, each of which often adopts a different hardware device and requires a different hardware-cost metric (e.g., prioritizes latency or energy). For example, HW-NAS in (Wu et al., 2019) develops a differentiable neural architecture search (DNAS) framework and discovers state-of-the-art (SOTA) DNNs balancing both accuracy and hardware efficiency, by incorporating a loss consisting of both the cross-entropy loss that leads to better accuracy and the latency loss that penalizes the network’s latency on a target device.\nDespite the promising performance achieved by SOTA HW-NAS, there exist paramount challenges that limit the development of HW-NAS innovations. First, HW-NAS requires the collection of hardware efficiency data corresponding to (all) the networks in the search space. To do so, current practice either pre-collects these data to construct a hardware-cost look-up table or adopts device-specific hardware-cost estimators/models, both of which can be time-consuming to obtain and impose a barrier-to-entry to non-hardware experts. This is because it requires knowledge about device-specific compilation and properly setting up the hardware measurement pipeline to collect hardware-cost data. Second, similar to generic NAS, it can be notoriously difficult to benchmark HW-NAS algorithms due to the required significant computational resources and the differences in their (1) hardware devices, which are specific for HW-NAS, (2) adopted search spaces, and (3) hyperparameters. Such a difficulty is even higher for HW-NAS considering the numerous choices of hardware devices, each of which can favor very different network structures even under the same target hardware efficiency, as discussed in (Chu et al., 2020). While the number of floating-point operations (FLOPs) has been commonly used to estimate the hardware-cost, many works have pointed out that DNNs with fewer FLOPs are not necessarily faster or more efficient (Wu et al., 2019; 2018; Wang et al., 2019b). For example, NasNet-A (Zoph et al., 2018) has a comparable complexity in terms of FLOPs as MobileNetV1 (Howard et al., 2017), yet can have a larger latency than the latter due to NasNet-A (Zoph et al., 2018)’s adopted hardware-unfriendly structure.\nIt is thus imperative to address the aforementioned challenges in order to make HW-NAS more accessible and reproducible to unfold HW-NAS’s full potential. 
Note that although pioneering NAS benchmark datasets (Ying et al., 2019; Dong & Yang, 2020; Klyuchnikov et al., 2020; Siems et al., 2020; Dong et al., 2020) have made a significant step towards providing a unified benchmark dataset for generic NAS works, all of them either merely provide the latency on server-level GPUs (e.g., GTX 1080Ti) or do not provide any hardware-cost data on real hardware, limiting their applicability to HW-NAS (Wu et al., 2019; Wan et al., 2020; Cai et al., 2018) which primarily targets commercial edge devices, FPGA, and ASIC. To this end, as shown in Figure 1, we develop HW-NAS-Bench and make the following contributions in this paper:\n• We have developed HW-NAS-Bench, the first public dataset for HW-NAS research aiming to (1) democratize HW-NAS research to non-hardware experts and (2) facilitate a unified benchmark for HW-NAS to make HW-NAS research more reproducible and accessible, covering two SOTA NAS search spaces including NAS-Bench-201 and FBNet, with the former being one of the most popular NAS search spaces and the latter having been shown to be one of the most hardware friendly NAS search spaces.\n• We provide hardware-cost data collection pipelines for six commonly used hardware devices that fall into three categories (i.e., commercial edge devices, FPGA, and ASIC), in addition to the measured/estimated hardware-cost (e.g., energy cost and latency) on these devices for all the networks in the search spaces of both NAS-Bench-201 and FBNet.\n• We conduct comprehensive analysis of the collected data in HW-NAS-Bench, such as studying the correlation between the collected hardware-cost and accuracy-cost data of all\nthe networks on the six hardware devices, which provides insights to not only HW-NAS researchers but also DNN accelerator designers. Other researchers can extract useful insights from HW-NAS-Bench that have not been discussed in this work.\n• We demonstrate exemplary user cases to show: (1) how HW-NAS-Bench can be easily used by non-hardware experts to develop HW-NAS solutions by simply querying the collected data in our HW-NAS-Bench and (2) dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost trade-offs, demonstrating the great necessity of HW-NAS benchmarks like our proposed HW-NAS-Bench." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 HARDWARE-AWARE NEURAL ARCHITECTURE SEARCH", "text": "Driven by the growing demand for efficient DNN solutions, HW-NAS has been proposed to automate the search for efficient DNN structures under the target efficiency constraints (Fu et al., 2020b;a; Zhang et al., 2020). For example, (Tan et al., 2019; Howard et al., 2019; Tan & Le, 2019) adopt reinforcement learning based NAS with a multi-objective reward consisting of both the task performance and efficiency, achieving promising results yet suffering from prohibitive search time/cost. In parallel, (Wu et al., 2019; Wan et al., 2020; Cai et al., 2018; Stamoulis et al., 2019) explore the design space in a differentiable manner following (Liu et al., 2018a) and significantly improve the search efficiency. 
The promising performance of HW-NAS has motivated a tremendous interest in applying it to more diverse applications (Fu et al., 2020a; Wang et al., 2020a; Marchisio et al., 2020) paired with target hardware devices, e.g., Edge TPU (Xiong et al., 2020) and NPU (Lee et al., 2020), in addition to the widely explored mobile phones.\nAs discussed in (Chu et al., 2020), different hardware devices can favor very different network structures under the same hardware-cost metric, and the optimal network structure can differ significantly when considering different application-driven hardware-cost metrics on the same hardware device. As such, it would ideally lead to the optimal accuracy-cost trade-offs if the HW-NAS design is dedicated for the target device and hardware-cost metrics. However, this requires a good understanding of both device-specific compilation and hardware-cost characterization, imposing a barrier-to-entry to non-hardware experts, such as many NAS researchers, and thus limits the development of optimal HW-NAS results for numerous applications, each of which often prioritizes a different application-driven hardware-cost metric and adopts a different type of hardware devices. As such, our proposed HW-NAS-Bench will make HW-NAS more friendly to NAS researchers, who are often non-hardware experts, as it consists of comprehensive hardware-cost data in a wide range of hardware devices for all the networks in two commonly used SOTA NAS search spaces, expediting the development of HW-NAS innovations." }, { "heading": "2.2 NEURAL ARCHITECTURE SEARCH BENCHMARKS", "text": "The importance and difficulty of NAS reproducibility and benchmarking has recently gained increasing attention. Pioneering efforts include (Ying et al., 2019; Dong & Yang, 2020; Klyuchnikov et al., 2020; Siems et al., 2020; Dong et al., 2020). Specifically, NAS-Bench-101 (Ying et al., 2019) presents the first large-scale and open-source architecture dataset for NAS, in which the ground truth test accuracy of all the architectures (i.e., 423k) in its search space on CIFAR-10 (Krizhevsky et al., 2009) are provided. Later, NAS-Bench-201 (Dong & Yang, 2020) further extends NAS-Bench-101 to support more NAS algorithm categories (e.g., differentiable algorithms) and more datasets (e.g., CIFAR-100 (Krizhevsky et al., 2009) and ImageNet16-120 (Chrabaszcz et al., 2017)). Most recently, NAS-Bench-301 (Siems et al., 2020) and NATS-Bench (Dong et al., 2020) are developed to support benchmarking NAS algorithms on larger search spaces. However, all of these works either merely provide latency on the server-level GPU (e.g., GTX 1080Ti) or do not consider any hardware-cost data on real hardware at all, limiting their applicability to HW-NAS (Wu et al., 2019; Wan et al., 2020; Cai et al., 2018) that primarily targets commercial edge devices, FPGA (Wang et al., 2020b), and ASIC (Chen et al., 2016; Lin et al., 2017; 2016; Zhao et al., 2020a). This has motivated us to develop the proposed HW-NAS-Bench, which aims to make HW-NAS more accessible especially for non-hardware experts and reproducible.\nA concurrent work (published after our submission) is BRP-NAS (Chau et al., 2020), which presents a benchmark for the latency of all the networks in NAS-Bench-201 (Dong & Yang, 2020) search space. 
In comparison, our proposed HW-NAS-Bench includes (1) more device categories (i.e., not only commercial devices, but also FPGA (Wang et al., 2020b) and ASIC (Chen et al., 2016)), (2) more hardware-cost metrics (i.e., not only latency, but also energy), and (3) more search spaces (i.e., not only NAS-Bench-201 (Dong & Yang, 2020) but also FBNet (Wu et al., 2019)). Additionally, we (4) add a detailed description of the pipeline to collect the hardware-cost of various devices and (5) analyze the necessity of device-specific HW-NAS solutions based on our collected data." }, { "heading": "3 THE PROPOSED HW-NAS-BENCH FRAMEWORK", "text": "" }, { "heading": "3.1 HW-NAS-BENCH’S CONSIDERED SEARCH SPACES", "text": "To ensure a wide applicability, our HW-NAS-Bench considers two representative NAS search spaces: (1) NAS-Bench-201’s cell-based search space and (2) FBNet search space. Both contribute valuable aspects to ensure our goal of constructing a comprehensive HW-NAS benchmark. Specifically, the former enables HW-NAS-Bench to naturally integrate the ground truth accuracy data of all NAS-Bench-201’s considered network architectures, while the latter ensures that HW-NASBench includes the most commonly recognized hardware friendly search space.\nNAS-Bench-201 Search Space. Inspired from the search space used in the most popular cell-based NAS, NAS-Bench-201 adopts a fixed cell search space, where each architecture consists of a predefined skeleton with a stack of the searched cell that is represented as a densely-connected directed acyclic graph (DAG). Specifically, it considers 4 nodes and 5 representative operation candidates for the operation set, and varies the feature map sizes and the dimensions of the final fully-connected layer to handle its considered three datasets (i.e., CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet16-120 (Chrabaszcz et al., 2017)), leading to a total of 3× 56 = 46875 architectures. Training log and accuracy are provided for each architecture. However, NAS-Bench-201 can not be directly used for HW-NAS as it only includes theoretical cost metrics (i.e., FLOPs and the number of parameters (#Params)) and the latency on a server-level GPU (i.e., GTX 1080Ti). HW-NASBench enhances NAS-Bench-201 by providing all the 46875 architectures’ measured/estimated hardware-cost on six devices, which are primarily targeted by SOTA HW-NAS works.\nFBNet Search Space. FBNet (Wu et al., 2019) constructs a layer-wise search space with a fixed macro-architecture, which defines the number of layers and the input/output dimensions of each layer and fixes the first and last three layers with the remaining layers to be searched. In this way, the network architectures in the FBNet (Wu et al., 2019) search space have more regular structures than those in NAS-Bench-201, and have been shown to be more hardware friendly (Fu et al., 2020a; Ma et al., 2018). The 9 considered pre-defined cell candidates and 22 unique positions lead to a total of 922 ≈ 1021 unique architectures. While HW-NAS researchers can develop their search algorithms on top of the FBNet (Wu et al., 2019) search space, tedious efforts are required to build the hardware-cost look-up tables or models for each target device. HW-NAS-Bench provides the measured/estimated hardware-cost on six hardware devices for all the 1021 architectures in the FBNet search space, aiming to make HW-NAS research more friendly to non-hardware experts and easier to be benchmarked." 
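To illustrate how a NAS researcher might consume such pre-collected measurements without touching any hardware, a hypothetical sketch is given below; the file name, dictionary layout, and accuracy oracle are illustrative assumptions and do not mirror the released HW-NAS-Bench API.

```python
import pickle
import random

# Hypothetical layout: cost[arch_index][device] -> {"latency_ms": ..., "energy_mj": ...}
with open("hw_cost_table.pickle", "rb") as f:  # assumed file derived from the dataset
    cost = pickle.load(f)

def random_search(accuracy_of, device, latency_budget_ms, num_samples=1000, seed=0):
    """Pick the most accurate architecture whose queried latency fits the budget."""
    rng = random.Random(seed)
    best_arch, best_acc = None, -1.0
    for _ in range(num_samples):
        arch = rng.randrange(len(cost))                   # index into the search space
        if cost[arch][device]["latency_ms"] > latency_budget_ms:
            continue                                      # reject over-budget candidates
        acc = accuracy_of(arch)                           # e.g., a NAS-Bench-201 accuracy lookup
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc
```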
}, { "heading": "3.2 HARDWARE-COST COLLECTION PIPELINE AND THE CONSIDERED DEVICES", "text": "To collect the hardware-cost data for all the architectures in both the NAS-Bench-201 and FBNet search spaces, we construct a generic hardware-cost collection pipeline (see Figure 2) to automate the process. The pipeline mainly consists of the target devices and corresponding deployment tools (e.g., compilers). Specifically, it takes all the networks as its inputs, and then compiles the networks to (1) convert them into the device’s required execution format and (2) optimize the execution flow, the latter of which aims to optimize the hardware performance on the target devices. For example, for collecting the hardware-cost in an Edge GPU, we first set the device in the Max-N mode to fully make use of all available resources following (Wofk et al., 2019), and then set up the embedded power rail monitor (Texas Instruments Inc.) to obtain the real-measured latency and energy via sysfs (Patrick Mochel and Mike Murphy.), averaging over 50 runs. We can see that the hardware-cost collection pipeline requires various hardware domain knowledge, includ-\ning machine learning development frameworks, device compilation, embedded systems, and device measurements, imposing a barrier-to-entry to non-hardware experts.\nNext, we briefly introduce the six considered hardware devices (as summarized in Table 1) and the specific configuration required to collect the hardware-cost data on each device.\nEdge GPU: NVIDIA Edge GPU Jetson TX2 (Edge GPU) is a commercial device with a 256-core Pascal GPU and a 8GB LPDDR4, targeting IoT applications (NVIDIA Inc., a). When plugging an Edge GPU into the above hardware-cost collection pipeline, we first compile the network architectures in both NAS-Bench-201 and FBNet spaces to (1) convert them to the TensorRT format and (2) optimize the inference implementation within NVIDIA’s recommended TensorRT runtime environment, and then execute them in the Edge GPU to measure the consumed energy and latency.\nRaspi 4: Raspberry Pi 4 (Raspi 4) is the latest Raspberry Pi device (Raspberry Pi Limited.), consisting of a Broadcom BCM2711 SoC and a 4GB LPDDR4. To collect the hardware-cost operating on it, we compile the architecture candidates to (1) convert them into the TensorFlow Lite (TFLite) (Abadi et al., 2016) format and (2) optimize the implementation using the official interpreter (Google LLC., 2020) in Raspi 4, where the interpreter will be pre-configured.\nEdge TPU: An Edge TPU Dev Board (Edge TPU) (Google LLC., a) is a dedicated ASIC accelerator developed by Google, targeting Artificial Intelligence (AI) inference for edge applications. Similar to the case when using Raspi 4, all the architectures are converted into the TFLite format. After that, an Edge TPU compiler will be used to convert the pre-built TFLite model into a more compressed format which is compatible to the pre-configured runtime environment in the Edge TPU.\nPixel 3: Pixel 3 is one of the latest Pixel mobile phones (Google LLC., e), which are widely used as the target platforms by recent NAS works (Xiong et al., 2020; Howard et al., 2019; Tan et al., 2019). 
To collect the hardware-cost in Pixel 3, we first convert all the architectures into the TFLite format, then use TFLite’s official benchmark binary file to obtain the latency, when configuring the Pixel 3 device to only use its big cores for reducing the measurement variance as in (Xiong et al., 2020; Tan et al., 2019).\nASIC-Eyeriss: For collecting the hardware-cost data in ASIC, we consider a SOTA ASIC accelerator, Eyeriss (Chen et al., 2016). Specifically, we adopt the SOTA ASIC accelerator’s performance simulators: (1) Accelergy (Wu et al., 2019)+Timeloop (Parashar et al., 2019) and (2) DNN-Chip Predictor (Zhao et al., 2020b), both of which automatically identify the optimal algorithm-to-hardware mapping methods for each architecture and then provide the estimated hardware-cost of the network execution in Eyeriss.\nFPGA: FPGA is a widely adopted AI acceleration platform featuring a higher hardware flexibility than ASIC and more decent hardware efficiency than commercial edge devices. To collect hardwarecost data in this platform, we first develop a SOTA chunk based pipeline structure (Shen et al., 2017; Zhang et al., 2020) implementation, compile all the architectures using the standard Vivado HLS toolflow (Xilinx Inc., a), and then obtain the hardware-cost on a Xilinx ZC706 board with a Zynq XC7045 SoC (Xilinx Inc., b).\nMore details about the pipeline for each of the aforementioned devices are provided in the Appendix D for better understanding.\nIn our HW-NAS-Bench, to estimate the hardware-cost of the networks in the FBNet search space (Wu et al., 2019) when being executed on the commercial edge devices (i.e., Edge GPU, Raspi 4, Edge TPU, and Pixel 3), we sum up the hardware-cost of all unique blocks (i.e., “block” in the FBNet space (Wu et al., 2019)) within the network architectures. To validate that such an approximation is close to the corresponding real-measured results, we conduct experiments, as summarized in Table 2, to calculate two types of correlation coefficients between the measured and the approximated hardware-cost based on 100 randomly sampled architectures from the FBNet search space. We can see that our approximated hardware-cost is highly correlated with the real-measured one, except for the case on the Edge TPU, which we conjecture is caused by the adopted in-house Edge TPU compiler (Google LLC., c). More visualization results can be found in the Appendix A." }, { "heading": "4 ANALYSIS ON HW-NAS-BENCH", "text": "In this section, we provide analysis and visualization of the hardware-cost and corresponding accuracy data (the latter only for architectures in NAS-Bench-201) for all the architectures in the two considered search spaces. Specifically, our analysis and visualization confirm that (1) commonly used theoretical hardware-cost metrics such as FLOPs do not correlate well with the measured/estimated hardware-cost; (2) hardware-cost of the same architectures can differ a lot when executed on different devices; and (3) device-specific HW-NAS is necessary because optimal architectures resulting from HW-NAS targeting on one device can perform poorly in terms of the hardware-cost when being executed on another device." 
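To make the block-sum approximation of Sec. 3.2 and the correlation analysis that follows concrete, here is a minimal sketch (ours; the lookup-table layout, function names, and variable names are hypothetical and do not reflect the released HW-NAS-Bench API):

```python
from typing import Dict, List, Tuple
import numpy as np
from scipy.stats import kendalltau, pearsonr

def approximate_latency(arch: List[int],
                        block_cost: Dict[Tuple[int, int], float]) -> float:
    """Sum per-block costs: arch[i] is the block chosen at layer i, and
    block_cost maps (layer, block) -> measured per-block latency (e.g., in ms)."""
    return sum(block_cost[(layer, block)] for layer, block in enumerate(arch))

def validate_approximation(archs: List[List[int]],
                           measured_latency: List[float],
                           block_cost: Dict[Tuple[int, int], float]):
    """Compare approximated vs. end-to-end measured latency on sampled
    architectures, in the spirit of the Pearson/Kendall check of Table 2."""
    approx = np.array([approximate_latency(a, block_cost) for a in archs])
    measured = np.asarray(measured_latency)
    return pearsonr(approx, measured)[0], kendalltau(approx, measured)[0]
```

The Kendall coefficient is rank-based, so a high value indicates that ranking candidate architectures by the approximated cost agrees with ranking them by the measured cost, which is what matters inside a HW-NAS search loop.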
}, { "heading": "4.1 CORRELATION BETWEEN COLLECTED HARDWARE-COST AND THEORETICAL ONES", "text": "To confirm whether commonly used theoretical hardware-cost metrics align with realmeasured/estimated ones, we summarize the calculated correlation between the collected hardwarecost in our HW-NAS-Bench and the theoretical metrics (i.e., FLOPs and #Params), based on the data for all the architectures in both search spaces on all the six considered hardware devices where a total of four different datasets are involved.\nAs summarized in Tables 3 - 4, commonly used theoretical hardware-cost metrics (i.e., FLOPs and #Params) do not always correlate well with measured/estimated hardware-cost for the architectures in both the NAS-Bench-201 and FBNet spaces. For example, there exists at least one coefficient <0.5 on all devices, especially for the cases with real-measured/estimated hardware-cost on commonly considered edge platforms including Edge GPU, Edge TPU, and ASIC-Eyeriss. As such, HW-NAS based on the theoretical hardware-cost might lead to sub-optimal results, motivating HWNAS benchmarks like our HW-NAS-Bench. Note that we consider the Kendall Rank Correlation Coefficients (Abdi, 2007), which is a commonly used correlation coefficient in both recent NAS frameworks and benchmarks (You et al., 2020b; Siems et al., 2020; Yang et al., 2020)." }, { "heading": "4.2 CORRELATION AMONG COLLECTED HARDWARE-COST ON DIFFERENT DEVICES", "text": "To check how much the hardware-cost of the same architectures on different devices correlate, we visualize the correlation between the hardware-cost collected from every two paired devices based on the data for all the architectures in both the NAS-Bench-201 and FBNet search spaces with each of the architectures associated with 9 different hardware-cost metrics.\nThe visualization in Figures 3 - 4 indicates that hardware-cost of the same network architectures can differ a lot when being executed on different devices. More specifically, the correlation coefficients can be as small as -0.00 (e.g., Edge GPU latency vs. ASIC-Eyeriss energy for the architectures in the FBNet search space), which is resulting from the large difference in their underlying (1) hardware micro-architectures and (2) available hardware resources. Thus, the resulting architecture of HW-NAS targeting one device might perform poorly when being executed on other devices, motivating device-specific HW-NAS; Furthermore, it is crucial to develop comprehensive hardwarecost datasets like our HW-NAS-Bench to enable fast development and ensure optimal results of HW-NAS for different applications." }, { "heading": "4.3 OPTIMAL ARCHITECTURES ON DIFFERENT HARDWARE DEVICES", "text": "To confirm the necessity of performing device-specific HW-NAS from another perspective, we summarize the test accuracy vs. hardware-cost of all the architectures in NAS-Bench-201 considering the ImageNet16-120 dataset, and analyze the architectures with the optimal accuracy-cost trade-offs for different devices.\nAs shown in Figure 5, such optimal architectures for different devices are not the same. For example, the optimal architectures on Edge GPU (marked as red points) can perform poorly in terms of the hardware-cost in other devices, especially in ASIC-Eyeriss and Edge TPU whose hardware-cost exactly has the smallest correlation coefficient with the hardware-cost measured in Edge GPU, which is shown in Figure 3. 
Again, this set of analysis and visualization confirms that HW-NAS targeting one device can perform poorly in terms of the hardware-cost when the resulting architecture is executed on another device, thus motivating the necessity of device-specific HW-NAS." }, { "heading": "5 USER CASES: BENCHMARK SOTA HW-NAS ALGORITHMS", "text": "In this section, we demonstrate user cases of our HW-NAS-Bench to show (1) how non-hardware experts can use it to develop HW-NAS solutions by simply querying the hardware-cost data and (2) that dedicated device-specific HW-NAS can indeed often lead to optimal accuracy-cost trade-offs, again showing the important need for HW-NAS benchmarks like our HW-NAS-Bench to enable more optimal HW-NAS solutions via device-specific HW-NAS.\nBenchmark Setting. We adopt a SOTA HW-NAS algorithm, ProxylessNAS (Cai et al., 2018), for this experiment. As an example of using our HW-NAS-Bench, we use ProxylessNAS to search over the FBNet (Wu et al., 2019) search space on CIFAR-100 (Krizhevsky et al., 2009), targeting different devices in our HW-NAS-Bench by simply querying the corresponding device’s measured/estimated hardware-cost; such querying has negligible overhead compared to the HW-NAS algorithm itself and requires no hardware expertise or knowledge during the whole HW-NAS process." }, { "heading": "5.1 OPTIMAL ARCHITECTURES RESULTING FROM DEVICE-SPECIFIC HW-NAS", "text": "Table 5 illustrates that the searched architectures achieve the lowest latency among all architectures when the target device of HW-NAS is the same as the one used to measure the architecture’s on-device inference latency. Specifically, when executed on an Edge GPU, the searched architecture targeting Raspi 4 during HW-NAS leads to about 50% higher latency, while the searched architecture targeting FPGA during HW-NAS introduces over 100% higher latency, than the architecture specifically targeting the Edge GPU during HW-NAS, under the same inference accuracy. This set of experiments shows that non-hardware experts can easily use our HW-NAS-Bench to develop optimal HW-NAS solutions, and demonstrates that device-specific HW-NAS is critical to guarantee the searched architectures’ on-device performance." }, { "heading": "6 CONCLUSION", "text": "We have developed HW-NAS-Bench, the first public dataset for HW-NAS research, aiming to (1) democratize HW-NAS research to non-hardware experts and (2) facilitate a unified benchmark for HW-NAS to make HW-NAS research more reproducible and accessible. Our HW-NAS-Bench covers two representative NAS search spaces, and provides all network architectures’ hardware-cost data on six commonly used hardware devices that fall into three categories (i.e., commercial edge devices, FPGA, and ASIC). Furthermore, we conduct comprehensive analysis of the collected data in HW-NAS-Bench, aiming to provide insights to not only HW-NAS researchers but also DNN accelerator designers. Finally, we demonstrate exemplary user cases of HW-NAS-Bench to show (1) how HW-NAS-Bench can be easily used by non-hardware experts, via simply querying the collected data, to develop HW-NAS solutions and (2) that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost trade-offs, demonstrating the great necessity of HW-NAS benchmarks like our proposed HW-NAS-Bench. It is expected that our HW-NAS-Bench can significantly expedite and facilitate HW-NAS research innovations."
}, { "heading": "ACKNOWLEDGEMENT", "text": "The work is supported by the National Science Foundation (NSF) through the CNS Division of Computer and Network Systems (Award number: 2016727)." }, { "heading": "A MORE VISUALIZATION ON THE MEASURED HARDWARE-COST FOR THE FBNET SEARCH SPACE", "text": "M ea\nsu re\nd Ed\nge G\nPU L\nat en\ncy\nM ea\nsu re\nd E\ndg e\nG PU\nE ne\nrg y\nM ea\nsu re\nd R\nas pi\n4 L\nat en\ncy\nM ea\nsu re\nd Ed\nge T\nPU L\nat en\ncy\nM ea\nsu re\nd Pi\nxe l 3\nL at\nen cy\nApproximated Edge GPU Latency Approximated Raspi 4 Latency Approximated Pixel 3 LatencyApproximated Edge GPU Energy Approximated Edge TPU Latency\nImageNet\nCIFAR-100\nFig. 6 shows a comparison between the approximated and measured hardware-cost of randomly sampled 100 architectures when being executed on commercial edge devices using the ImageNet and CIFAR-100 datasets, which verifies that our approximation of summing up the performance of the unique blocks is a simple yet quite accurate for providing the hardware-cost for networks in the FBNet space and is consistent with our observation in Table 2." }, { "heading": "B COMPARING THE ESTIMATED COST EXECUTED ON EYERISS USING ACCELERGY+TIMELOOP AND DNN-CHIP REDICTOR", "text": "Both Accelergy (Wu et al., 2019)+Timeloop (Parashar et al., 2019) and DNN-Chip Predictor (Zhao et al., 2020b) are able to simulate the latency and energy cost of Eyeriss (Chen et al., 2016), a SOTA ASIC DNN accelerator, when giving the network architectures. From Table 6, they nearly give the same estimation for the latency and energy cost: specifically, the mean of their differences is 6.096%, the standard deviation of the differences is 0.779%, the Pearson correlation coefficient is 0.9998, and the Kendall Rank correlation coefficient is 0.9633, in term of the average performance, when being benchmarked with NAS-Bench-201 on 3 datasets. Therefore, we use the average value of their predictions as the estimated latency and energy on Eyeriss in our proposed HW-NAS-Bench." }, { "heading": "C MINOR MODIFICATIONS ON THE FBNET SEARCH SPACE WHEN BENCHMARKING ON CIFAR-100", "text": "Here we describe our modification on the FBNet search space when benchmarking on CIFAR-100 (i.e., the setting in Section 5) by comparing the marco-architectures before and after such modification in Table 7." }, { "heading": "D DETAILS OF THE PIPELINE USED TO COLLECT HARDWARE-COST DATA", "text": "D.1 COLLECT PERFORMANCE ON THE EDGE GPU\nNVIDIA Edge GPU Jetson TX2 (Edge GPU) (NVIDIA Inc., a) is a commonly used commercial edge device, consisting of a quad-core Arm Cortex-A57, a dual-core NVIDIA Denver2, a 256-core Pascal GPU, and a 8GB 128-bit LPDDR4, for various deep learning applications including classification (Li et al., 2020), segmentation (Siam et al., 2018), and depth estimation (Wofk et al., 2019), targeting IoT, and self-driving environments. Although widely-used TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019) can be directly used in Edge GPUs, to achieve faster inference, TensorRT (NVIDIA Inc., b), a C++ library for high-performance inference on NVIDIA GPUs, is more commonly used as the runtime environment in Edge GPUs when only benchmarking inference performance (Wang et al., 2019a; NVIDIA Inc., c).\nWe pre-set the Edge GPU to the max-N mode to make full use of the resource on it following (Wofk et al., 2019). 
When plugging Edge GPUs into the hardware-cost collection pipeline, we first compile the PyTorch implementations of the network architectures in both the NAS-Bench-201 and FBNet search spaces to TensorRT-format models. In this way, the resulting hardware-cost can benefit from the optimized inference implementation within the TensorRT runtime environment. We then benchmark the architectures on Edge GPUs to measure the energy and latency using the sysfs (Patrick Mochel and Mike Murphy.) of the embedded INA3221 (Texas Instruments Inc.) power rail monitor.\nD.2 COLLECT PERFORMANCE ON RASPI 4\nRaspberry Pi 4 (Raspi 4) (Raspberry Pi Limited.) is the latest Raspberry Pi device, which is a popular hardware platform for general-purpose IoT applications (Zhao et al., 2015; Basu et al., 2020) and is able to support deep learning applications with specialized framework designs (Google LLC., f; Zhang et al., 2019; Geiger & Team, 2020). We choose the Raspi 4 variant with a Broadcom BCM2711 SoC and a 4GB LPDDR4 (Raspberry Pi Limited.). Similar to Edge GPUs, Raspi 4 can run architectures in the TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), or TensorFlow Lite (Google LLC., f) runtime environments. We utilize TensorFlow Lite (Google LLC., f) as it can further boost the inference efficiency.\nTo collect hardware-cost on Raspi 4, an official TensorFlow Lite interpreter is pre-configured on the Raspi 4, following the settings in (Google LLC., 2020). We benchmark the architectures in HW-NAS-Bench on Raspi 4 after compiling them to the TensorFlow Lite (Abadi et al., 2016) format to measure the resulting latency.\nD.3 COLLECT PERFORMANCE ON THE EDGE TPU\nEdge TPU (Google LLC., a) is a series of dedicated ASIC accelerators developed by Google, targeting AI inference at the edge, which can be used for classification, pose estimation, and segmentation (Xiong et al., 2020; Google LLC., b) with extremely high efficiency (e.g., 2.32× more efficient than a single SOTA desktop GPU, GTX 2080 Ti, in terms of the number of fixed-point operations per watt (Google LLC., d)). In our proposed collection pipeline, we choose the Dev Board (Google LLC., a), which provides the most functionality among all Edge TPU products.\nTo collect hardware-cost on Edge TPUs, all the architectures to be benchmarked are first converted to the TensorFlow Lite (Google LLC., f) format from their Keras (Chollet et al., 2015) implementations. After that, an in-house compiler (Google LLC., c) is used to convert the TensorFlow Lite models into a more compressed format. This pipeline uses the fewest conversion tools to make sure that as many operations as possible are supported, as compared to other options (e.g., converting from a PyTorch-ONNX (Bai et al., 2020) implementation). Only the latency is collected on the Edge TPU, since it lacks an accurate embedded power rail monitor. We do not consider the FBNet search space for the Edge TPU, and more details are in Appendix A.\nD.4 COLLECT PERFORMANCE ON PIXEL 3\nPixel 3 (Google LLC., e) is one of the latest Pixel mobile phones, which are widely used as the target platform by recent NAS works (Xiong et al., 2020; Howard et al., 2019; Tan et al., 2019) and machine learning framework benchmarks (Google LLC., f). In our implementation, the Pixel 3 is pre-configured to use its big cores, following the setting in (Xiong et al., 2020; Tan et al., 2019).
Similar to the case of Raspi 4, we first convert the architectures in the search spaces of our proposed HW-NAS-Bench into the TensorFlow Lite format and then use the official benchmark binary files to measure the latency of each architecture.\nD.5 COLLECT PERFORMANCE ON ASIC-EYERISS\nFor hardware-cost data collection on ASIC, we consider Eyeriss (ASIC-Eyeriss), a SOTA ASIC accelerator (Chen et al., 2016). The Eyeriss chip features 168 processing elements (PEs), which are connected through a configurable dedicated on-chip network into a 2D array. A 128KB SRAM is shared by all PEs and further divided into multiple banks, each of which can be assigned to fit the input feature maps or partial sums. Thanks to these configurable hardware settings, we can adopt the optimal algorithm-to-hardware mappings for different network architectures executed on Eyeriss to minimize the energy or latency by maximizing data reuse opportunities for different layers.\nIn order to find the optimal mappings and evaluate the performance metrics on Eyeriss, we adopt SOTA performance simulators for DNN accelerators: (1) Accelergy (Wu et al., 2019)+Timeloop (Parashar et al., 2019) and (2) DNN-Chip Predictor (Zhao et al., 2020b). Both simulators can characterize Eyeriss’s micro-architecture, perform mapping exploration, and predict the energy cost and latency metrics. Given the Eyeriss accelerator and layer information (e.g., layer type, feature map size, and kernel size) in both NAS-Bench-201 and FBNet, Accelergy+Timeloop reports the energy cost and latency characterization through an integrated mapper that finds the optimal mapping for each layer when executed on Eyeriss. The inputs to DNN-Chip Predictor are the same as those to Accelergy+Timeloop, except that we can set the optimization metric to energy, latency, or energy-delay product. DNN-Chip Predictor identifies the optimal mapping for the chosen optimization metric and generates the estimated hardware-cost. We report the average prediction from the two simulators as the estimated hardware-cost on Eyeriss, and more details can be found in Appendix B.\nD.6 COLLECT PERFORMANCE ON FPGA\nFPGA is a widely adopted AI acceleration platform which offers higher flexibility in terms of the hardware resources for accelerating AI algorithms. For collecting hardware-cost data on FPGA, we construct a SOTA chunk-based pipeline structure (Zhang et al., 2018; Shen et al., 2017) as our FPGA implementation. By configuring multiple sub-accelerators (chunks) and assigning different layers to different sub-accelerators (chunks), we can balance the throughput and hardware resource consumption. To further free up our implementation’s potential to reach the performance frontier across different architectures, we additionally configure hardware settings such as the number of PEs, the interconnection method of PEs, and the tiling/scheduling of the operations, which are commonly adopted by FPGA accelerators (Chen et al., 2017; Zhang et al., 2015; Yang et al., 2016). We then compile all the architectures using the standard Vivado HLS toolflow (Xilinx Inc., a) and obtain the bottleneck latency, i.e., the maximum latency across all sub-accelerators (chunks), of the architectures on a Xilinx ZC706 development board with a Zynq XC7045 SoC (Xilinx Inc., b).\nTo verify our implementation, we compare our implementation’s performance with SOTA FPGA accelerators (Zhang et al., 2018; Xiao et al., 2017) given the same architecture and dataset, as shown in Table 8.
We can see that our implementation achieves SOTA performance and thus provides insightful and trusted hardware-cost estimation for the HW-NAS-Bench." } ]
2,021
null
SP:f65217b47950d0dbf8e77622489d8883211a012d
[ "This paper proposes a novel graph neural network-based architecture. Building upon the theoretical success of graph scattering transforms, the authors propose to learn some aspects of it providing them with more flexibility to adapt to data (recall that graph scattering transforms are built on pre-designed graph wavelet filter banks and do not learn from data). By dropping the dyadic distribution of frequencies within the wavelet bank, the proposed architecture actually learns a more suitable frequency separation among the different wavelets." ]
Many popular graph neural network (GNN) architectures, which are often considered as the current state of the art, rely on encoding graph structure via smoothness or similarity between neighbors. While this approach performs well on a surprising number of standard benchmarks, the efficacy of such models does not translate consistently to more complex domains, such as graph data in the biochemistry domain. We argue that these more complex domains require priors that encourage learning of longer range features rather than oversmoothed signals of standard GNN architectures. Here, we propose an alternative GNN architecture, based on a relaxation of recently proposed geometric scattering transforms, which consists of a cascade of graph wavelet filters. Our learned geometric scattering (LEGS) architecture adaptively tunes these wavelets and their scales to encourage band-pass features to emerge in learned representations. This results in a simplified GNN with significantly fewer learned parameters compared to competing methods. We demonstrate the predictive performance of our method on several biochemistry graph classification benchmarks, as well as the descriptive quality of its learned features in biochemical graph data exploration tasks. Our results show that the proposed LEGS network matches or outperforms popular GNNs, as well as the original geometric scattering construction, while retaining certain mathematical properties of its handcrafted (nonlearned) design.
[]
[ { "authors": [ "Uri Alon", "Eran Yahav" ], "title": "On the bottleneck of graph neural networks and its practical implications", "venue": "arXiv preprint arXiv:2006.05205,", "year": 2020 }, { "authors": [ "Pablo Barceló", "Egor V Kostylev", "Mikael Monet", "Jorge Pérez", "Juan Reutter", "Juan Pablo Silva" ], "title": "The logical expressiveness of graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "K.M. Borgwardt", "C.S. Ong", "S. Schonauer", "S.V.N. Vishwanathan", "A.J. Smola", "H.-P. Kriegel" ], "title": "Protein function prediction via graph kernels. Bioinformatics, 21(Suppl 1):i47–i56", "venue": "doi: 10.1093/bioinformatics/bti1007", "year": 2005 }, { "authors": [ "Michael M. Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: Going beyond Euclidean data", "venue": "IEEE Signal Process. Mag.,", "year": 2017 }, { "authors": [ "D.S. Broomhead", "D Lowe" ], "title": "Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks", "venue": "R. Signals Raar Establ., Memorandum", "year": 1988 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral Networks and Locally Connected Networks on Graphs", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Ronald R. Coifman", "Mauro Maggioni" ], "title": "Diffusion wavelets", "venue": "Applied and Computational Harmonic Analysis,", "year": 2006 }, { "authors": [ "Sergio Martinez Cuesta", "Syed Asad Rahman", "Nicholas Furnham", "Janet M. Thornton" ], "title": "The Classification and Evolution of Enzyme Function", "venue": "Biophysical Journal,", "year": 2015 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering", "venue": "Adv. Neural Inf. Process. Syst", "year": 2016 }, { "authors": [ "Paul D. Dobson", "Andrew J. Doig" ], "title": "Distinguishing Enzyme Structures from Non-enzymes Without Alignments", "venue": "Journal of Molecular Biology,", "year": 2003 }, { "authors": [ "Fernando Gama", "Joan Bruna", "Alejandro Ribeiro" ], "title": "Diffusion Scattering Transforms on Graphs", "venue": "Int. Conf. Mach. Learn.,", "year": 2019 }, { "authors": [ "Fernando Gama", "Joan Bruna", "Alejandro Ribeiro" ], "title": "Stability of Graph Scattering Transforms", "venue": "Adv. Neural Inf. Process. Syst", "year": 2019 }, { "authors": [ "Feng Gao", "Guy Wolf", "Matthew Hirn" ], "title": "Geometric scattering for graph data analysis", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Shuiwang Ji" ], "title": "Graph U-Nets", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural Message Passing for Quantum Chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive Representation Learning on Large Graphs", "venue": "Adv. Neural Inf. Process. 
Syst", "year": 2017 }, { "authors": [ "John Ingraham", "Vikas Garg", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Generative Models for GraphBased Protein Design", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-Supervised Classification with Graph Convolutional Networks", "venue": "4th Int. Conf. Mach. Learn.,", "year": 2016 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "year": 2018 }, { "authors": [ "Wenfei Li", "Jun Wang", "Jian Zhang", "Wei Wang" ], "title": "Molecular simulations of metal-coupled protein folding", "venue": "Current Opinion in Structural Biology,", "year": 2015 }, { "authors": [ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard S Zemel" ], "title": "LanczosNet: Multi-Scale Deep Graph Convolutional Networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Sitao Luan", "Mingde Zhao", "Xiao-Wen Chang", "Doina Precup" ], "title": "Break the Ceiling: Stronger Multiscale Deep Graph Convolutional Networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Stéphane Mallat" ], "title": "Group Invariant Scattering", "venue": "Commun. Pure Appl. Math.,", "year": 2012 }, { "authors": [ "Yimeng Min", "Frederik Wenkel", "Guy Wolf" ], "title": "Scattering gcn: Overcoming oversmoothness in graph convolutional networks", "venue": "arXiv preprint arXiv:2003.08414,", "year": 2020 }, { "authors": [ "Vivek Modi", "Qifang Xu", "Sam Adhikari", "Roland L. Dunbrack" ], "title": "Assessment of Template-Based Modeling of Protein", "venue": "Structure in CASP11. Proteins,", "year": 2016 }, { "authors": [ "John Moult", "Krzysztof Fidelis", "Andriy Kryshtafovych", "Torsten Schwede", "Anna Tramontano" ], "title": "Critical assessment of methods of protein structure prediction (CASP)—Round XII", "venue": "Proteins Struct. Funct. Bioinforma.,", "year": 2018 }, { "authors": [ "Sergey Ovchinnikov", "Hahnbeom Park", "David E. Kim", "Frank DiMaio", "David Baker" ], "title": "Protein structure prediction using Rosetta in CASP12", "venue": "Proteins Struct. Funct. Bioinforma.,", "year": 2018 }, { "authors": [ "Michael Perlmutter", "Guy Wolf", "Matthew Hirn" ], "title": "Geometric scattering on manifolds", "venue": "In NeurIPS 2018 Workshop on Integration of Deep Learning Theories,", "year": 2018 }, { "authors": [ "Michael Perlmutter", "Feng Gao", "Guy Wolf", "Matthew Hirn" ], "title": "Understanding graph neural networks with asymmetric geometric scattering transforms", "venue": null, "year": 1911 }, { "authors": [ "H. Toivonen", "A. Srinivasan", "R.D. King", "S. Kramer", "C. Helma" ], "title": "Statistical evaluation of the Predictive Toxicology Challenge 2000-2001", "venue": null, "year": 2003 }, { "authors": [ "Nikil Wale", "Ian A Watson", "George Karypis" ], "title": "Comparison of Descriptor Spaces for Chemical Compound Retrieval and Classification", "venue": "Knowl. Inf. Syst.,", "year": 2008 }, { "authors": [ "Zhenqin Wu", "Bharath Ramsundar", "Evan N. Feinberg", "Joseph Gomes", "Caleb Geniesse", "Aneesh S. 
Pappu", "Karl Leswing", "Vijay Pande" ], "title": "MoleculeNet: A benchmark for molecular machine learning", "venue": "Chem. Sci.,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How Powerful Are Graph Neural Networks", "venue": null, "year": 2019 }, { "authors": [ "Pinar Yanardag", "S.V.N. Vishwanathan" ], "title": "Deep Graph Kernels", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD", "year": 2015 }, { "authors": [ "Dongmian Zou", "Gilad Lerman" ], "title": "Graph convolutional neural networks via scattering", "venue": "Applied and Computational Harmonic Analysis,", "year": 2019 }, { "authors": [ "ENZYMES Borgwardt" ], "title": "Is a dataset of 600 enzymes divided into 6 balanced classes", "venue": null, "year": 2005 }, { "authors": [ "PROTEINS Borgwardt" ], "title": "Contains 1178 protein structures with the goal of classifying", "venue": null, "year": 2005 }, { "authors": [ "PTC Toivonen" ], "title": "Contains 344 chemical compound graphs divided into two classes", "venue": null, "year": 2003 }, { "authors": [ "Gilmer" ], "title": "Graphs in the QM9 dataset each represent chemicals", "venue": null, "year": 2018 }, { "authors": [ "of Cuesta" ], "title": "2015) and the class exchange preferences inferred from LEGS-FIXED, LEGS-FCN", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Geometric deep learning has recently emerged as an increasingly prominent branch of machine learning in general, and deep learning in particular (Bronstein et al., 2017). It is based on the observation that many of the impressive achievements of neural networks come in applications where the data has an intrinsic geometric structure which can be used to inform network design and training procedures. For example, in computer vision, convolutional neural networks use the spatial organization of pixels to define convolutional filters that hierarchically aggregate local information at multiple scales that in turn encode shape and texture information in data and task-driven representations. Similarly, in time-series analysis, recurrent neural networks leverage memory mechanisms based on the temporal organization of input data to collect multiresolution information from local subsequences, which can be interpreted geometrically via tools from dynamical systems and spectral analysis. While these examples only leverage Euclidean spatiotemporal structure in data, they exemplify the potential benefits of incorporating information about intrinsic data geometry in neural network design and processing. Indeed, recent advances have further generalized the utilization of geometric information in neural networks design to consider non-Euclidean structures, with particular interest in graphs that represent data geometry, either directly given as input or constructed as an approximation of a data manifold.\nAt the core of geometric deep learning is the use of graph neural networks (GNNs) in general, and graph convolutional networks (GCNs) in particular, which ensure neuron activations follow the geometric organization of input data by propagating information across graph neighborhoods (Bruna et al., 2014; Defferrard et al., 2016; Kipf & Welling, 2016; Hamilton et al., 2017; Xu et al., 2019; Abu-El-Haija et al., 2019). However, recent work has shown the difficulty in generalizing these methods to more complex structures, identifying common problems and phrasing them in terms of oversmoothing (Li et al., 2018), oversquashing (Alon & Yahav, 2020) or under-reaching (Barceló et al., 2020). Using graph signal processing terminology from Kipf & Welling (2016), these issues\ncan be partly attributed to the limited construction of convolutional filters in many commonly used GCN architectures. Inspired by the filters learned in convolutional neural networks, GCNs consider node features as graph signals and aim to aggregate information from neighboring nodes. For example, Kipf & Welling (2016) presented a typical implementation of a GCN with a cascade of averaging (essentially low pass) filters. We note that more general variations of GCN architectures exist (Defferrard et al., 2016; Hamilton et al., 2017; Xu et al., 2019), which are capable of representing other filters, but as investigated in Alon & Yahav (2020), they too often have difficulty in learning long range connections.\nRecently, an alternative approach was presented to provide deep geometric representation learning by generalizing Mallat’s scattering transform (Mallat, 2012), originally proposed to provide a mathematical framework for understanding convolutional neural networks, to graphs (Gao et al., 2019; Gama et al., 2019a; Zou & Lerman, 2019) and manifolds (Perlmutter et al., 2018). 
Similar to traditional scattering, which can be seen as a convolutional network with nonlearned wavelet filters, geometric scattering is defined as a GNN with handcrafted graph filters, typically constructed as diffusion wavelets over the input graph (Coifman & Maggioni, 2006), which are then cascaded with pointwise absolute-value nonlinearities. This wavelet cascade results in permutation equivariant node features that are typically aggregated via statistical moments over the graph nodes, as explained in detail in Sec. 2, to provide a permutation invariant graph-level representation. The efficacy of geometric scattering features in graph processing tasks was demonstrated in Gao et al. (2019), with both supervised learning and data exploration applications. Moreover, their handcrafted design enables rigorous study of their properties, such as stability to deformations and perturbations, and provides a clear understanding of the information extracted by them, which by design (e.g., the cascaded band-pass filters) goes beyond low frequencies to consider richer notions of regularity (Gama et al., 2019b; Perlmutter et al., 2019).\nHowever, while graph scattering transforms provide effective universal feature extractors, their rigid handcrafted design does not allow for the automatic task-driven representation learning that naturally arises in traditional GNNs. To address this deficiency, recent work has proposed a hybrid scattering-GCN (Min et al., 2020) model for obtaining node-level representations, which ensembles a GCN model with a fixed scattering feature extractor. In Min et al. (2020), integrating channels from both architectures alleviates the well-known oversmoothing problem and outperforms popular GNNs on node classification tasks. Here, we focus on improving the geometric scattering transform by learning, in particular its scales. We focus on whole-graph representations with an emphasis on biochemical molecular graphs, where relatively large diameters and non-planar structures usually limit the effectiveness of traditional GNNs. Instead of the ensemble approach of Min et al. (2020), we propose a native neural network architecture for learned geometric scattering (LEGS), which directly modifies the scattering architecture from Gao et al. (2019); Perlmutter et al. (2019), via relaxations described in Sec. 3, to allow a task-driven adaptation of its wavelet configuration via backpropagation implemented in Sec. 4. We note that other recent graph spectrum-based methods approach the learning of long range connections by approximating the spectrum of the graph with the Lancoz algorithm Liao et al. (2019), or learning in block Krylov subspaces Luan et al. (2019). Such methods are complementary to the work presented here, in that their spectral approximation can also be applied in the computation of geometric scattering when considering very long range scales (e.g., via spectral formulation of graph wavelet filters). However, we find that such approximations are not necessary in the datasets considered here and in other recent work focusing on whole-graph tasks, where direct computation of polynomials of the Laplacian is sufficient.\nThe resulting learnable geometric scattering network balances the mathematical properties inherited from the scattering transform (as shown in Sec. 3) with the flexibility enabled by adaptive representation learning. 
The benefits of our construction over standard GNNs, as well as pure geometric scattering, are discussed and demonstrated on graph classification and regression tasks in Sec. 5. In particular, we find that our network maintains the robustness to small training sets present in graph scattering while improving performance on biological graph classification and regression tasks, and we show that in tasks where the graphs have a large diameter relative to their size, learnable scattering features improve performance over competing methods." }, { "heading": "2 PRELIMINARIES: GEOMETRIC SCATTERING FEATURES", "text": "Let G = (V, E, w) be a weighted graph with V := {v_1, . . . , v_n} the set of nodes, E ⊂ {{v_i, v_j} ∈ V × V, i ≠ j} the set of (undirected) edges, and w : E → (0, ∞) assigning (positive) edge weights to the graph edges. Note that w can equivalently be considered as a function of V × V, where we set the weights of non-adjacent node pairs to zero. We define a graph signal as a function x : V → R on the nodes of G and aggregate its values in a signal vector x ∈ R^n with the i-th entry being x[v_i]. We define the weighted adjacency matrix W ∈ R^{n×n} of the graph G as\nW[v_i, v_j] := w(v_i, v_j) if {v_i, v_j} ∈ E, and W[v_i, v_j] := 0 otherwise,\nand the degree matrix D ∈ R^{n×n} of G as D := diag(d_1, . . . , d_n) with d_i := deg(v_i) := Σ_{j=1}^{n} W[v_i, v_j] being the degree of the node v_i.\nThe geometric scattering transform (Gao et al., 2019) relies on a cascade of graph filters constructed from a row stochastic diffusion matrix P := (1/2)(I_n + WD^{-1}), which corresponds to the transition probabilities of a lazy random walk Markov process. The laziness of the process signifies that at each step it has equal probability of either staying at the current node or transitioning to a neighbor, where transition probabilities in the latter case are determined by (normalized) edge weights. Scattering filters are then defined via the graph-wavelet matrices Ψ_j ∈ R^{n×n} of scale j ∈ N_0, as\nΨ_0 := I_n − P,\nΨ_j := P^{2^{j−1}} − P^{2^j} = P^{2^{j−1}}(I_n − P^{2^{j−1}}), j ≥ 1. (1)\nThese diffusion wavelet operators partition the frequency spectrum into dyadic frequency bands, which are then organized into a full wavelet filter bank W_J := {Ψ_j, Φ_J}_{0≤j≤J}, where Φ_J := P^{2^J} is a pure low-pass filter, similar to the one used in GCNs. It is easy to verify that the resulting wavelet transform is invertible, since a simple sum of the filter matrices in W_J yields the identity. Moreover, as discussed in Perlmutter et al. (2019), this filter bank forms a nonexpansive frame, which provides energy preservation guarantees as well as stability to perturbations, and can be generalized to a wider family of constructions that encompasses the variations of scattering transforms on graphs from Gama et al. (2019a;b) and Zou & Lerman (2019).\nGiven the wavelet filter bank W_J, node-level scattering features are computed by stacking cascades of bandpass filters and element-wise absolute value nonlinearities to form\nU_p x := Ψ_{j_m} |Ψ_{j_{m−1}} . . . |Ψ_{j_2} |Ψ_{j_1} x|| . . . |, (2)\nindexed (or parametrized) by the scattering path p := (j_1, . . . , j_m) ∈ ∪_{m∈N} N_0^m that determines the filter scales captured by each scattering coefficient. Then, a whole-graph scattering representation is obtained by aggregating together node-level features via statistical moments over the nodes of the graph (Gao et al., 2019). This construction yields the geometric scattering features\nS_{p,q} x := Σ_{i=1}^{n} |U_p x[v_i]|^q, (3)\nindexed by the scattering path p and moment order q.
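For illustration, the following NumPy sketch (ours, not the authors' implementation) computes the handcrafted construction of Eqs. 1-3 for a small weighted graph, up to second-order scattering coefficients; all function names are ours:

```python
import numpy as np

def lazy_walk(W: np.ndarray) -> np.ndarray:
    """P = (1/2)(I + W D^{-1}) for a symmetric weighted adjacency matrix W."""
    d = W.sum(axis=0)                      # node degrees (row sums = column sums)
    return 0.5 * (np.eye(len(W)) + W / d)  # W / d scales column j by 1/d_j

def dyadic_wavelets(P: np.ndarray, J: int):
    """Psi_0 = I - P, Psi_j = P^{2^{j-1}} - P^{2^j}, and low-pass Phi_J = P^{2^J}."""
    powers = {0: np.eye(len(P))}
    for k in range(2 ** J):                # build P^1, ..., P^{2^J}
        powers[k + 1] = powers[k] @ P
    Psi = [powers[0] - powers[1]]
    Psi += [powers[2 ** (j - 1)] - powers[2 ** j] for j in range(1, J + 1)]
    return Psi, powers[2 ** J]

def scattering_features(x: np.ndarray, Psi, q_max: int = 4):
    """First- and second-order coefficients S_{p,q} = sum_i |U_p x[v_i]|^q."""
    feats = []
    U1 = [np.abs(Psi_j @ x) for Psi_j in Psi]            # |Psi_j x|
    for u in U1:
        feats += [np.sum(u ** q) for q in range(1, q_max + 1)]
    for j, u in enumerate(U1):                           # paths (j, j') with j' > j
        for Psi_jp in Psi[j + 1:]:
            u2 = np.abs(Psi_jp @ u)
            feats += [np.sum(u2 ** q) for q in range(1, q_max + 1)]
    return np.array(feats)
```

Because the moments sum over nodes, the resulting feature vector does not depend on how the nodes are ordered, which is the permutation invariance property noted next.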
Finally, we note that it can be shown that the graph-level scattering transform S_{p,q} guarantees node-permutation invariance, while U_p is permutation equivariant (Perlmutter et al., 2019; Gao et al., 2019)." }, { "heading": "3 RELAXED GEOMETRIC SCATTERING CONSTRUCTION TO ALLOW TRAINING", "text": "The geometric scattering construction, described in Sec. 2, can be seen as a particular GNN with handcrafted layers, rather than learned ones. This provides a solid mathematical framework for understanding the encoding of geometric information in GNNs, as shown in Perlmutter et al. (2019), while also providing effective unsupervised graph representation learning for data exploration, which also has some advantages even in supervised learning tasks, as shown in Gao et al. (2019). While the handcrafted design in Perlmutter et al. (2019); Gao et al. (2019) is not a priori amenable to the task-driven tuning provided by end-to-end GNN training, we note that the cascade in Eq. 3 does conform to a neural network architecture suitable for backpropagation. Therefore, in this section, we show how and under what conditions a relaxation of the laziness of the random walk and of the selection of the scales preserves some of the useful mathematical properties established in Perlmutter et al. (2019). We then establish in Sec. 5 the empirical benefits of learning the diffusion scales over a purely handcrafted design.\nWe first note that the construction of the diffusion matrix P that forms the lowpass filter used in the fixed scattering construction can be relaxed to encode adaptive laziness by setting P_α := αI_n + (1−α)WD^{-1}, where α ∈ [1/2, 1) controls the reluctance of the random walk to transition from one node to another. Setting α = 1/2 gives an equal probability of staying at the same node as of transitioning to one of its neighbors. At this point, we note that one difference between the diffusion lowpass filter here and the one typically used in GCN and its variations is the symmetrization applied in Kipf & Welling (2016). However, Perlmutter et al. (2019) established that for the original construction, this is only a technical difference, since P can be regarded as self-adjoint under an appropriate measure which encodes degree variations in the graph. This is then used to generate a Hilbert space L^2(G, D^{−1/2}) of graph signals with inner product ⟨x, y⟩_{D^{−1/2}} := ⟨D^{−1/2}x, D^{−1/2}y⟩. The following lemma shows that a similar property is retained for our adaptive lowpass filter P_α.\nLemma 1. The matrix P_α is self-adjoint on the Hilbert space L^2(G, D^{−1/2}) from Perlmutter et al. (2019).\nWe note that the self-adjointness shown here is interesting, as it links models that use symmetric and asymmetric versions of the Laplacian or adjacency matrix. Namely, Lemma 1 shows that the diffusion matrix P (which is column normalized but not row normalized) is self-adjoint, as an operator, and can thus be considered as “symmetric” in a suitable inner product space, thus establishing a theoretical link between these design choices.\nAs a second relaxation, we propose to replace the handcrafted dyadic scales in Eq. 1 with an adaptive monotonic sequence of integer diffusion time scales 0 < t_1 < · · · < t_J, which can be selected or tuned via training.
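As a quick numerical sanity check of Lemma 1 (ours, using a random weighted graph), the self-adjointness of P_α under ⟨·,·⟩_{D^{−1/2}} can be verified directly; the adaptive filter bank built from the scales above is constructed next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weighted undirected graph on n nodes (symmetric weights, no self-loops).
n = 8
A = rng.random((n, n))
W = np.triu(A, 1); W = W + W.T
d = W.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)

alpha = 0.7
P_alpha = alpha * np.eye(n) + (1 - alpha) * W @ np.diag(1.0 / d)  # column-normalized

def inner(u, v):
    """<u, v>_{D^{-1/2}} = <D^{-1/2} u, D^{-1/2} v>."""
    return (D_inv_sqrt @ u) @ (D_inv_sqrt @ v)

x, y = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(inner(P_alpha @ x, y), inner(x, P_alpha @ y))  # Lemma 1, numerically
```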
Then, an adaptive filter bank is constructed as W′_J := {Ψ′_j, Φ′_J}_{j=0}^{J−1}, with\nΦ′_J := P_α^{t_J}, Ψ′_0 := I_n − P_α^{t_1}, Ψ′_j := P_α^{t_j} − P_α^{t_{j+1}}, 1 ≤ j ≤ J − 1. (4)\nThe following theorem shows that for any selection of scales, the relaxed construction W′_J yields a nonexpansive frame, similar to the result from Perlmutter et al. (2019) shown for the original handcrafted construction. Theorem 1. There exists a constant C > 0 that only depends on t_1 and t_J such that for all x ∈ L^2(G, D^{−1/2}),\nC‖x‖²_{D^{−1/2}} ≤ ‖Φ′_J x‖²_{D^{−1/2}} + Σ_{j=0}^{J−1} ‖Ψ′_j x‖²_{D^{−1/2}} ≤ ‖x‖²_{D^{−1/2}},\nwhere the norm considered here is the one induced by the space L^2(G, D^{−1/2}).\nIntuitively, the upper (i.e., nonexpansive) frame bound implies stability in the sense that small perturbations in the input graph signal will only result in small perturbations in the representation extracted by the constructed filter bank. Further, the lower frame bound ensures certain energy preservation by the constructed filter bank, thus indicating that the nonexpansiveness is not implemented in a trivial fashion (e.g., by constant features independent of the input signal).\nIn the next section we leverage the two relaxations described here to design a neural network architecture for learning the configuration α, t_1, . . . , t_J of this relaxed construction via backpropagation through the resulting scattering filter cascade. The following theorem establishes that any such configuration, extracted from W′_J via Eqs. 2-3, is permutation equivariant at the node level and permutation invariant at the graph level. This guarantees that the extracted (in this case learned) features indeed encode intrinsic graph geometry rather than a priori indexation. Theorem 2. Let U′_p and S′_{p,q} be defined as in Eqs. 2 and 3 (correspondingly), with the filters from W′_J with an arbitrary configuration 0 < α < 1, 0 < t_1 < · · · < t_J. Then, for any permutation Π over the nodes of G, and any graph signal x ∈ L^2(G, D^{−1/2}),\nU′_p Πx = Π U′_p x and S′_{p,q} Πx = S′_{p,q} x, for p ∈ ∪_{m∈N} N_0^m, q ∈ N,\nwhere geometric scattering implicitly considers here the node ordering supporting its input signal.\nWe note that the results in Lemma 1 and Theorems 1-2, as well as their proofs, closely follow the theoretical framework proposed by Perlmutter et al. (2019). We carefully account here for the relaxed learned configuration, which replaces the originally handcrafted configuration there. For completeness, the adjusted proofs appear in Sec. A of the Appendix." }, { "heading": "4 LEARNABLE GEOMETRIC SCATTERING NETWORK ARCHITECTURE", "text": "In order to implement the relaxed geometric scattering construction (Sec. 3) via a trainable neural network, throughout this section we consider an input graph signal x ∈ R^n or, equivalently, a collection of graph signals X ∈ R^{n×N_{ℓ−1}}. The propagation of these signals can be divided into three major modules. First, a diffusion module implements the Markov process that forms the basis of the filter bank and transform, while allowing learning of the laziness parameter α. Then, a scattering module implements the filters and the corresponding cascade, while allowing the learning of the scales t_1, . . . , t_J. Finally, the aggregation module collects the extracted features into a graph-level representation and produces the task-dependent output.\nBuilding a diffusion process. We build a set of m ∈ N subsequent diffusion steps of the signal x by iteratively multiplying the diffusion matrix P_α to the left of the signal, resulting in\n[P_α x, P_α^2 x, P_α^3 x, . . .
, P_α^m x],\nSince P_α is often sparse, for efficiency reasons these filter responses are implemented via an RNN structure consisting of m RNN modules. Each module propagates the incoming hidden state h_{t−1}, t = 1, . . . , m, using P_α, with the readout o_t equal to the produced hidden state,\nh_t := P_α h_{t−1}, o_t := h_t.\nOur architecture and theory enable the implementation of either trainable or nontrainable α, which we believe will be useful for future work as indicated, for example, in Gao & Ji (2019). However, in the applications considered here (see Sec. 5), we find that training α made training unstable and did not improve performance. Therefore, for simplicity, we leave it fixed as α = 1/2 for the remainder of this work. In this case, the RNN portion of the network contains no trainable parameters, thus speeding up the computation, but still enables a convenient gradient flow back to the model input.\nLearning diffusion filter bank. Next, we consider the selection of J ≤ m diffusion scales for the relaxed filter bank construction with the wavelets defined according to Eq. 5. We found this was the most influential part of the architecture. We experimented with methods of increasing flexibility:\n1. Selection of {t_j}_{j=1}^{J−1} as dyadic scales (as in Sec. 2 and Eq. 1), fixed for all datasets (LEGS-FIXED),\n2. Selection of each t_j using softmax and sorting by j, learnable per model (LEGS-FCN and LEGS-RBF, depending on the output layer explained below).\nFor the softmax selection, we use a selection matrix F ∈ R^{J×m}, where each row F_{(j,·)}, j = 1, . . . , J, is dedicated to identifying the diffusion scale of the wavelet P_α^{t_j} via a one-hot encoding. This is achieved by setting\nF := softmax(Θ) = [softmax(θ_1), softmax(θ_2), . . . , softmax(θ_J)]^T,\nwhere the θ_j ∈ R^m constitute the rows of the trainable weight matrix Θ. While this construction may not strictly guarantee an exact one-hot encoding, we assume that the softmax activations yield a sufficient approximation. Further, without loss of generality, we assume that the rows of F are ordered according to the position of the leading “one” activated in every row. In practice, this can be easily enforced by reordering the rows. We now construct the filter bank W̃_F := {Ψ̃_j, Φ̃_J}_{j=0}^{J−1} with the filters\nΦ̃_J x = Σ_{t=1}^{m} F_{(J,t)} P_α^t x, Ψ̃_0 x = (I_n − Σ_{t=1}^{m} F_{(1,t)} P_α^t) x, (5)\nΨ̃_j x = Σ_{t=1}^{m} [F_{(j,t)} P_α^t x − F_{(j+1,t)} P_α^t x], 1 ≤ j ≤ J − 1,\nmatching and implementing the construction of W′_J from Eq. 4.\nAggregating and classifying scattering features. While many approaches may be applied to aggregate node-level features into graph-level features, such as max, mean, and sum pooling, or the more powerful TopK (Gao & Ji, 2019) or attention pooling (Veličković et al., 2018), we follow the statistical-moment aggregation explained in Secs. 2-3 (motivated by Gao et al., 2019; Perlmutter et al., 2019) and leave exploration of other pooling methods to future work. As shown in Gao et al. (2019) on graph classification, this aggregation works particularly well in conjunction with support vector machines (SVMs) based on the radial basis function (RBF) kernel.\nHere, we consider two configurations for the task-dependent output layer of the network, either using a small neural network with two fully connected layers, which we denote LEGS-FCN, or using a modified RBF network (Broomhead & Lowe, 1988), which we denote LEGS-RBF, to produce the final classification. The latter configuration more accurately processes scattering features, as shown in Table 2.
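Before detailing the RBF output layer, a condensed PyTorch sketch (ours, not the authors' released code) of the diffusion and scale-selection modules described above is given below; moment aggregation and the FCN/RBF heads are omitted, and all module and variable names are ours:

```python
import torch
import torch.nn as nn

class LearnableScattering(nn.Module):
    """Sketch of the LEGS diffusion + scale-selection modules (Eq. 5)."""
    def __init__(self, J: int = 4, m: int = 16, alpha: float = 0.5):
        super().__init__()
        self.J, self.m, self.alpha = J, m, alpha
        self.theta = nn.Parameter(torch.randn(J, m))   # rows of the selection matrix F

    def diffuse(self, x, W):
        """Return [P_a x, P_a^2 x, ..., P_a^m x] with P_a = a I + (1-a) W D^{-1}."""
        d = W.sum(dim=1).clamp(min=1e-8)
        P = self.alpha * torch.eye(len(W)) + (1 - self.alpha) * W / d  # columns / degree
        out, h = [], x
        for _ in range(self.m):                        # RNN-style repeated propagation
            h = P @ h
            out.append(h)
        return torch.stack(out)                        # shape (m, n, n_features)

    def forward(self, x, W):
        diffusions = self.diffuse(x, W)                # (m, n, f)
        F = torch.softmax(self.theta, dim=1)           # (J, m), approximately one-hot rows
        # Row j of F "selects" the diffusion scale t_j, i.e., P_a^{t_j} x.
        selected = torch.einsum("jm,mnf->jnf", F, diffusions)
        phi = selected[-1]                             # low-pass response, Phi_J x
        psi = [x - selected[0]]                        # Psi_0 x
        psi += [selected[j] - selected[j + 1] for j in range(self.J - 1)]
        return torch.abs(torch.stack(psi)), phi        # band-pass responses (after |.|), low-pass
```

Because the selection is a differentiable softmax rather than a hard argmax, gradients flow from the task loss back to Θ, which is how the scales are tuned per model.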
Our RBF network works by first initializing a fixed number of movable anchor points. Then, for every point, new features are calculated based on the radial distances to these anchor points. In previous work on radial basis networks these anchor points were initialized independent of the data. We found that this led to training issues if the range of the data was not similar to the initialization of the centers. Instead, we first use a batch normalization layer to constrain the scale of the features and then pick anchors randomly from the initial features of the first pass through our data. This gives an RBF-kernel network with anchors that are always in the range of the data. Our RBF layer is then RBF(x) = φ(‖BatchNorm(x)− c‖) with φ(x) = e−‖x‖2 ." }, { "heading": "5 EMPIRICAL RESULTS", "text": "Here we show results of LEGSNet on whole graph classification and graph regression tasks, that arise in a variety of contexts, with emphasis on the more complex biochemic datasets. We use biochemical graph datasets as they represent a new challenge in the field of graph learning. Unlike\nother types of data, these datasets do not exhibit the small-world structure of social datasets and may have large graph diameters for their size. Further, the connectivity patterns of biomolecules are very irregular due to 3D folding and long range connections, and thus ordinary local node aggregation methods may miss such connectivity differences." }, { "heading": "5.1 WHOLE GRAPH CLASSIFICATION", "text": "We perform whole graph classification by using eccentricity and clustering coefficient as node features as is done in Gao et al. (2019). We compare against graph convolutional networks (GCN) (Kipf & Welling, 2016), GraphSAGE (Hamilton et al., 2017), graph attention network (GAT) (Veličković et al., 2018), graph isomorphism network (GIN) (Xu et al., 2019), Snowball network (Luan et al., 2019), and fixed geometric scattering with a support vector machine classifier (GS-SVM) as in Gao et al. (2019), and a baseline which is a 2-layer neural network on the features averaged across nodes (disregarding graph structure). These comparisons are meant to inform when including learnable graph scattering features are helpful in extracting whole graph features. Specifically, we are interested in the types of graph datasets where existing graph neural network performance can be improved upon with scattering features. We evaluate these methods across 7 benchmark biochemical datasets: DD, ENZYMES, MUTAG, NCI1, NCI109, PROTEINS, and PTC where the goal is to classify between two or more classes of compounds with hundreds to thousands of graphs and tens to hundreds of nodes (See Table 1). For completeness we also show results on six social network datasets in Table S2. For more specific information on individual datasets see Appendix B. We use 10-fold cross validation on all models which is elaborated on in Appendix C. For an ensembling comparison to Scattering-GCN (Min et al., 2020) see Appendix D.\nLEGS outperforms on biological datasets. A somewhat less explored domain for GNNs is in biochemical graphs that represent molecules and tend to be overall smaller and less connected (see Tables 1 and S1) than social networks. In particular we find that LEGSNet outperforms other methods by a significant margin on biochemical datasets with relatively small but high diameter graphs (NCI1, NCI109, ENZYMES, PTC), as shown in Table 2. 
On extremely small graphs we find that GS-SVM performs best, which is expected as other methods with more parameters can easily overfit the data. We reason that the performance increases exhibited by LEGSNet, and to a lesser extent GS-SVM, on these chemical and biological benchmarks is due the ability of geometric scattering to compute complex connectivity features via its multiscale diffusion wavelets. Thus, methods that rely on a scattering construction would in general perform better, with the flexibility and trainability LEGSNet giving it an edge on most tasks.\nLEGS performs consistently on social network datasets. On the social network datasets LEGSNet performs consistently well, although its benefits here are not as clear as in the biochemical datasets. Ignoring the fixed scattering transform GS-SVM, which was tuned in Gao et al. (2019) with a focus on these particular social network datasets, a version of LEGSNet is best on three out of the six social datasets and second best on the other three. Since the advantages are clearer in the biochemical domain, we focus on this in the remainder of this section. However, for completeness, we provide results on social network datasets in Table S2, and leave further discussion to Appendix B.1.\nLEGS preserves enzyme exchange preferences while increasing performance. One advantage of geometric scattering over other graph embedding techniques lies in the rich information present within the scattering feature space. This was demonstrated in Gao et al. (2019) where it was shown that the embeddings created through fixed geometric scattering can be used to accurately infer inter-graph relationships. Scattering features of enzyme graphs within the ENZYMES dataset (Borgwardt et al., 2005) possessed sufficient\nglobal information to recreate the enzyme class exchange preferences observed empirically by Cuesta et al. (2015), using only linear methods of analysis, and despite working with a much smaller and artificially balanced dataset. We demonstrate here that LEGSNet retains similar descriptive capabilities, as shown in Figure 2 via chord diagrams where each exchange preference between enzyme classes (estimated as suggested in Gao et al., 2019) is represented as a ribbon of the corresponding size. Our results here (and in Table S5, which provides complementary quantitative comparison) show that, with relaxations on the scattering parameters, LEGS-FCN achieves better classification accuracy than both LEGS-FIXED and GCN (see Table 1) while also retaining a more descriptive embedding that maintains the global structure of relations between enzyme classes. We ran two varieties of LEGSNet on the ENZYMES dataset: LEGS-FIXED and LEGSFCN, which allows the diffusion scales to be learned. For comparison, we also ran a standard GCN whose graph embeddings were obtained via mean pooling. To infer enzyme ex-\nchange preferences from their embeddings, we followed Gao et al. (2019) in defining the distance from an enzyme e to the enzyme class ECj as dist(e,ECj) := ‖ve − projCj (ve)‖, where vi is the embedding of e, and Cj is the PCA subspace of the enzyme feature vectors within ECj . The distance between the enzyme classes ECi and ECj is the average of the individual distances, mean{dist(e,ECj) : e ∈ ECi}. 
From here, the affinity between two enzyme classes is computed as pref(ECi,ECj) = wi/min(\nDi,i Di,j , Dj,j Dj,i ), where wi is the percentage of enzymes in class i which are closer to another class than their own, and Di,j is the distance between ECi and ECj .\nRobustness to reduced training set size. We remark that similar to the robustness shown in (Gao et al., 2019) for handcrafted scattering, LEGSNet is able to maintain accuracy even when the training set size is shrunk to as low as 20% of the dataset, with a median decrease of 4.7% accuracy as when 80% of the data is used for training, as discussed in the supplement (see Table S3).\n5.2 GRAPH REGRESSION\nWe next evaluate learnable scattering on two graph regression tasks, the QM9 (Gilmer et al., 2017; Wu et al., 2018) graph regression dataset, and a new task from the critical assessment of structure prediction (CASP) challenge (Moult et al., 2018). On the CASP task, the main objective is to score protein structure prediction/simulation models in terms of the discrepancy between their predicted structure and the actual structure of the protein (which is known a priori). The accuracy of such 3D structure predictions are evaluated using a variety of met-\nrics, but we focus on the global distance test (GDT) score (Modi et al., 2016). The GDT score measures the similarity between tertiary structures of two proteins with amino-acid correspondence. A higher score means two structures are more similar. For a set of predicted 3D structures for a protein, we would like to score their quality as quantified by the GDT score.\nFor this task we use the CASP12 dataset (Moult et al., 2018) and preprocess the data similarly to Ingraham et al. (2019), creating a KNN graph between proteins based on the 3D coordinates of each amino acid. From this KNN graph we regress against the GDT score. We evaluate on 12 proteins from the CASP12 dataset and choose random (but consistent) splits with 80% train, 10% validation, and 10% test data out of 4000 total structures. We are only concerned with structure similarity so use no non-structural node features.\nLEGSNet outperforms on all CASP targets Across all CASP targets we find that LEGSNet significantly outperforms GNN and baseline methods (See Table S4). This performance improvement is particularly stark on the easiest structures (measured by average GDT) but is consistent across all structures. In Figure 3 we show the relationship between percent improvement of LEGSNet over the GCN model and the average GDT score across the target structures. We draw attention to target t0879, where LEGSNet shows the greatest improvement over other methods. This target has long range dependencies (Ovchinnikov et al., 2018) as it exhibits metal coupling (Li et al., 2015)\ncreating long range connections over the sequence. Since other methods are unable to model these long range connections LEGSNet is particularly important on these more difficult to model targets.\nLEGSNet outperforms on the QM9 dataset We evaluate the performance of LEGSNet on the quantum chemistry dataset QM9 (Gilmer et al., 2017; Wu et al., 2018), which consists of 130,000 molecules with ∼18 nodes per molecule. We use the node features from Gilmer et al. (2017), with the addition of eccentricity and clustering coefficient features, and ignore the edge features. We whiten all targets to have zero mean and unit standard deviation. We train each network against all 19 targets and evaluate the mean squared error on the test set with mean and std. 
over four runs. We find that learning the scales improves the overall MSE, and particularly improves the results over difficult targets (see Table 4 for overall results and Table S7 for results by target). Indeed, on more difficult targets (i.e., those with large test error) LEGS-FCN is able to\nperform better, where on easy targets GIN is the best. Overall, scattering features offer a robust signal over many targets, and while perhaps less flexible (by construction), they achieve good average performance with significantly fewer parameters." }, { "heading": "6 CONCLUSION", "text": "In this work we have established a relaxation from fixed geometric scattering with strong guarantees to a more flexible network with better performance by learning data dependent scales. Allowing the network to choose data-driven diffusion scales leads to improved performance particularly on biochemical datasets, while keeping strong guarantees on extracted features. This parameterization has advantages in representing long range connections with a small number of weights, which are necessary in complex biochemical data. This also opens the possibility to provide additional relaxation to enable node-specific or graph-specific tuning via attention mechanisms, which we regard as an exciting future direction, but out of scope for the current work." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A PROOFS FOR SECTION 3", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 1", "text": "Let Mα = D−1/2PαD1/2 then it can be verified that Mα is a symmetric conjugate of Pα, and by construction is self-adjoint with respect to the standard inner product of L2(G). Let x,y ∈ L2(G,D−1/2) then we have\n〈Pαx,y〉D−1/2 = 〈D−1/2Pαx,D−1/2y〉 = 〈D−1/2D1/2MαD−1/2x,D−1/2y〉 = 〈MαD−1/2x,D−1/2y〉 = 〈D−1/2x,MαD−1/2y〉 = 〈D−1/2x,D−1/2D1/2MαD−1/2y〉 = 〈D−1/2x,D−1/2Pαy〉 = 〈x,Pαy〉D−1/2 ,\nwhich gives the result of the lemma." }, { "heading": "A.2 PROOF OF THEOREM 1", "text": "As shown in the previous proof (Sec. A.1), Pα has a symmetric conjugate Mα. Given the eigendecomposition Mα = QΛQT , we can write P tα = D\n1/2QΛtQTD−1/2, giving the eigendecomposition of the propagated diffusion matrices. Furthermore, it can be verified that the eigenvalues on the diagonal of Λ are nonnegative. Briefly, this results from graph Laplacian eigenvalues being within the range [0, 1], which means those of WD−1 are in [−1, 1], which combined with 1/2 ≤ α ≤ 1 result in λi := [Λ]ii ∈ [0, 1] for every j. Next, given this decomposition we can write:\nΦ′J = D 1/2QΛtJQTD−1/2, Ψ′j = D 1/2Q(Λtj − Λtj+1)QTD−1/2, 0 ≤ j ≤ J − 1.\nwhere we set t0 = 0 to simplify notations. Then, we have:\n‖Φ′Jx‖2D−1/2 = 〈Φ ′ Jx,Φ ′ Jx〉D−1/2\n= 〈D−1/2D1/2QΛtJQTD−1/2x, D−1/2D1/2QΛtJQTD−1/2x〉 = xTD−1/2QΛtJQTQΛtJQTD−1/2x = (xTD−1/2QΛtJ )(ΛtJQTD−1/2x)\n= ‖ΛtJQTD−1/2x‖22 Further, since Q is orthogonal (as it is constructed from an eigenbasis of a symmetric matrix), if we consider a change of variable to y = QTD−1/2x, we have ‖x‖2\nD−1/2 = ‖D−1/2x‖22 = ‖y‖22\nwhile ‖Φ′Jx‖2D−1/2 = ‖Λ tJy‖22. 
Similarly, we can also reformulate the operation of other filters in terms of diagonal matrices applied to y asW ′J as ‖Ψ′jx‖2D−1/2 = ‖(Λ tj − Λtj+1)y‖22.\nGiven the reformulation in terms of y and standard L2(G), we can now write\n‖ΛtJy‖22 + J−1∑ j=0 ‖(Λtj − Λtj+1)y‖22 = n∑ i=1 y2i · ( λ2tJ + ∑J−1 j=0 (λ tj i − λ tj+1 i ) 2 ) .\nThen, since 0 ≤ λi ≤ 1 and 0 = t0 < t1 < · · · < tJ we have\nλ2tJ + J−1∑ j=0 (λ tj i − λ tj+1 i ) 2 ≤ λtJ + J−1∑ j=0 λ tj i − λ tj+1 i 2 = (λtJ + λt0i − λtJi )2 = 1,\nwhich yields the upper bound ‖ΛtJy‖22 + ∑J−1 j=0 ‖(Λtj − Λtj+1)y‖22 ≤ ‖y‖22. On the other hand, since t1 > 0 = t0, then we also have\nλ2tJ + J−1∑ j=0 (λ tj i − λ tj+1 i ) 2 ≥ λ2tJ + (1− λt1i ) 2\nand therefore, by setting C := min0≤ξ≤1(ξ2tJ + (1− ξt1)2) > 0, whose positivity is not difficult to verify, we get the lower bound ‖ΛtJy‖22 + ∑J−1 j=0 ‖(Λtj − Λtj+1)y‖22 ≥ C‖y‖22. Finally, applying the reverse change of variable to x and L2(G,D−1/2) yields the result of the theorem." }, { "heading": "A.3 PROOF OF THEOREM 2", "text": "Denote the permutation group on n elements as Sn, then for a permutation Π ∈ Sn we let G = Π(G) be the graph obtained by permuting the vertices of G with Π. The corresponding permutation operation on a graph signal x ∈ L2(G,D−1/2) gives a signal Πx ∈ L2(G,D−1/2), which we implicitly considered in the statement of the theorem, without specifying these notations for simplicity. Rewriting the statement of the theorem more rigorously with the introduced notations, we aim to show that U ′ pΠx = ΠU ′ px and S ′ p,qΠx = S ′ p,qx under suitable conditions, where the operation U ′ p from G on the permuted graph G is denoted here by U ′p and likewise for S′p,q we have S ′ p,q .\nWe start by showing U ′p is permutation equivariant. First, we notice that for any Ψj , 0 < j < J we have that ΨjΠx = ΠΨjx, as for 1 ≤ j ≤ J − 1\nΨjΠx = (ΠP tjΠT −ΠP tj+1ΠT )Πx\n= Π(P tj − P tj+1)x = ΠΨjx.\nSimilar reasoning also holds for j ∈ {0, J}. Further, notice that for the element-wise nature of the absolute value nonlinearity yields |Πx| = Π|x| for any permutation matrix Π. Using these two observations, it follows inductively that\nU ′ pΠx :=Ψ ′ jm |Ψ ′ jm−1 . . . |Ψ ′ j2 |Ψ ′ j1Πx|| . . . |\n=Ψ′jm |Ψ ′ jm−1 . . . |Ψ ′ j2Π|Ψ ′ j1x|| . . . |\n...\n=ΠΨ′jm |Ψ ′ jm−1 . . . |Ψ ′ j2 |Ψ ′ j1x|| . . . | =ΠU ′px.\nTo show S′p,q is permutation invariant, first notice that for any statistical moment q > 0, we have |Πx|q = Π|x|q and further as sums are commutative, ∑ j(Πx)j = ∑ j xj . We then have\nS ′ p,qΠx = n∑ i=1 |U ′pΠx[vi]|q = n∑ i=1 |ΠU ′px[vi]|q = n∑ i=1 |U ′px[vi]|q = S′p,qx,\nwhich, together with the previous result, completes the proof of the theorem." }, { "heading": "B DATASETS", "text": "In this section we further analyze individual datasets. Relating composition of the dataset as shown in Table S1 to the relative performance of our models as shown in Table S2.\nDD Dobson & Doig (2003): Is a dataset extracted from the protein data bank (PDB) of 1178 high resolution proteins. The task is to distinguish between enzymes and non-enzymes. Since these are high resolution structures, these graphs are significantly larger than those found in our other biochemical datasets with a mean graph size of 284 nodes with the next largest biochemical dataset with a mean size of 39 nodes.\nENZYMES Borgwardt et al. (2005): Is a dataset of 600 enzymes divided into 6 balanced classes of 100 enzymes each. As we analyzed in the main text, scattering features are better able to preserve the structure between classes. 
LEGS-FCN slightly relaxes this structure but improves accuracy from 32 to 39% over LEGS-FIXED.\nNCI1, NCI109 Wale et al. (2008): Contains slight variants of 4100 chemical compounds encoded as graphs. Each compound is separated into one of two classes based on its activity against nonsmall cell lung cancer and ovarian cancer cell lines. Graphs in this dataset are 30 nodes with a similar number of edges. This makes for long graphs with high diameter.\nPROTEINS Borgwardt et al. (2005): Contains 1178 protein structures with the goal of classifying enzymes vs. non enzymes. GCN outperforms all other models on this dataset, however the Baseline model, where no structure is used also performs very similarly. This suggests that the graph structure within this dataset does not add much information over the structure encoded in the eccentricity and clustering coefficient.\nPTC Toivonen et al. (2003): Contains 344 chemical compound graphs divided into two classes based on whether or not they cause cancer in rats. This dataset is very difficult to classify without features however LEGS-RBF and LEGS-FCN are able to capture the long range connections slightly better than other methods.\nCOLLAB Yanardag & Vishwanathan (2015): 5000 ego-networks of different researchers from high energy physics, condensed matter physics or astrophysics. The goal is to determine which field the research belongs to. The GraphSAGE model performs best on this dataset although the LEGS-RBF network performs nearly as well. Ego graphs have a very small average diameter. Thus shallow networks can perform quite well on them as is the case here.\nIMDB Yanardag & Vishwanathan (2015): For each graph nodes represent actresses/actors and there is an edge between them if they are in the same move. These graphs are also ego graphs around specific actors. IMDB-BINARY classifies between action and romance genres. IMDB-MULTI classifies between 3 classes. Somewhat surprisingly GS-SVM performs the best with other LEGS networks close behind. This could be due to oversmoothing on the part of GCN and GraphSAGE when the graphs are so small.\nREDDIT Yanardag & Vishwanathan (2015): Graphs in REDDIT-BINARY/MULTI-5K/MULTI12K datasets each graph represents a discussion thread where nodes correspond to users and there is an edge between two nodes if one replied to the other’s comment. The task is to identify which subreddit a given graph came from. On these datasets GCN outperforms other models.\nQM9 Gilmer et al. (2017); Wu et al. (2018): Graphs in the QM9 dataset each represent chemicals with 18 atoms. Regression targets represent chemical properties of the molecules." }, { "heading": "B.1 PERFORMANCE OF LEGSNET ON SOCIAL NETWORK DATASETS", "text": "Table S2 shows that our model outperforms other GNNs on some biomedical benchmarks and that it performs comparably on social network datasets. Out of the six social network datasets, ignoring the fixed scattering model GS-SVM, which has been hand tuned with these datasets in mind, our model outperforms both GNN models on three of them, and is second best on the other three. This is at least comparable if not slightly superior performance. GraphSAGE does a bit better on Collab, but much worse on IMDB-Binary and Reddit-Binary. GCN does a bit better on Reddit-Multi, but worse on Collab, IMDB-Binary, and Reddit-Binary.\nLEGSNet has significantly fewer parameters and achieves comparable or superior accuracy on common benchmarks. 
Even when our method shows comparable results, and definitely when it outperforms other GNNs, we believe that its smaller number of parameters could be useful in applications with limited compute or limited training examples.\nTable S1: Dataset statistics, diameter, nodes, edges, clustering coefficient averaged over all graphs. Split into bio-chemical and social network types.\n# Graphs # Classes Diameter Nodes Edges Clust. Coeff\nDD 1178 2 19.81 284.32 715.66 0.48 ENZYMES 600 6 10.92 32.63 62.14 0.45 MUTAG 188 2 8.22 17.93 19.79 0.00 NCI1 4110 2 13.33 29.87 32.30 0.00 NCI109 4127 2 13.14 29.68 32.13 0.00 PROTEINS 1113 2 11.62 39.06 72.82 0.51 PTC 344 2 7.52 14.29 14.69 0.01\nCOLLAB 5000 3 1.86 74.49 2457.22 0.89 IMDB-BINARY 1000 2 1.86 19.77 96.53 0.95 IMDB-MULTI 1500 3 1.47 13.00 65.94 0.97 REDDIT-BINARY 2000 2 8.59 429.63 497.75 0.05 REDDIT-MULTI-12K 11929 11 9.53 391.41 456.89 0.03 REDDIT-MULTI-5K 4999 5 10.57 508.52 594.87 0.03\nTable S2: Mean ± std. over 10 test sets on bio-chemical and social datasets.\nLEGS-RBF LEGS-FCN LEGS-FIXED GCN GraphSAGE GAT GIN GS-SVM Baseline\nDD 72.58 ± 3.35 72.07 ± 2.37 69.09 ± 4.82 67.82 ± 3.81 66.37 ± 4.45 68.50 ± 3.62 42.37 ± 4.32 72.66 ± 4.94 75.98 ± 2.81 ENZYMES 36.33 ± 4.50 38.50 ± 8.18 32.33 ± 5.04 31.33 ± 6.89 15.83 ± 9.10 25.83 ± 4.73 36.83 ± 4.81 27.33 ± 5.10 20.50 ± 5.99 MUTAG 33.51 ± 4.34 82.98 ± 9.85 81.84 ± 11.24 79.30 ± 9.66 81.43 ± 11.64 79.85 ± 9.44 83.57 ± 9.68 85.09 ± 7.44 79.80 ± 9.92 NCI1 74.26 ± 1.53 70.83 ± 2.65 71.24 ± 1.63 60.80 ± 4.26 57.54 ± 3.33 62.19 ± 2.18 66.67 ± 2.90 69.68 ± 2.38 56.69 ± 3.07 NCI109 72.47 ± 2.11 70.17 ± 1.46 69.25 ± 1.75 61.30 ± 2.99 55.15 ± 2.58 61.28 ± 2.24 65.23 ± 1.82 68.55 ± 2.06 57.38 ± 2.20 PROTEINS 70.89 ± 3.91 71.06 ± 3.17 67.30 ± 2.94 74.03 ± 3.20 71.87 ± 3.50 73.22 ± 3.55 75.02 ± 4.55 70.98 ± 2.67 73.22 ± 3.76 PTC 57.26 ± 5.54 56.92 ± 9.36 54.31 ± 6.92 56.34 ± 10.29 55.22 ± 9.13 55.50 ± 6.90 55.82 ± 8.07 56.96 ± 7.09 56.71 ± 5.54 COLLAB 75.78 ± 1.95 75.40 ± 1.80 72.94 ± 1.70 73.80 ± 1.73 76.12 ± 1.58 72.88 ± 2.06 62.98 ± 3.92 74.54 ± 2.32 64.76 ± 2.63 IMDB-BINARY 64.90 ± 3.48 64.50 ± 3.50 64.30 ± 3.68 47.40 ± 6.24 46.40 ± 4.03 45.50 ± 3.14 64.20 ± 5.77 66.70 ± 3.53 47.20 ± 5.67 IMDB-MULTI 41.93 ± 3.01 40.13 ± 2.77 41.67 ± 3.19 39.33 ± 3.13 39.73 ± 3.45 39.73 ± 3.61 38.67 ± 3.93 42.13 ± 2.53 39.53 ± 3.63 REDDIT-BINARY 86.10 ± 2.92 78.15 ± 5.42 85.00 ± 1.93 81.60 ± 2.32 73.40 ± 4.38 73.35 ± 2.27 71.40 ± 6.98 85.15 ± 2.78 69.30 ± 5.08 REDDIT-MULTI-12K 38.47 ± 1.07 38.46 ± 1.31 39.74 ± 1.31 42.57 ± 0.90 32.17 ± 2.04 32.74 ± 0.75 24.45 ± 5.52 39.79 ± 1.11 22.07 ± 0.98 REDDIT-MULTI-5K 47.83 ± 2.61 46.97 ± 3.06 47.17 ± 2.93 52.79 ± 2.11 45.71 ± 2.88 44.03 ± 2.57 35.73 ± 8.35 48.79 ± 2.95 36.41 ± 1.80" }, { "heading": "C TRAINING DETAILS", "text": "We train all models for a maximum of 1000 epochs with an initial learning rate of 1e−4 using the ADAM optimizer (Kingma & Ba, 2015). We terminate training if validation loss does not improve for 100 epochs testing every 10 epochs. Our models are implemented with Pytorch Paszke et al. (2019) and Pytorch geometric. Models were run on a variety of hardware resources. For all models we use q = 4 normalized statistical moments for the node to graph level feature extraction and m = 16 diffusion scales in line with choices in Gao et al. (2019)." }, { "heading": "C.1 CROSS VALIDATION PROCEDURE", "text": "For all datasets we use 10-fold cross validation with 80% training data 10% validation data and 10% test data for each model. 
We first split the data into 10 (roughly) equal partitions. For each model we take exactly one of the partitions to be the test set and one of the remaining nine to be the validation set. We then train the model on the remaining eight partitions using the cross-entropy loss on the validation for early stopping checking every ten epochs. For each test set, we use majority voting of the nine models trained with that test set. We then take the mean and standard deviation across these test set scores to average out any variability in the particular split chosen. This results in 900 models trained on every dataset. With mean and standard deviation over 10 ensembled models each with a separate test set." }, { "heading": "D ENSEMBLING EVALUATION", "text": "Recent work by Min et al. (2020) combines the features from a fixed scattering transform with a GCN network, showing that this has empirical advantages in semi-supervised node classification, and theoretical representation advantages over a standard Kipf & Welling (2016) style GCN. We\nTable S3: Mean ± std. over test set selection on cross-validated LEGS-RBF Net with reduced training set size.\nTrain, Val, Test % 80%, 10%, 10% 70%, 10%, 20% 40%, 10%, 50% 20%, 10%, 70%\nCOLLAB 75.78 ± 1.95 75.00 ± 1.83 74.00 ± 0.51 72.73 ± 0.59 DD 72.58 ± 3.35 70.88 ± 2.83 69.95 ± 1.85 69.43 ± 1.24 ENZYMES 36.33 ± 4.50 34.17 ± 3.77 29.83 ± 3.54 23.98 ± 3.32 IMDB-BINARY 64.90 ± 3.48 63.00 ± 2.03 63.30 ± 1.27 57.67 ± 6.04 IMDB-MULTI 41.93 ± 3.01 40.80 ± 1.79 41.80 ± 1.23 36.83 ± 3.31 MUTAG 33.51 ± 4.34 33.51 ± 1.14 33.52 ± 1.26 33.51 ± 0.77 NCI1 74.26 ± 1.53 74.38 ± 1.38 72.07 ± 0.28 70.30 ± 0.72 NCI109 72.47 ± 2.11 72.21 ± 0.92 70.44 ± 0.78 68.46 ± 0.96 PROTIENS 70.89 ± 3.91 69.27 ± 1.95 69.72 ± 0.27 68.96 ± 1.63 PTC 57.26 ± 5.54 57.83 ± 4.39 54.62 ± 3.21 55.45 ± 2.35 REDDIT-BINARY 86.10 ± 2.92 86.05 ± 2.51 85.15 ± 1.77 83.71 ± 0.97 REDDIT-MULTI-12K 38.47 ± 1.07 38.60 ± 0.52 37.55 ± 0.05 36.65 ± 0.50 REDDIT-MULTI-5K 47.83 ± 2.61 47.81 ± 1.32 46.73 ± 1.46 44.59 ± 1.02\nTable S4: Test set mean squared error on CASP GDT regression task across targets over 3 nonoverlapping test sets.\nLEGS-RBF LEGS-FCN LEGS-FIXED GCN GraphSAGE GIN Baseline\nt0860 197.68 ± 34.29 164.22 ± 10.28 206.20 ± 28.46 314.90 ± 29.66 230.45 ± 79.72 262.35 ± 66.88 414.41 ± 26.96 t0868 131.42 ± 8.12 127.71 ± 14.26 178.45 ± 5.64 272.14 ± 26.34 191.08 ± 21.96 170.05 ± 27.26 411.98 ± 57.39 t0869 106.69 ± 9.97 132.12 ± 31.37 104.47 ± 14.16 317.22 ± 12.75 244.38 ± 40.58 217.02 ± 57.01 393.12 ± 48.70 t0872 144.11 ± 24.88 148.20 ± 23.63 134.48 ± 8.25 293.96 ± 19.00 221.13 ± 28.74 240.89 ± 24.17 374.48 ± 33.70 t0879 89.00 ± 44.94 80.14 ± 16.21 64.63 ± 15.92 309.23 ± 69.40 172.41 ± 73.07 147.77 ± 15.72 364.79 ± 144.32 t0900 193.74 ± 10.78 171.05 ± 25.41 158.56 ± 9.87 254.11 ± 18.63 209.07 ± 11.90 265.77 ± 79.99 399.16 ± 83.48 t0912 113.00 ± 22.31 169.55 ± 27.35 150.70 ± 8.53 227.17 ± 22.11 192.28 ± 39.45 271.30 ± 28.89 406.25 ± 31.42 t0920 80.46 ± 14.98 136.94 ± 36.43 84.83 ± 19.70 361.19 ± 71.25 261.72 ± 59.67 191.86 ± 37.85 398.22 ± 25.60 t0921 187.89 ± 46.15 165.97 ± 42.39 142.97 ± 27.09 382.69 ± 20.27 260.49 ± 16.09 207.19 ± 24.84 363.92 ± 35.79 t0922 254.83 ± 91.28 110.54 ± 43.99 227.73 ± 26.41 366.72 ± 8.10 290.71 ± 7.22 130.46 ± 11.64 419.14 ± 45.49 t0942 188.55 ± 11.10 167.53 ± 22.01 137.21 ± 7.43 371.31 ± 9.90 233.78 ± 84.95 254.38 ± 47.21 393.03 ± 24.93 t0944 146.59 ± 8.41 138.67 ± 50.36 245.79 ± 58.16 263.03 ± 9.43 199.40 ± 51.11 157.90 ± 2.57 404.12 ± 40.82\nTable S5: Quantified distance 
between the empirically observed enzyme class exchange preferences of Cuesta et al. (2015) and the class exchange preferences inferred from LEGS-FIXED, LEGS-FCN, and a GCN. We measure the cosine distance between the graphs represented by the chord diagrams in Figure 2. As before, the self-affinities were discarded. LEGS-Fixed reproduces the exchange preferences the best, but LEGS-FCN still reproduces well and has significantly better classification accuracy.\nLEGS-FIXED LEGS-FCN GCN\n0.132 0.146 0.155\nensemble the learned features from a learnable scattering network (LEGS-FCN) with those of GCN and compare this to ensembling fixed scattering features with GCN as in Min et al. (2020), as well as the solo features. Our setting is slightly different in that we use the GCN features from pretrained networks, only training a small 2-layer ensembling network on the combined graph level features. This network consists of a batch norm layer, a 128 width fully connected layer, a leakyReLU activation, and a final classification layer down to the number of classes. In Table S6 we see that combining GCN features with fixed scattering features in LEGS-FIXED or learned scattering features in LEGS-FCN always helps classification. Learnable scattering features help more than fixed scattering features overall and particularly in the biochemical domain.\nTable S6: Mean ± standard deviation test set accuracy on biochemical and social network datasets.\nGCN GCN-LEGS-FIXED GCN-LEGS-FCN\nDD 67.82 ± 3.81 74.02 ± 2.79 73.34 ± 3.57 ENZYMES 31.33 ± 6.89 31.83 ± 6.78 35.83 ± 5.57 MUTAG 79.30 ± 9.66 82.46 ± 7.88 83.54 ± 9.39 NCI1 60.80 ± 4.26 70.80 ± 2.27 72.21 ± 2.32 NCI109 61.30 ± 2.99 68.82 ± 1.80 69.52 ± 1.99 PROTEINS 74.03 ± 3.20 73.94 ± 3.88 74.30 ± 3.41 PTC 56.34 ± 10.29 58.11 ± 6.06 56.64 ± 7.34 COLLAB 73.80 ± 1.73 76.60 ± 1.75 75.76 ± 1.83 IMDB-BINARY 47.40 ± 6.24 65.10 ± 3.75 65.90 ± 4.33 IMDB-MULTI 39.33 ± 3.13 39.93 ± 2.69 39.87 ± 2.24 REDDIT-BINARY 81.60 ± 2.32 86.90 ± 1.90 87.00 ± 2.36 REDDIT-MULTI-12K 42.57 ± 0.90 45.41 ± 1.24 45.55 ± 1.00 REDDIT-MULTI-5K 52.79 ± 2.11 53.87 ± 2.75 53.41 ± 3.07\nTable S7: Mean ± std. 
over four runs of mean squared error over 19 targets for the QM9 dataset, lower is better.\nLEGS-FCN LEGS-FIXED GCN GraphSAGE GIN Baseline\nTarget 0 0.749 ± 0.025 0.761 ± 0.026 0.776 ± 0.021 0.876 ± 0.083 0.786 ± 0.032 0.985 ± 0.020 Target 1 0.158 ± 0.014 0.164 ± 0.024 0.448 ± 0.007 0.555 ± 0.295 0.191 ± 0.060 0.593 ± 0.013 Target 2 0.830 ± 0.016 0.856 ± 0.026 0.899 ± 0.051 0.961 ± 0.057 0.903 ± 0.033 0.982 ± 0.027 Target 3 0.511 ± 0.012 0.508 ± 0.005 0.549 ± 0.010 0.688 ± 0.216 0.555 ± 0.006 0.805 ± 0.025 Target 4 0.587 ± 0.007 0.587 ± 0.006 0.609 ± 0.009 0.755 ± 0.177 0.613 ± 0.013 0.792 ± 0.010 Target 5 0.646 ± 0.013 0.674 ± 0.047 0.889 ± 0.014 0.882 ± 0.118 0.699 ± 0.033 0.833 ± 0.026 Target 6 0.018 ± 0.012 0.020 ± 0.011 0.099 ± 0.011 0.321 ± 0.454 0.012 ± 0.006 0.468 ± 0.005 Target 7 0.017 ± 0.005 0.024 ± 0.008 0.368 ± 0.015 0.532 ± 0.405 0.015 ± 0.005 0.379 ± 0.013 Target 8 0.017 ± 0.005 0.024 ± 0.008 0.368 ± 0.015 0.532 ± 0.404 0.015 ± 0.005 0.378 ± 0.013 Target 9 0.017 ± 0.005 0.024 ± 0.008 0.368 ± 0.015 0.532 ± 0.404 0.015 ± 0.005 0.378 ± 0.013 Target 10 0.017 ± 0.005 0.024 ± 0.008 0.368 ± 0.015 0.533 ± 0.404 0.015 ± 0.005 0.380 ± 0.014 Target 11 0.254 ± 0.013 0.279 ± 0.023 0.548 ± 0.023 0.617 ± 0.282 0.294 ± 0.003 0.631 ± 0.013 Target 12 0.034 ± 0.014 0.033 ± 0.010 0.215 ± 0.009 0.356 ± 0.437 0.020 ± 0.002 0.478 ± 0.014 Target 13 0.033 ± 0.014 0.033 ± 0.010 0.214 ± 0.009 0.356 ± 0.438 0.020 ± 0.002 0.478 ± 0.014 Target 14 0.033 ± 0.014 0.033 ± 0.010 0.213 ± 0.009 0.355 ± 0.438 0.020 ± 0.002 0.478 ± 0.014 Target 15 0.036 ± 0.014 0.036 ± 0.011 0.219 ± 0.009 0.359 ± 0.436 0.023 ± 0.002 0.479 ± 0.014 Target 16 0.002 ± 0.002 0.001 ± 0.001 0.017 ± 0.034 0.012 ± 0.022 0.000 ± 0.000 0.033 ± 0.013 Target 17 0.083 ± 0.047 0.079 ± 0.033 0.280 ± 0.354 0.264 ± 0.347 0.169 ± 0.206 0.205 ± 0.220 Target 18 0.062 ± 0.005 0.176 ± 0.231 0.482 ± 0.753 0.470 ± 0.740 0.321 ± 0.507 0.368 ± 0.525" } ]
2020
null
SP:c90a894d965bf8e529df296b9d5c76864aa5f4f9
[ "This paper describes a neural vocoder based on a diffusion probabilistic model. The model utilizes a fixed-length markov chain to convert between a latent uncorrelated Gaussian vector and a full-length observation. The conversion from observation to latent is fixed and amounts to adding noise at each step. The conversion from latent to observation reveals slightly more of the observation from the latent at each step via a sort of cancellation. This process is derived theoretically based on maximizing the variational lower bound (ELBO) of the model and follows Ho et al. (2020) who derived it for image generation. Thorough experiments show that the model produces high quality speech syntheses on the LJ dataset (MOS comparable to WaveNet and real speech) when conditionally synthesizing from the true mel spectrogram, while generating much more quickly than WaveNet. Perhaps more interesting and surprising, however, is that it generates very high quality and intelligible short utterances with no conditioning, and also admits to global conditioning, e.g., with a digit label." ]
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts the white noise signal into structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of variational bound on the data likelihood. DiffWave produces high-fidelity audio in different waveform generation tasks, including neural vocoding conditioned on mel spectrogram, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity from various automatic and human evaluations. 1
[ { "affiliations": [], "name": "Zhifeng Kong" }, { "affiliations": [], "name": "Wei Ping" }, { "affiliations": [], "name": "Jiaji Huang" }, { "affiliations": [], "name": "Kexin Zhao" } ]
[ { "authors": [ "Yang Ai", "Zhen-Hua Ling" ], "title": "A neural vocoder with hierarchical generation of amplitude and phase spectra for statistical parametric speech synthesis", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2020 }, { "authors": [ "Sercan Ö. Arık", "Mike Chrzanowski", "Adam Coates", "Gregory Diamos", "Andrew Gibiansky", "Yongguo Kang", "Xian Li", "John Miller", "Jonathan Raiman", "Shubho Sengupta", "Mohammad Shoeybi" ], "title": "Deep Voice: Real-time neural text-to-speech", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Sercan Ö. Arık", "Gregory Diamos", "Andrew Gibiansky", "John Miller", "Kainan Peng", "Wei Ping", "Jonathan Raiman", "Yanqi Zhou" ], "title": "Deep Voice 2: Multi-speaker neural text-to-speech", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Mikołaj Bińkowski", "Jeff Donahue", "Sander Dieleman", "Aidan Clark", "Erich Elsen", "Norman Casagrande", "Luis C Cobo", "Karen Simonyan" ], "title": "High fidelity speech synthesis with adversarial networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Nanxin Chen", "Yu Zhang", "Heiga Zen", "Ron J Weiss", "Mohammad Norouzi", "William Chan" ], "title": "WaveGrad: Estimating gradients for waveform generation", "venue": "arXiv preprint arXiv:2009.00713,", "year": 2020 }, { "authors": [ "Greg Diamos", "Shubho Sengupta", "Bryan Catanzaro", "Mike Chrzanowski", "Adam Coates", "Erich Elsen", "Jesse Engel", "Awni Hannun", "Sanjeev Satheesh" ], "title": "Persistent rnns: Stashing recurrent weights on-chip", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Chris Donahue", "Julian McAuley", "Miller Puckette" ], "title": "Adversarial audio synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Jeff Donahue", "Sander Dieleman", "Mikołaj Bińkowski", "Erich Elsen", "Karen Simonyan" ], "title": "End-to-end adversarial text-to-speech", "venue": "arXiv preprint arXiv:2006.03575,", "year": 2020 }, { "authors": [ "Jesse Engel", "Kumar Krishna Agrawal", "Shuo Chen", "Ishaan Gulrajani", "Chris Donahue", "Adam Roberts" ], "title": "Gansynth: Adversarial neural audio synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Jesse Engel", "Lamtharn Hantrakul", "Chenjie Gu", "Adam Roberts" ], "title": "Ddsp: Differentiable digital signal processing", "venue": "arXiv preprint arXiv:2001.04643,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Anirudh Goyal Alias Parth Goyal", "Nan Rosemary Ke", "Surya Ganguli", "Yoshua Bengio" ], "title": "Variational walkback: Learning a transition operator as a stochastic recurrent net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Swaminathan Gurumurthy", "Ravi Kiran Sarvadevabhatla", "R Venkatesh Babu" ], "title": "Deligan: Generative adversarial networks for diverse and limited data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard 
Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "arXiv preprint arXiv:2006.11239,", "year": 2020 }, { "authors": [ "Keith Ito" ], "title": "The LJ speech dataset. 2017", "venue": null, "year": 2017 }, { "authors": [ "Nal Kalchbrenner", "Erich Elsen", "Karen Simonyan", "Seb Noury", "Norman Casagrande", "Edward Lockhart", "Florian Stimberg", "Aaron van den Oord", "Sander Dieleman", "Koray Kavukcuoglu" ], "title": "Efficient neural audio synthesis", "venue": null, "year": 2018 }, { "authors": [ "Sungwon Kim", "Sang-gil Lee", "Jongyoon Song", "Sungroh Yoon" ], "title": "FloWaveNet: A generative flow for raw audio", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Kundan Kumar", "Rithesh Kumar", "Thibault de Boissiere", "Lucas Gestin", "Wei Zhen Teoh", "Jose Sotelo", "Alexandre de Brébisson", "Yoshua Bengio", "Aaron C Courville" ], "title": "Melgan: Generative adversarial networks for conditional waveform synthesis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chae Young Lee", "Anoop Toffy", "Gue Jun Jung", "Woo-Jin Han" ], "title": "Conditional wavegan", "venue": "arXiv preprint arXiv:1809.10636,", "year": 2018 }, { "authors": [ "Francesc Lluís", "Jordi Pons", "Xavier Serra" ], "title": "End-to-end music source separation: is it possible in the waveform domain", "venue": "arXiv preprint arXiv:1810.12187,", "year": 2018 }, { "authors": [ "Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio" ], "title": "SampleRNN: An unconditional end-to-end neural audio generation model", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Kasperi Palkama", "Lauri Juvela", "Alexander Ilin" ], "title": "Conditional spoken digit generation with stylegan", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Kainan Peng", "Wei Ping", "Zhao Song", "Kexin Zhao" ], "title": "Non-autoregressive neural text-to-speech", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Wei Ping", "Kainan Peng", "Andrew Gibiansky", "Sercan O Arik", "Ajay Kannan", "Sharan Narang", "Jonathan Raiman", "John Miller" ], "title": "Deep Voice 3: Scaling text-to-speech with convolutional sequence learning", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Wei Ping", "Kainan Peng", "Jitong Chen" ], "title": "ClariNet: Parallel wave generation in end-to-end text-tospeech", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Wei Ping", "Kainan Peng", "Kexin Zhao", "Zhao Song" ], "title": "WaveFlow: A compact flow-based model for raw audio", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "WaveGlow: A flow-based generative network for speech synthesis", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Yi Ren", "Yangjun Ruan", "Xu Tan", "Tao Qin", "Sheng Zhao", "Zhou Zhao", "Tie-Yan Liu" ], "title": 
"Fastspeech: Fast, robust and controllable text to speech", "venue": null, "year": 1905 }, { "authors": [ "Dario Rethage", "Jordi Pons", "Xavier Serra" ], "title": "A wavenet for speech denoising", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Flávio Ribeiro", "Dinei Florêncio", "Cha Zhang", "Michael Seltzer" ], "title": "CrowdMOS: An approach for crowdsourcing mean opinion score studies", "venue": "In ICASSP,", "year": 2011 }, { "authors": [ "Eitan Richardson", "Yair Weiss" ], "title": "On gans and gmms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jonathan Shen", "Ruoming Pang", "Ron J Weiss", "Mike Schuster", "Navdeep Jaitly", "Zongheng Yang", "Zhifeng Chen", "Yu Zhang", "Yuxuan Wang", "RJ Skerry-Ryan" ], "title": "Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions", "venue": null, "year": 2018 }, { "authors": [ "Jascha Sohl-Dickstein", "Eric A Weiss", "Niru Maheswaranathan", "Surya Ganguli" ], "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "venue": "arXiv preprint arXiv:1503.03585,", "year": 2015 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Improved techniques for training score-based generative models", "venue": "arXiv preprint arXiv:2006.09011,", "year": 2020 }, { "authors": [ "Jose Sotelo", "Soroush Mehri", "Kundan Kumar", "Joao Felipe Santos", "Kyle Kastner", "Aaron Courville", "Yoshua Bengio" ], "title": "Char2wav: End-to-end speech synthesis", "venue": "ICLR workshop,", "year": 2017 }, { "authors": [ "Yaniv Taigman", "Lior Wolf", "Adam Polyak", "Eliya Nachmani" ], "title": "VoiceLoop: Voice fitting and synthesis via a phonological loop", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Rafael Valle", "Kevin Shih", "Ryan Prenger", "Bryan Catanzaro" ], "title": "Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis", "venue": "arXiv preprint arXiv:2005.05957,", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "WaveNet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Igor Babuschkin", "Karen Simonyan", "Oriol Vinyals", "Koray Kavukcuoglu", "George van den Driessche", "Edward Lockhart", "Luis C Cobo", "Florian Stimberg" ], "title": "Parallel WaveNet: Fast high-fidelity speech synthesis", "venue": null, "year": 2018 }, { "authors": [ "Sean Vasquez", "Mike Lewis" ], "title": "Melnet: A generative model for audio in the frequency domain", "venue": "arXiv preprint arXiv:1906.01083,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", 
"Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xin Wang", "Shinji Takaki", "Junichi Yamagishi" ], "title": "Neural source-filter-based waveform model for statistical parametric speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Yuxuan Wang", "RJ Skerry-Ryan", "Daisy Stanton", "Yonghui Wu", "Ron J Weiss", "Navdeep Jaitly", "Zongheng Yang", "Ying Xiao", "Zhifeng Chen", "Samy Bengio", "Quoc Le", "Yannis Agiomyrgiannakis", "Rob Clark", "Rif A. Saurous" ], "title": "Tacotron: Towards end-to-end speech synthesis", "venue": null, "year": 2017 }, { "authors": [ "Pete Warden" ], "title": "Speech commands: A dataset for limited-vocabulary speech recognition", "venue": "arXiv preprint arXiv:1804.03209,", "year": 2018 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yuan Xu", "Erdene-Ochir Tuguldur" ], "title": "Convolutional neural networks for Google speech commands data set with PyTorch, 2017. https://github.com/tugstugi/ pytorch-speech-commands", "venue": null, "year": 2017 }, { "authors": [ "Ryuichi Yamamoto", "Eunwoo Song", "Jae-Min Kim" ], "title": "Parallel wavegan: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Zhiming Zhou", "Han Cai", "Shu Rong", "Yuxuan Song", "Kan Ren", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Activation maximization generative adversarial nets", "venue": null, "year": 2000 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep generative models have produced high-fidelity raw audio in speech synthesis and music generation. In previous work, likelihood-based models, including autoregressive models (van den Oord et al., 2016; Kalchbrenner et al., 2018; Mehri et al., 2017) and flow-based models (Prenger et al., 2019; Ping et al., 2020; Kim et al., 2019), have predominated in audio synthesis because of the simple training objective and superior ability of modeling the fine details of waveform in real data. There are other waveform models, which often require auxiliary losses for training, such as flow-based models trained by distillation (van den Oord et al., 2018; Ping et al., 2019), variational auto-encoder (VAE) based model (Peng et al., 2020), and generative adversarial network (GAN) based models (Kumar et al., 2019; Bińkowski et al., 2020; Yamamoto et al., 2020).\nMost of previous waveform models focus on audio synthesis with informative local conditioner (e.g., mel spectrogram or aligned linguistic features), with only a few exceptions for unconditional generation (Mehri et al., 2017; Donahue et al., 2019). It has been noticed that autoregressive models (e.g., WaveNet) tend to generate made-up word-like sounds (van den Oord et al., 2016), or inferior samples (Donahue et al., 2019) under unconditional settings. This is because very long sequences need to be generated (e.g., 16,000 time-steps for one second speech) without any conditional information.\nDiffusion probabilistic models (diffusion models for brevity) are a class of promising generative models, which use a Markov chain to gradually convert a simple distribution (e.g., isotropic Gaussian) into complicated data distribution (Sohl-Dickstein et al., 2015; Goyal et al., 2017; Ho et al., 2020). Although the data likelihood is intractable, diffusion models can be efficiently trained by optimizing the variational lower bound (ELBO). Most recently, a certain parameterization has been shown successful in image synthesis (Ho et al., 2020), which is connected with denoising score matching (Song\n∗Contributed to the work during an internship at Baidu Research, USA. 1Audio samples are in: https://diffwave-demo.github.io/\n& Ermon, 2019). Diffusion models can use a diffusion (noise-adding) process without learnable parameters to obtain the “whitened” latents from training data. Therefore, no additional neural networks are required for training in contrast to other models (e.g., the encoder in VAE (Kingma & Welling, 2014) or the discriminator in GAN (Goodfellow et al., 2014)). This avoids the challenging “posterior collapse” or “mode collapse” issues stemming from the joint training of two networks, and hence is valuable for high-fidelity audio synthesis.\nIn this work, we propose DiffWave, a versatile diffusion probabilistic model for raw audio synthesis. DiffWave has several advantages over previous work: i) It is non-autoregressive thus can synthesize high-dimensional waveform in parallel. ii) It is flexible as it does not impose any architectural constraints in contrast to flow-based models, which need to keep the bijection between latents and data (e.g., see more analysis in Ping et al. (2020)). This leads to small-footprint neural vocoders that still generate high-fidelity speech. iii) It uses a single ELBO-based training objective without any auxiliary losses (e.g., spectrogram-based losses) for high-fidelity synthesis. 
iv) It is a versatile model that produces high-quality audio signals for both conditional and unconditional waveform generation.

Specifically, we make the following contributions:

1. DiffWave uses a feed-forward and bidirectional dilated convolution architecture motivated by WaveNet (van den Oord et al., 2016). It matches the strong WaveNet vocoder in terms of speech quality (MOS: 4.44 vs. 4.43), while synthesizing orders of magnitude faster as it only requires a few sequential steps (e.g., 6) for generating very long waveforms.

2. Our small DiffWave has 2.64M parameters and synthesizes 22.05 kHz high-fidelity speech (MOS: 4.37) more than 5× faster than real-time on a V100 GPU without engineered kernels. Although it is still slower than the state-of-the-art flow-based models (Ping et al., 2020; Prenger et al., 2019), it has a much smaller footprint. We expect further speed-up by optimizing its inference mechanism in the future.

3. DiffWave significantly outperforms WaveGAN (Donahue et al., 2019) and WaveNet in the challenging unconditional and class-conditional waveform generation tasks in terms of audio quality and sample diversity, as measured by several automatic and human evaluations.

We organize the rest of the paper as follows. We present diffusion models in Section 2, and introduce the DiffWave architecture in Section 3. Section 4 discusses related work. We report experimental results in Section 5 and conclude the paper in Section 6." }, { "heading": "2 DIFFUSION PROBABILISTIC MODELS", "text": "We define qdata(x0) as the data distribution on RL, where L is the data dimension. Let xt ∈ RL for t = 0, 1, · · · , T be a sequence of variables with the same dimension, where t is the index for diffusion steps. Then, a diffusion model of T steps is composed of two processes: the diffusion process, and the reverse process (Sohl-Dickstein et al., 2015). Both of them are illustrated in Figure 1.

Algorithm 1 Training
for i = 1, 2, · · · , Niter do
  Sample x0 ∼ qdata, ε ∼ N(0, I), and t ∼ Uniform({1, · · · , T})
  Take a gradient step on ∇θ ‖ε − εθ(√ᾱt x0 + √(1 − ᾱt) ε, t)‖₂² according to Eq. (7)
end for

Algorithm 2 Sampling
Sample xT ∼ platent = N(0, I)
for t = T, T − 1, · · · , 1 do
  Compute µθ(xt, t) and σθ(xt, t) using Eq. (5)
  Sample xt−1 ∼ pθ(xt−1|xt) = N(xt−1; µθ(xt, t), σθ(xt, t)²I)
end for
return x0

The diffusion process is defined by a fixed Markov chain from data x0 to the latent variable xT:

q(x_1, \cdots, x_T \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}),    (1)

where each q(xt|xt−1) is fixed to N(xt; √(1 − βt) xt−1, βt I) for a small positive constant βt. The function of q(xt|xt−1) is to add small Gaussian noise to the distribution of xt−1. The whole process gradually converts data x0 to whitened latents xT according to a variance schedule β1, · · · , βT (see footnote 2).

The reverse process is defined by a Markov chain from xT to x0 parameterized by θ:

p_{\mathrm{latent}}(x_T) = \mathcal{N}(0, I), \quad \text{and} \quad p_\theta(x_0, \cdots, x_{T-1} \mid x_T) = \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t),    (2)

where platent(xT) is isotropic Gaussian, and the transition probability pθ(xt−1|xt) is parameterized as N(xt−1; µθ(xt, t), σθ(xt, t)²I) with shared parameter θ. Note that both µθ and σθ take two inputs: the diffusion-step t ∈ N, and the variable xt ∈ RL. µθ outputs an L-dimensional vector as the mean, and σθ outputs a real number as the standard deviation. The goal of pθ(xt−1|xt) is to eliminate the Gaussian noise (i.e., denoise) added in the diffusion process.

Sampling: Given the reverse process, the generative procedure is to first sample xT ∼ N(0, I), and then sample xt−1 ∼ pθ(xt−1|xt) for t = T, T − 1, · · · , 1. The output x0 is the sampled data.

Training: The likelihood pθ(x0) = ∫ pθ(x0, · · · , xT−1|xT) · platent(xT) dx1:T is intractable to compute in general. The model is thus trained by maximizing its variational lower bound (ELBO):

\mathbb{E}_{q_{\mathrm{data}}(x_0)} \log p_\theta(x_0) = \mathbb{E}_{q_{\mathrm{data}}(x_0)} \log \mathbb{E}_{q(x_1, \cdots, x_T \mid x_0)} \left[ \frac{p_\theta(x_0, \cdots, x_{T-1} \mid x_T) \times p_{\mathrm{latent}}(x_T)}{q(x_1, \cdots, x_T \mid x_0)} \right] \geq \mathbb{E}_{q(x_0, \cdots, x_T)} \log \frac{p_\theta(x_0, \cdots, x_{T-1} \mid x_T) \times p_{\mathrm{latent}}(x_T)}{q(x_1, \cdots, x_T \mid x_0)} := \mathrm{ELBO}.    (3)

Most recently, Ho et al. (2020) showed that under a certain parameterization, the ELBO of the diffusion model can be calculated in closed form. This accelerates the computation and avoids Monte Carlo estimates, which have high variance. This parameterization is motivated by its connection to denoising score matching with Langevin dynamics (Song & Ermon, 2019; 2020). To introduce this parameterization, we first define some constants based on the variance schedule {βt}, t = 1, · · · , T, in the diffusion process as in Ho et al. (2020):

\alpha_t = 1 - \beta_t, \quad \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s, \quad \tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\, \beta_t \ \text{ for } t > 1, \ \text{ and } \ \tilde{\beta}_1 = \beta_1.    (4)

Then, the parameterizations of µθ and σθ are defined by

\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right), \quad \text{and} \quad \sigma_\theta(x_t, t) = \tilde{\beta}_t^{\frac{1}{2}},    (5)

where εθ : RL × N → RL is a neural network also taking xt and the diffusion-step t as inputs. Note that σθ(xt, t) is fixed to the constant β̃t^(1/2) for every step t under this parameterization. In the following proposition, we explicitly provide the closed-form expression of the ELBO.

2 One can find that q(xT|x0) approaches an isotropic Gaussian for large T; see Eq. (11) in Appendix A.

Proposition 1. (Ho et al., 2020) Suppose a fixed schedule β1, · · · , βT is given. Let ε ∼ N(0, I) and x0 ∼ qdata. Then, under the parameterization in Eq. (5), we have

-\mathrm{ELBO} = c + \sum_{t=1}^{T} \kappa_t\, \mathbb{E}_{x_0, \epsilon}\, \| \epsilon - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\ t) \|_2^2    (6)

for some constants c and κt, where \kappa_t = \frac{\beta_t}{2 \alpha_t (1 - \bar{\alpha}_{t-1})} for t > 1, and \kappa_1 = \frac{1}{2 \alpha_1}.

Note that c is irrelevant for optimization purposes. The key idea in the proof is to expand the ELBO into a sum of KL divergences between tractable Gaussian distributions, which have a closed-form expression. We refer the reader to Section A in the Appendix for the full proof.

In addition, Ho et al. (2020) reported that minimizing the following unweighted variant of the ELBO leads to higher generation quality:

\min_\theta L_{\mathrm{unweighted}}(\theta) = \mathbb{E}_{x_0, \epsilon, t}\, \| \epsilon - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\ t) \|_2^2    (7)

where t is uniformly sampled from 1, · · · , T. Therefore, we also use this training objective in this paper. We summarize the training and sampling procedures in Algorithms 1 and 2, respectively.

Fast sampling: Given a trained model from Algorithm 1, we noticed that the most effective denoising steps at sampling occur near t = 0 (see Section IV on the demo website). This encourages us to design a fast sampling algorithm with many fewer denoising steps Tinfer (e.g., 6) than the T used at training (e.g., 200). The key idea is to "collapse" the T-step reverse process into a Tinfer-step process with a carefully designed variance schedule. We provide the details in Appendix B." }, { "heading": "3 DIFFWAVE ARCHITECTURE", "text": "In this section, we present the architecture of DiffWave (see Figure 2 for an illustration). We build the network εθ : RL × N → RL in Eq.
(5) based on a bidirectional dilated convolution architecture that is different from WaveNet (van den Oord et al., 2016), because there is no autoregressive generation constraint. 3 The similar architecture has been applied for source separation (Rethage et al., 2018; Lluís et al., 2018). The network is non-autoregressive, so generating an audio x0 with length L from latents xT requires T rounds of forward propagation, where T (e.g., 50) is much smaller than the waveform length L. The network is composed of a stack of N residual layers with residual channels\n3Indeed, we found the causal dilated convolution architecture leads to worse audio quality in DiffWave.\nC. These layers are grouped into m blocks and each block has n = Nm layers. We use a bidirectional dilated convolution (Bi-DilConv) with kernel size 3 in each layer. The dilation is doubled at each layer within each block, i.e., [1, 2, 4, · · · , 2n−1]. We sum the skip connections from all residual layers as in WaveNet. More details including the tensor shapes are included in Section C in the Appendix." }, { "heading": "3.1 DIFFUSION-STEP EMBEDDING", "text": "It is important to include the diffusion-step t as part of the input, as the model needs to output different θ(·, t) for different t. We use an 128-dimensional encoding vector for each t (Vaswani et al., 2017):\ntembedding = [ sin ( 10 0×4 63 t ) , · · · , sin ( 10 63×4 63 t ) , cos ( 10 0×4 63 t ) , · · · , cos ( 10 63×4 63 t )] (8)\nWe then apply three fully connected (FC) layers on the encoding, where the first two FCs share parameters among all residual layers. The last residual-layer-specific FC maps the output of the second FC into a C-dimensional embedding vector. We next broadcast this embedding vector over length and add it to the input of every residual layer." }, { "heading": "3.2 CONDITIONAL GENERATION", "text": "Local conditioner: In speech synthesis, a neural vocoder can synthesize the waveform conditioned on the aligned linguistic features (van den Oord et al., 2016; Arık et al., 2017b), the mel spectrogram from a text-to-spectrogram model (Ping et al., 2018; Shen et al., 2018), or the hidden states within the text-to-wave architecture (Ping et al., 2019; Donahue et al., 2020). In this work, we test DiffWave as a neural vocoder conditioned on mel spectrogram. We first upsample the mel spectrogram to the same length as waveform through transposed 2-D convolutions. After a layer-specific Conv1×1 mapping its mel-band into 2C channels, the conditioner is added as a bias term for the dilated convolution in each residual layer. The hyperparameters can be found in Section 5.1.\nGlobal conditioner: In many generative tasks, the conditional information is given by global discrete labels (e.g., speaker IDs or word IDs). We use shared embeddings with dimension dlabel = 128 in all experiments. In each residual layer, we apply a layer-specific Conv1×1 to map dlabel to 2C channels, and add the embedding as a bias term after the dilated convolution in each residual layer." }, { "heading": "3.3 UNCONDITIONAL GENERATION", "text": "In unconditional generation task, the model needs to generate consistent utterances without conditional information. It is important for the output units of the network to have a receptive field size (denoted as r) larger than the length L of the utterance. Indeed, we need r ≥ 2L, thus the left and right-most output units have receptive fields covering the whole L-dimensional inputs as illustrated in Figure 4 in Appendix. 
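For concreteness, a minimal PyTorch sketch of the 128-dimensional sinusoidal encoding in Eq. (8) from Section 3.1 is given below; the function name and the batched interface are assumptions, and the three fully connected layers applied on top of the encoding are omitted.

```python
import torch

def diffusion_step_embedding(t, dim=128):
    """Sinusoidal embedding of the diffusion step t (Eq. 8): 64 sin and 64 cos channels."""
    half = dim // 2                                  # 64
    j = torch.arange(half, dtype=torch.float32)      # j = 0, ..., 63
    scales = 10.0 ** (j * 4.0 / (half - 1))          # 10^(j*4/63), spanning 1 ... 10^4
    args = t.float().unsqueeze(-1) * scales          # shape (batch, 64)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)  # shape (batch, 128)

# Example: embeddings for diffusion steps 1, 10, and 50.
emb = diffusion_step_embedding(torch.tensor([1, 10, 50]))
print(emb.shape)  # torch.Size([3, 128])
```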
This posts a challenge for architecture design even with the dilated convolutions.\nFor a stack of dilated convolution layers, the receptive field size of the output is up to: r = (k − 1) ∑ i di + 1, where k is the kernel size and di is the dilation at i-th residual layer. For example, 30-layer dilated convolution has a receptive field size r = 6139, with k = 3 and dilation cycle [1, 2, · · · , 512]. This only amounts to 0.38s of 16kHz audio. We can further increase the number of layers and the size of dilation cycles; however, we found degraded quality with deeper layers and larger dilation cycles. This is particularly true for WaveNet. In fact, previous study (Shen et al., 2018) suggests that even a moderate large receptive field size (e.g., 6139) is not effectively used in WaveNet and it tends to focus on much shorter context (e.g., 500). DiffWave has an advantage in enlarging the receptive fields of output x0: by iterating from xT to x0 in the reverse process, the receptive field size can be increased up to T × r, which makes DiffWave suitable for unconditional generation." }, { "heading": "4 RELATED WORK", "text": "In the past years, many neural text-to-speech (TTS) systems have been introduced. An incomplete list includes WaveNet (van den Oord et al., 2016), Deep Voice 1 & 2 & 3 (Arık et al., 2017a;b; Ping et al., 2018), Tacotron 1 & 2 (Wang et al., 2017; Shen et al., 2018), Char2Wav (Sotelo et al., 2017), VoiceLoop (Taigman et al., 2018), Parallel WaveNet (van den Oord et al., 2018), WaveRNN (Kalchbrenner et al., 2018), ClariNet (Ping et al., 2019), ParaNet (Peng et al., 2020), FastSpeech (Ren et al., 2019), GAN-TTS (Bińkowski et al., 2020), and Flowtron (Valle et al., 2020). These systems first\ngenerate intermediate representations (e.g., aligned linguistic features, mel spectrogram, or hidden representations) conditioned on text, then use a neural vocoder to synthesize the raw waveform.\nNeural vocoder plays the most important role in the recent success of speech synthesis. Autoregressive models like WaveNet and WaveRNN can generate high-fidelity speech, but in a sequential way of generation. Parallel WaveNet and ClariNet distill parallel flow-based models from WaveNet, thus can synthesize waveform in parallel. In contrast, WaveFlow (Ping et al., 2020), WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) are trained by maximizing likelihood. There are other waveform models, such as VAE-based models (Peng et al., 2020), GAN-based models (Kumar et al., 2019; Yamamoto et al., 2020; Bińkowski et al., 2020), and neural signal processing models (Wang et al., 2019; Engel et al., 2020; Ai & Ling, 2020). In contrast to likelihood-based models, they often require auxiliary training losses to improve the audio fidelity. The proposed DiffWave is another promising neural vocoder synthesizing the best quality of speech with a single objective function.\nUnconditional generation of audio in the time domain is a challenging task in general. Likelihoodbased models are forced to learn all possible variations within the dataset without any conditional information, which can be quite difficult with limited model capacity. In practice, these models produce made-up word-like sounds or inferior samples (van den Oord et al., 2016; Donahue et al., 2019). VQ-VAE (van den Oord et al., 2017) circumvents this issue by compressing the waveform into compact latent code, and training an autoregressive model in latent domain. 
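Returning to the receptive-field calculation in Section 3.3, the quoted numbers can be checked with a short sketch that evaluates r = (k − 1) Σᵢ dᵢ + 1 for a repeated dilation cycle; the helper name and interface are illustrative.

```python
def receptive_field(num_layers, kernel_size=3,
                    cycle=(1, 2, 4, 8, 16, 32, 64, 128, 256, 512)):
    """r = (k - 1) * sum(dilations) + 1 for a stack of dilated convolutions."""
    dilations = [cycle[i % len(cycle)] for i in range(num_layers)]
    return (kernel_size - 1) * sum(dilations) + 1

print(receptive_field(30))           # 6139 samples
print(receptive_field(30) / 16000)   # ~0.38 s of 16 kHz audio
```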
GAN-based models are believed to be suitable for unconditional generation (e.g., Donahue et al., 2019) due to the “mode seeking” behaviour and success in image domain (Brock et al., 2018). Note that unconditional generation of audio in the frequency domain is considered easier, as the spectrogram is much shorter (e.g., 200×) than waveform (Vasquez & Lewis, 2019; Engel et al., 2019; Palkama et al., 2020). In this work, we demonstrate the superior performance of DiffWave in unconditional generation of waveform. In contrast to the exact-likelihood models, DiffWave maximizes a variational lower bound of the likelihood, which can focus on the major variations within the data and alleviate the requirements for model capacity. In contrast to GAN or VAE-based models (Donahue et al., 2019; Peng et al., 2020), it is much easier to train without mode collapse, posterior collapse, or training instability stemming from the joint training of two networks. There is a concurrent work (Chen et al., 2020) that uses diffusion probabilistic models for waveform generation. In contrast to DiffWave, it uses a neural architecture similar to GAN-TTS and focuses on the neural vocoding task only. Our DiffWave vocoder has much fewer parameters than WaveGrad – 2.64M vs. 15M for Base models and 6.91M vs. 23M for Large models. The small memory footprint is preferred in production TTS systems, especially for on-device deployment. In addition, DiffWave requires a smaller batch size (16 vs. 256) and fewer computational resources for training." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate DiffWave on neural vocoding, unconditional and class-conditional generation tasks." }, { "heading": "5.1 NEURAL VOCODING", "text": "Data: We use the LJ speech dataset (Ito, 2017) that contains ∼24 hours of audio recorded in home environment with a sampling rate of 22.05 kHz. It contains 13,100 utterances from a female speaker.\nModels: We compare DiffWave with several state-of-the-art neural vocoders, including WaveNet, ClariNet, WaveGlow and WaveFlow. Details of baseline models can be found in the original papers. Their hyperparameters can be found in Table 1. Our DiffWave models have 30 residual layers, kernel size 3, and dilation cycle [1, 2, · · · , 512]. We compare DiffWave models with different number of diffusion steps T ∈ {20, 40, 50, 200} and residual channels C ∈ {64, 128}. We use linear spaced schedule for βt ∈ [1 × 10−4, 0.02] for DiffWave with T = 200, and βt ∈ [1 × 10−4, 0.05] for DiffWave with T ≤ 50. The reason to increase βt for smaller T is to make q(xT |x0) close to platent. In addition, we compare the fast sampling algorithm with smaller Tinfer (see Appendix B), denoted as DiffWave (Fast), with the regular sampling (Algorithm 2). Both of them use the same trained models.\nConditioner: We use the 80-band mel spectrogram of the original audio as the conditioner to test these neural vocoders as in previous work (Ping et al., 2019; Prenger et al., 2019; Kim et al., 2019). We set FFT size to 1024, hop size to 256, and window size to 1024. We upsample the mel spectrogram 256 times by applying two layers of transposed 2-D convolution (in time and frequency) interleaved\nwith leaky ReLU (α = 0.4). For each layer, the upsamling stride in time is 16 and 2-D filter sizes are [32, 3]. 
After upsampling, we use a layer-specific Conv1×1 to map the 80 mel bands into 2× residual channels, then add the conditioner as a bias term for the dilated convolution before the gated-tanh nonlinearities in each residual layer.\nTraining: We train DiffWave on 8 Nvidia 2080Ti GPUs using random short audio clips of 16,000 samples from each utterance. We use Adam optimizer (Kingma & Ba, 2015) with a batch size of 16 and learning rate 2× 10−4. We train all DiffWave models for 1M steps. For other models, we follow the training setups as in the original papers.\nResults: We use the crowdMOS tookit (Ribeiro et al., 2011) for speech quality evaluation, where the test utterances from all models were presented to Mechanical Turk workers. We report the 5-scale Mean Opinion Scores (MOS), and model footprints in Table 1 4. Our DiffWave LARGE model with residual channels 128 matches the strong WaveNet vocoder in terms of speech quality (MOS: 4.44 vs. 4.43). The DiffWave BASE with residual channels 64 also generates high quality speech (e.g., MOS: 4.35) even with small number of diffusion steps (e.g., T = 40 or 20). For synthesis speed, DiffWave BASE (T = 20) in FP32 generates audio 2.1× faster than real-time, and DiffWave BASE (T = 40) in FP32 is 1.1× faster than real-time on a Nvidia V100 GPU without engineering optimization. Meanwhile, DiffWave BASE (Fast) and DiffWave LARGE (Fast) can be 5.6× and 3.5× faster than realtime respectively and still obtain good audio fidelity. In contrast, a WaveNet implementation can be 500× slower than real-time at synthesis without engineered kernels. DiffWave is still slower than the state-of-the-art flow-based models (e.g., a 5.91M WaveFlow is > 40× faster than real-time in FP16), but has smaller footprint and slightly better quality. Because DiffWave does not impose any architectural constraints as in flow-based models, we expect further speed-up by optimizing the architecture and inference mechanism in the future." }, { "heading": "5.2 UNCONDITIONAL GENERATION", "text": "In this section, we apply DiffWave to an unconditional generation task based on raw waveform only.\nData: We use the Speech Commands dataset (Warden, 2018), which contains many spoken words by thousands of speakers under various recording conditions including some very noisy environment. We select the subset that contains spoken digits (0∼9), which we call the SC09 dataset. The SC09 dataset contains 31,158 training utterances (∼8.7 hours in total) by 2,032 speakers, where each audio has length equal to one second under sampling rate 16kHz. Therefore, the data dimension L is 16,000. Note that the SC09 dataset exhibits various variations (e.g., contents, speakers, speech rate, recording conditions); the generative models need to model them without any conditional information.\nModels: We compare DiffWave with WaveNet and WaveGAN. We also tried to remove the mel conditioner in a state-of-the-art GAN-based neural vocoder (Yamamoto et al., 2020), but found it could\n4The MOS evaluation for DiffWave(Fast) with Tinfer = 6 was done after paper submission and may not be directly comparable to previous scores.\nnot generate intelligible speech in this unconditional task. We use 30 layer-WaveNet models with residual channels 128 (denoted as WaveNet-128) and 256 (denoted as WaveNet-256), respectively. We tried to increase the size of the dilation cycle and the number of layers, but these modifications lead to worse quality. In particular, a large dilation cycle (e.g., up to 2048) leads to unstable training. 
For WaveGAN, we use their pretrained model on Google Colab. We use a 36-layer DiffWave model with kernel size 3 and dilation cycle [1, 2, · · · , 2048]. We set the number of diffusion steps T = 200 and residual channels C = 256. We use linear spaced schedule for βt ∈ [1× 10−4, 0.02]. Training: We train WaveNet and DiffWave on 8 Nvidia 2080Ti GPUs using full utterances. We use Adam optimizer with a batch size of 16. For WaveNet, we set the initial learning rate as 1× 10−3 and halve the learning rate every 200K iterations. For DiffWave, we fix the learning rate to 2× 10−4. We train WaveNet and DiffWave for 1M steps.\nEvaluation: For human evaluation, we report the 5-scale MOS for speech quality similar to Section 5.1. To automatically evaluate the quality of generated audio samples, we train a ResNeXT classifier (Xie et al., 2017) on the SC09 dataset according to an open repository (Xu & Tuguldur, 2017). The classifier achieves 99.06% accuracy on the trainset and 98.76% accuracy on the testset. We use the following evaluation metrics based on the 1024-dimensional feature vector and the 10-dimensional logits from the ResNeXT classifier (see Section D in the Appendix for the detailed definitions):\n• Fréchet Inception Distance (FID) (Heusel et al., 2017) measures both quality and diversity of generated samples, and favors generators that match moments in the feature space.\n• Inception Score (IS) (Salimans et al., 2016) measures both quality and diversity of generated samples, and favors generated samples that can be clearly determined by the classifier.\n• Modified Inception Score (mIS) (Gurumurthy et al., 2017) measures the within-class diversity of samples in addition to IS.\n• AM Score (Zhou et al., 2017) takes into consideration the marginal label distribution of training data compared to IS.\n• Number of Statistically-Different Bins (NDB) (Richardson & Weiss, 2018) measures diversity of generated samples.\nResults: We randomly generate 1,000 audio samples from each model for evaluation. We report results in Table 2. Our DiffWave model outperforms baseline models under all metrics, including both automatic and human evaluation. Notably, the quality of audio samples generated by DiffWave is much higher than WaveNet and WaveGAN baselines (MOS: 3.39 vs. 1.43 and 2.03). Note that the quality of ground-truth audio exhibits large variations. The automatic evaluation metrics also indicate that DiffWave is better at quality, diversity, and matching marginal label distribution of training data." }, { "heading": "5.3 CLASS-CONDITIONAL GENERATION", "text": "In this section, we provide the digit labels as the conditioner in DiffWave and compare our model to WaveNet. We omit the comparison with conditional WaveGAN due to its noisy output audio (Lee et al., 2018). For both DiffWave and WaveNet, the label conditioner is added to the model according to Section 3.2. We use the same dataset, model hyperparameters, and training settings as in Section 5.2.\nEvaluation: We use slightly different automatic evaluation methods in this section because audio samples are generated according to pre-specified discrete labels. The AM score and NDB are removed because they are less meaningful when the prior label distribution of generated data is specified. We keep IS and mIS because IS favors sharp, clear samples and mIS measures within-class diversity. 
We modify FID to FID-class: for each digit from 0 to 9, we compute FID between the generated audio samples that are pre-specified as this digit and training utterances with the same digit labels, and report the mean and standard deviation of these ten FID scores. We also report classification accuracy based on the ResNeXT classifier used in Section 5.2.\nResults: We randomly generate 100 audio samples for each digit (0 to 9) from all models for evaluation. We report results in Table 3. Our DiffWave model significantly outperforms WaveNet on all evaluation metrics. It produces superior quality than WaveNet (MOS: 3.50 vs. 1.58), and greatly decreases the gap to ground-truth (the gap between DiffWave and ground-truth is ∼10% of the gap between WaveNet and ground-truth). The automatic evaluation metrics indicate that DiffWave is much better at speech clarity (> 91% accuracy) and within-class diversity (its mIS is 6× higher than WaveNet). We additionally found a deep and thin version of DiffWave with residual channels C = 128 and 48 residual layers can achieve slightly better accuracy but lower audio quality. One may also compare quality of generated audio samples between conditional and unconditional generation based on IS, mIS, and MOS. For both WaveNet and DiffWave, IS increases by >20%, mIS almost doubles, and MOS increases by ≥ 0.11. These results indicate that the digit labels reduces the difficulty of the generative task and helps improving the generation quality of WaveNet and DiffWave." }, { "heading": "5.4 ADDITIONAL RESULTS", "text": "Zero-shot speech denoising: The unconditional DiffWave model can readily perform speech denoising. The SC09 dataset provides six types of noises for data augmentation in recognition tasks: white noise, pink noise, running tap, exercise bike, dude miaowing, and doing the dishes. These noises are not used during the training phase of our unconditional DiffWave in Section 5.2. We add 10% of each type of noise to test data, feed these noisy utterances into the reverse process at t = 25, and then obtain the outputs x0’s. The audio samples are in Section V on the demo website. Note that our model is not trained on a denoising task and has zero knowledge about any noise type other than the white noise added in diffusion process. It indicates DiffWave learns a good prior of raw audio.\nInterpolation in latent space: We can do interpolation with the digit conditioned DiffWave model in Section 5.3 on the SC09 dataset. The interpolation of voices xa0 , x b 0 between two speakers a, b is done in the latent space at t = 50. We first sample xat ∼ q(xt|xa0) and xbt ∼ q(xt|xb0) for the two speakers. We then do linear interpolation between xat and x b t : x λ t = (1− λ)xat + λxbt for 0 < λ < 1. Finally, we sample xλ0 ∼ pθ(xλ0 |xλt ). The audio samples are in Section VI on the demo website." }, { "heading": "6 CONCLUSION", "text": "In this paper, we present DiffWave, a versatile generative model for raw waveform. In the neural vocoding task, it readily models the fine details of waveform conditioned on mel spectrogram and matches the strong autoregressive neural vocoder in terms of speech quality. In unconditional and class-conditional generation tasks, it properly captures the large variations within the data and produces realistic voices and consistent word-level pronunciations. To the best of our knowledge, DiffWave is the first waveform model that exhibits such versatility. 
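As a concrete illustration of the latent-space interpolation described in Section 5.4, a minimal sketch follows; the closed form of q(xt|x0) and the `reverse_from` helper are assumptions based on the standard diffusion parameterization, not the exact implementation used for the demo.

```python
import numpy as np

def interpolate_voices(x0_a, x0_b, lam, t, alpha_bar, reverse_from):
    """Latent-space interpolation between two utterances, following Section 5.4.
    Assumes the standard forward process q(x_t | x_0) = N(sqrt(abar_t) x_0,
    (1 - abar_t) I); `reverse_from(x_t, t)` is a placeholder that runs the
    trained reverse process from step t down to 0."""
    abar = alpha_bar[t]                                        # t = 50 in the paper
    noise = lambda x: np.random.randn(*x.shape)
    xt_a = np.sqrt(abar) * x0_a + np.sqrt(1.0 - abar) * noise(x0_a)
    xt_b = np.sqrt(abar) * x0_b + np.sqrt(1.0 - abar) * noise(x0_b)
    xt_lam = (1.0 - lam) * xt_a + lam * xt_b                   # linear interpolation
    return reverse_from(xt_lam, t)                             # x_0 ~ p_theta(x_0 | x_t)
```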
DiffWave raises a number of open problems and provides broad opportunities for future research. For example, it would be meaningful to push the model to generate longer utterances, as DiffWave potentially has very large receptive fields. Second, optimizing the inference speed would be beneficial for applying the model in production TTS, because DiffWave is still slower than flow-based models. We found the most effective denoising steps in the reverse process occur near x0, which suggests an even smaller T is possible in DiffWave. In addition, the model parameters θ are shared across the reverse process, so the persistent kernels that stash the parameters on-chip would largely speed-up inference on GPUs (Diamos et al., 2016)." }, { "heading": "B DETAILS OF THE FAST SAMPLING ALGORITHM", "text": "Let Tinfer T be the number of steps in the reverse process (sampling) and {ηt}Tinfert=1 be the userdefined variance schedule, which can be independent with the training variance schedule {βt}Tt=1. Then, we compute the corresponding constants in the same way as Eq. (4):\nγt = 1− ηt, γ̄t = t∏\ns=1\nγs, η̃t = 1− γ̄t−1 1− γ̄t ηt for t > 1 and η̃1 = η1. (13)\nAs step s during sampling, we need to select an t and use θ(·, t) to eliminate noise. This is realized by aligning the noise levels from the user-defined and the training variance schedules. Ideally, we want √ ᾱt = √ γ̄s. However, since this is not always possible, we interpolate √ γ̄s between two\nconsecutive training noise levels √ ᾱt+1 and √ ᾱt, if √ γ̄s is between them. We therefore obtain the desired aligned diffusion step t, which we denote taligns , via the following equation:\ntaligns = t+\n√ ᾱt − √ γ̄s√\nᾱt − √ ᾱt+1\nif √ γ̄s ∈ [ √ ᾱt+1, √ ᾱt ]. (14)\nNote that, taligns is floating-point number, which is different from the integer diffusion-step at training.\nFinally, the parameterizations of µθ and σθ are defined in a similar way as Eq. (5):\nµfastθ (xs, s) = 1 √ γs\n( xs −\nηs√ 1− γ̄s θ(xs, t align s )\n) , and σfastθ (xs, s) = η̃ 1 2 s . (15)\nThe fast sampling algorithm is summarized in Algorithm 3.\nAlgorithm 3 Fast Sampling Sample xTinfer ∼ platent = N (0, I) for s = Tinfer, Tinfer − 1, · · · , 1 do\nCompute µfastθ (xs, s) and σ fast θ (xs, s) using Eq. (15)\nSample xs−1 ∼ N (xs−1;µfastθ (xs, s), σfastθ (xs, s)2I) end for return x0\nIn neural vocoding task, we use user-defined variance schedules {0.0001, 0.001, 0.01, 0.05, 0.2, 0.7} for DiffWave LARGE and {0.0001, 0.001, 0.01, 0.05, 0.2, 0.5} for DiffWave BASE in Section 5.1. The fast sampling algorithm is similar to the sampling algorithm in Chen et al. (2020) in the sense of considering the noise levels as a controllable variable during sampling. However, the fast sampling algorithm for DiffWave does not need to modify the training procedure (Algorithm 1), and can just reuse the trained model checkpoint with large T ." }, { "heading": "C DETAILS OF THE MODEL ARCHITECTURE", "text": "" }, { "heading": "D DETAILS OF AUTOMATIC EVALUATION METRICS IN SECTION 5.2 AND 5.3", "text": "The automatic evaluation metrics used in Section 5.2 and 5.3 are described as follows. Given an input audio x, an 1024-dimensional feature vector (denoted as Ffeature(x)) is computed by the ResNeXT F , and is then transformed to the 10-dimensional multinomial distribution (denoted as pF (x)) with a fully connected layer and a softmax layer. Let Xtrain be the trainset, pgen be the distribution of generated data, and Xgen ∼ pgen(i.i.d.) be the set of generated audio samples. 
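Returning to Appendix B, the constants of Eq. (13), the aligned step of Eq. (14) and the update of Eq. (15) can be sketched as follows; this is an illustrative NumPy re-implementation with 0-based indexing, and `eps_theta` is a placeholder for the trained network θ.

```python
import numpy as np

def fast_sampling_constants(eta, alpha_bar):
    """Eq. (13) constants for a user-defined schedule `eta` (length T_infer) and
    Eq. (14) aligned steps w.r.t. the training noise levels `alpha_bar` (length T).
    Assumes every sqrt(gamma_bar) lies inside the training noise-level range."""
    gamma = 1.0 - eta
    gamma_bar = np.cumprod(gamma)
    eta_tilde = np.empty_like(eta)
    eta_tilde[0] = eta[0]
    eta_tilde[1:] = (1.0 - gamma_bar[:-1]) / (1.0 - gamma_bar[1:]) * eta[1:]

    sqrt_abar = np.sqrt(alpha_bar)                   # decreasing in t
    t_align = np.zeros_like(eta)
    for s, g in enumerate(np.sqrt(gamma_bar)):
        for t in range(len(alpha_bar) - 1):
            if sqrt_abar[t + 1] <= g <= sqrt_abar[t]:
                t_align[s] = t + (sqrt_abar[t] - g) / (sqrt_abar[t] - sqrt_abar[t + 1])
                break
    return gamma, gamma_bar, eta_tilde, t_align

def fast_sample(x, eps_theta, eta, gamma, gamma_bar, eta_tilde, t_align):
    """Algorithm 3: iterate s = T_infer, ..., 1 with the mu/sigma of Eq. (15)."""
    for s in reversed(range(len(eta))):
        mu = (x - eta[s] / np.sqrt(1.0 - gamma_bar[s]) * eps_theta(x, t_align[s])) / np.sqrt(gamma[s])
        x = mu + np.sqrt(eta_tilde[s]) * np.random.randn(*x.shape)
    return x
```

For example, the DiffWave BASE schedule quoted above corresponds to `eta = np.array([1e-4, 1e-3, 1e-2, 0.05, 0.2, 0.5])` with Tinfer = 6.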
Given these quantities, we compute the following automatic evaluation metrics:\n• Fréchet Inception Distance (FID) (Heusel et al., 2017) computes the Wasserstein-2 distance between Gaussians fitted to Ffeature(Xtrain) and Ffeature(Xgen). That is, $\mathrm{FID} = \|\mu_g - \mu_t\|^2 + \mathrm{Tr}\big(\Sigma_t + \Sigma_g - 2(\Sigma_t\Sigma_g)^{\frac{1}{2}}\big)$, where $\mu_t, \Sigma_t$ are the mean vector and covariance matrix of Ffeature(Xtrain), and $\mu_g, \Sigma_g$ are the mean vector and covariance matrix of Ffeature(Xgen).\n• Inception Score (IS) (Salimans et al., 2016) computes $\mathrm{IS} = \exp\big(\mathbb{E}_{x\sim p_{gen}}\,\mathrm{KL}\big(p_F(x)\,\|\,\mathbb{E}_{x'\sim p_{gen}} p_F(x')\big)\big)$, where $\mathbb{E}_{x'\sim p_{gen}} p_F(x')$ is the marginal label distribution.\n• Modified Inception Score (mIS) (Gurumurthy et al., 2017) computes $\mathrm{mIS} = \exp\big(\mathbb{E}_{x,x'\sim p_{gen}}\,\mathrm{KL}\big(p_F(x)\,\|\,p_F(x')\big)\big)$.\n• AM Score (Zhou et al., 2017) computes $\mathrm{AM} = \mathrm{KL}\big(\mathbb{E}_{x'\sim q_{data}} p_F(x')\,\|\,\mathbb{E}_{x\sim p_{gen}} p_F(x)\big) + \mathbb{E}_{x\sim p_{gen}} H(p_F(x))$, where H(·) computes the entropy. Compared to IS, the AM score takes into consideration the prior distribution of pF(Xtrain).\n• Number of Statistically-Different Bins (NDB) (Richardson & Weiss, 2018): First, Xtrain is clustered into K bins by K-Means in the feature space (where K = 50 in our evaluation). Next, each sample in Xgen is assigned to its nearest bin. Then, NDB is the number of bins that contain a statistically different proportion of samples between training samples and generated samples." } ]
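The two most commonly reported of these metrics can be computed directly from the classifier features and posteriors, as in the hedged sketch below (a standard NumPy/SciPy implementation, not the evaluation code used for the tables above):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_train, feat_gen):
    """FID between Gaussians fitted to the 1024-d ResNeXT features of the
    training set and of the generated set (see the definition above)."""
    mu_t, mu_g = feat_train.mean(0), feat_gen.mean(0)
    sigma_t = np.cov(feat_train, rowvar=False)
    sigma_g = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(sigma_t @ sigma_g).real
    return float(np.sum((mu_g - mu_t) ** 2) + np.trace(sigma_t + sigma_g - 2 * covmean))

def inception_score(probs, eps=1e-12):
    """IS from the 10-d classifier posteriors p_F(x) of the generated samples:
    exp( E_x KL(p_F(x) || marginal) )."""
    marginal = probs.mean(0, keepdims=True)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))
```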
2021
DIFFWAVE: A VERSATILE DIFFUSION MODEL FOR AUDIO SYNTHESIS
SP:efbb0e2e944f1d810a6f0b6bc71e636af9ae9c13
[ "The authors present a seq2seq model with a sparse transformer encoder and an LSTM decoder. They utilize a learning curriculum wherein the autoregressive decoder is initially trained using teacher forcing and is gradually fed its past predictions as training progresses. The authors introduce a new dataset for long term dance generation. They utilize both subjective and objective metrics to evaluate their method. The proposed method outperforms other baselines for dance generation. Finally they conduct ablation studies demonstrating the benefits of using a transformer encoder over other architectures, and the benefits of the proposed curriculum learning scheme." ]
Dancing to music is one of human’s innate abilities since ancient times. In machine learning research, however, synthesizing dance movements from music is a challenging problem. Recently, researchers synthesize human motion sequences through autoregressive models like recurrent neural network (RNN). Such an approach often generates short sequences due to an accumulation of prediction errors that are fed back into the neural network. This problem becomes even more severe in the long motion sequence generation. Besides, the consistency between dance and music in terms of style, rhythm and beat is yet to be taken into account during modeling. In this paper, we formalize the music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture to efficiently process long sequences of music features and capture the fine-grained correspondence between music and dance. Furthermore, we propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation, which gently changes the training process from a fully guided teacher-forcing scheme using the previous ground-truth movements, towards a less guided autoregressive scheme mostly using the generated movements instead. Extensive experiments show that our approach significantly outperforms the existing state-of-the-arts on automatic metrics and human evaluation. We also make a demo video to demonstrate the superior performance of our proposed approach at https://www.youtube.com/watch?v=lmE20MEheZ8.
[ { "affiliations": [], "name": "CURRICULUM LEARNING" }, { "affiliations": [], "name": "Ruozi Huang" }, { "affiliations": [], "name": "Huang Hu" }, { "affiliations": [], "name": "Wei Wu" }, { "affiliations": [], "name": "Kei Sawada" }, { "affiliations": [], "name": "Mi Zhang" }, { "affiliations": [], "name": "Daxin Jiang" } ]
[ { "authors": [ "Samy Bengio", "Oriol Vinyals", "Navdeep Jaitly", "Noam Shazeer" ], "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Zhe Cao", "Tomas Simon", "Shih-En Wei", "Yaser Sheikh" ], "title": "Realtime multi-person 2d pose estimation using part affinity fields", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Caroline Chan", "Shiry Ginosar", "Tinghui Zhou", "Alexei A Efros" ], "title": "Everybody dance now", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Joyce L Chen", "Virginia B Penhune", "Robert J Zatorre" ], "title": "Listening to musical rhythms recruits motor regions of the brain", "venue": "Cerebral cortex,", "year": 2008 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Hai Ci", "Chunyu Wang", "Xiaoxuan Ma", "Yizhou Wang" ], "title": "Optimizing network structure for 3d human pose estimation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Daniel PW Ellis" ], "title": "Beat tracking by dynamic programming", "venue": "Journal of New Music Research,", "year": 2007 }, { "authors": [ "Rukun Fan", "Songhua Xu", "Weidong Geng" ], "title": "Example-based automatic music-driven conventional dance motion synthesis", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2011 }, { "authors": [ "Joao P Ferreira", "Thiago M Coutinho", "Thiago L Gomes", "José F Neto", "Rafael Azevedo", "Renato Martins", "Erickson R Nascimento" ], "title": "Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio", "venue": "Computers & Graphics,", "year": 2021 }, { "authors": [ "Katerina Fragkiadaki", "Sergey Levine", "Panna Felsen", "Jitendra Malik" ], "title": "Recurrent network models for human dynamics", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Partha Ghosh", "Jie Song", "Emre Aksan", "Otmar Hilliges" ], "title": "Learning human motion models for long-term predictions", "venue": "In 2017 International Conference on 3D Vision (3DV),", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George E Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N Sainath" ], "title": "Deep neural networks 
for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Chieh Ho", "Wei-Tze Tsai", "Keng-Sheng Lin", "Homer H Chen" ], "title": "Extraction and alignment evaluation of motion beats for street dance", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Ashesh Jain", "Amir R Zamir", "Silvio Savarese", "Ashutosh Saxena" ], "title": "Structural-rnn: Deep learning on spatio-temporal graphs", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Alex M Lamb", "Anirudh Goyal Alias Parth Goyal", "Ying Zhang", "Saizheng Zhang", "Aaron C Courville", "Yoshua Bengio" ], "title": "Professor forcing: A new algorithm for training recurrent networks", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Hsin-Ying Lee", "Xiaodong Yang", "Ming-Yu Liu", "Ting-Chun Wang", "Yu-Ding Lu", "Ming-Hsuan Yang", "Jan Kautz" ], "title": "Dancing to music", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Juheon Lee", "Seohyun Kim", "Kyogu Lee" ], "title": "Listen to dance: Music-driven choreography generation using autoregressive encoder-decoder network", "venue": "arXiv preprint arXiv:1811.00818,", "year": 2018 }, { "authors": [ "Minho Lee", "Kyogu Lee", "Jaeheung Park" ], "title": "Music similarity-based approach to generating dance motion sequence", "venue": "Multimedia tools and applications,", "year": 2013 }, { "authors": [ "Andreas M Lehrmann", "Peter V Gehler", "Sebastian Nowozin" ], "title": "Efficient nonlinear markov models for human motion", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Chen Li", "Zhen Zhang", "Wee Sun Lee", "Gim Hee Lee" ], "title": "Convolutional sequence to sequence model for human dynamics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zimo Li", "Yi Zhou", "Shuangjiu Xiao", "Chong He", "Zeng Huang", "Hao Li" ], "title": "Auto-conditioned recurrent networks for extended complex human motion synthesis", "venue": "arXiv preprint arXiv:1707.05363,", "year": 2017 }, { "authors": [ "Jiasen Lu", "Caiming Xiong", "Devi Parikh", "Richard Socher" ], "title": "Knowing when to look: Adaptive attention via a visual sentinel for image captioning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Wei Mao", "Miaomiao Liu", "Mathieu Salzmann", "Hongdong Li" ], "title": "Learning trajectory dependencies for human motion prediction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Brian McFee", "Colin Raffel", "Dawen Liang", "Daniel PW Ellis", "Matt McVicar", "Eric Battenberg", "Oriol Nieto" ], "title": "librosa: Audio and music signal analysis in python", "venue": "In Proceedings of the 14th python in science conference,", "year": 2015 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Marc’Aurelio Ranzato", 
"Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks", "venue": "arXiv preprint arXiv:1511.06732,", "year": 2015 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "arXiv preprint arXiv:1605.05396,", "year": 2016 }, { "authors": [ "Xuanchi Ren", "Haoran Li", "Zijian Huang", "Qifeng Chen" ], "title": "Music-oriented dance video synthesis with pose perceptual loss", "venue": "arXiv preprint arXiv:1912.06606,", "year": 2019 }, { "authors": [ "Eli Shlizerman", "Lucio Dery", "Hayden Schoen", "Ira Kemelmacher-Shlizerman" ], "title": "Audio to body dynamics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Guofei Sun", "Yongkang Wong", "Zhiyong Cheng", "Mohan S Kankanhalli", "Weidong Geng", "Xiangdong Li" ], "title": "Deepdance: music-to-dance motion choreography with adversarial learning", "venue": "IEEE Transactions on Multimedia,", "year": 2020 }, { "authors": [ "Taoran Tang", "Jia Jia", "Hanyang Mao" ], "title": "Dance with melody: An lstm-autoencoder approach to music-oriented dance synthesis", "venue": "In Proceedings of the 26th ACM international conference on Multimedia,", "year": 2018 }, { "authors": [ "Graham W Taylor", "Geoffrey E Hinton", "Sam T Roweis" ], "title": "Modeling human motion using binary latent variables", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Sergey Tulyakov", "Ming-Yu Liu", "Xiaodong Yang", "Jan Kautz" ], "title": "Mocogan: Decomposing motion and content for video generation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jack Wang", "Aaron Hertzmann", "David J Fleet" ], "title": "Gaussian process dynamical models", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Guilin Liu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Video-to-video synthesis", "venue": "arXiv preprint arXiv:1808.06601,", "year": 2018 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Nelson Yalta", "Shinji Watanabe", "Kazuhiro Nakadai", "Tetsuya Ogata" ], "title": "Weakly-supervised deep recurrent neural networks for basic dance step generation", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "Zijie Ye", "Haozhe Wu", "Jia Jia", "Yaohua Bu", "Wei Chen", "Fanbo Meng", "Yanfeng Wang" ], "title": "Choreonet: Towards music to dance synthesis with choreographic action unit", "venue": "In Proceedings of the 28th ACM International Conference on Multimedia,", "year": 2020 }, { "authors": [ "Han Zhang", "Tao Xu", "Hongsheng Li", "Shaoting Zhang", "Xiaogang Wang", "Xiaolei Huang", 
"Dimitris N Metaxas" ], "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Wen Zhang", "Yang Feng", "Fandong Meng", "Di You", "Qun Liu" ], "title": "Bridging the gap between training and inference for neural machine translation", "venue": null, "year": 1906 } ]
[ { "heading": "1 INTRODUCTION", "text": "Arguably, dancing to music is one of human’s innate abilities, as we can spontaneously sway along with the tempo of music we hear. The research in neuropsychology indicates that our brain is hardwired to make us move and synchronize with music regardless of our intention (Chen et al., 2008). Another study of archaeology also suggests that dance is a social communication skill among early humans connected to the ability of survival long time ago (Adshead-Lansdale & Layson, 2006). Nowadays, dance (to music) has become a means to the cultural promotion, a method of emotional expression, a tool for socialization and an art form to bring aesthetic enjoyment. The neurological mechanism behind dancing behavior and the unique value of dance to society motivate us to explore a computational approach to dance creation from a piece of music in artificial intelligence research. Such work is potentially beneficial to a wide range of applications such as dance creation assistant in art and sports, character motion generation for audio games and research on cross-modal behavior.\nIn literature, the music-conditioned dance generation is a relatively new task that attracts increasing research interests recently. Early works (Fan et al., 2011; Lee et al., 2013) synthesize dance sequences from music by retrieval-based methods, which show the limited creativity in practice. Recently, Lee et al. (2019) formulate the task from the generative perspective, and further propose a decompositionto-composition framework. Their model first generates basic dance units from the music clips and then composes them by using the last pose of current unit to initialize the first pose of the next ∗Equal contribution. †Work done during the internship at Microsoft STCA. ‡Corresponding author.\nunit. Although this approach shows better performance than the retrieval-based methods, several challenges still remain.\nFirst, existing generative methods synthesize new human motion sequences through autoregressive models like RNN, which tend to result in short sequences. In other words, the generated sequences often quickly freeze within a few seconds due to an accumulation of prediction errors that are fed back into the neural network. This problem becomes even more severe in long motion sequence generation. For instance, composing a dance for a 1-minute music clip under 15 Frame Per Second (FPS) means generating 900 poses at one time. In practice, we need novel methods which can effectively generate long motion sequences. Besides, how to enhance the harmony between the synthesized dance movements and the given music is a largely unexplored challenge. Inutitively, the movements need to be consistent with the music in terms of style, rhythm and beat. However, to achieve this goal is non-trivial, which requires the generation model to have the capability to capture the fine-grained correspondence between music and dance.\nIn this paper, we formalize music-conditioned dance generation as a sequence-to-sequence learning problem where the fine-grained correspondence between music and dance is represented through sequence modeling and their alignment is established via mapping from a sequence of acoustic features of the music to a sequence of movements of the dance. The model consists of a music encoder and a dance decoder. 
The encoder transforms low-level acoustic features of an input music clip into high-level latent representations via self-attention with a receptive field restricted within k-nearest neighbors of an element. Thus, the encoder can efficiently process long sequences of music features, e.g., a sequence with more than 1000 elements, and model local characteristics of the music such as chord and rhythm patterns. The decoder exploits a recurrent structure to predict the dance movement frame by frame conditioned on the corresponding element in the latent representations of music feature sequence. Furthermore, we propose a curriculum learning (Bengio et al., 2009) strategy to alleviate error accumulation (Li et al., 2017) of autoregressive models in long motion sequence generation. Specifically, it gently changes the training process from a fully guided teacher-forcing scheme using the previous ground-truth movements, towards a less guided autoregressive scheme which mostly utilizes the generated movements instead. This strategy bridges the gap between training and inference of autoregressive models, and thus alleviates error accumulation at inference.\nThe length of each video clip in the dataset released by Lee et al. (2019) is only about 6 seconds, which cannot be used by other methds to generate long-term dance except for their specially designed decomposition-to-composition framework. To facilitate the task of long-term dance generation with music, we collect a high-quality dataset consisting of 1-minute video clips, totaling about 12 hours. And there are three representative styles in our dataset: “Ballet”, “Hiphop” and “Japanese Pop”.\nOur contributions in this work are four-fold: (1) We formalize music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture for the long-term dance generation with music. (2) We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation. (3) To facilitate long-term dance generation with music, we collect a high-quality dataset that is available with our code1. (4) The extensive experiments show that our approach significantly outperforms the existing state-of-the-arts on both automatic metrics and human judgements. The demo video in the supplementary material also exhibits our approach can generate diverse minute-length dances that are smooth, natural-looking, style-consistent and beat-matching with the musics from test set." }, { "heading": "2 RELATED WORK", "text": "Cross-Modal Learning. Most existing works focus on the modeling between vision and text, such as image captioning (Lu et al., 2017; Xu et al., 2015) and text-to-image generation (Reed et al., 2016; Zhang et al., 2017). There are some other works to study the translation between audio and text like Automatic Speech Recognition (ASR) (Hinton et al., 2012) and Text-To-Speech (TTS) (Oord et al., 2016). While the modeling between audio and vision is largely unexplored and the music-conditioned dance generation is a typical cross-modal learning problem from audio to vision.\n1https://github.com/stonyhu/DanceRevolution\nHuman Motion Prediction. Prediction of human motion dynamics has been a challenging problem in computer vision, which suffers from the high spatial-temporal complexity. Existing works (Chan et al., 2019; Wang et al., 2018) represent the human pose as 2D or 3D body keyjoints (Cao et al., 2017) and address the problem via sequence modeling. 
Early methods, such as hidden markov models (Lehrmann et al., 2014), Gaussian processes (Wang et al., 2006) and restricted boltzmann machines (Taylor et al., 2007), have to balance the model capacity and inference complexity due to complicated training procedures. Recently, neural networks dominate the human motion modeling. For instance, Fragkiadaki et al. (2015) present LSTM-3LR and Encoder-Recurrent-Decoder (ERD) as two recurrent architectures for the task; Jain et al. (2016) propose a structural-RNN to model human-object interactions in a spatio-temporal graph; and Ghosh et al. (2017) equip LSTM-3LR with a dropout autoencoder to enhance the long-term prediction. Besides, convolutional neural networks (CNNs) have also been utilized to model the human motion prediction (Li et al., 2018).\nAudio-Conditioned Dance Generation. In the research of audio-conditioned dance generation, most existing works study 2D dance motion generation with music since the training data for paired 2D pose and music can be extracted from the huge amount of dance videos available online. Various methods have been proposed to handle this task, such as adversarial learning based methods (Lee et al., 2019; Sun et al., 2020; Ferreira et al., 2021), autoencoder methods (Tang et al., 2018) and sequence-to-sequence methods (Lee et al., 2018; Ren et al., 2019; Yalta et al., 2019; Ye et al., 2020). While these works mainly focus on exploring the different neural architectures and overlook the freezing motion issue in dance motion synthesis. In this work, we first propose a novel seq2seq architecture to model the fine-grained correspondence between music and dance, and then introduce a novel curriculum learning strategy to address the freezing motion issue caused by error accumulation (Li et al., 2017) in long-term dance motion generation." }, { "heading": "3 APPROACH", "text": "In this section, we present our approach to music-conditioned dance generation. After formalization of the problem in question, we elaborate the model architecture and the dynamic auto-condition learning approach that facilitates long-term dance generation according to the given music." }, { "heading": "3.1 PROBLEM FORMALIZATION", "text": "Suppose that there is a dataset D = {(Xi, Yi)}Ni=1, where X = {xt}nt=1 is a music clip with xt being a vector of acoustic features at time-step t, and Y = {yt}nt=1 is a sequence of dance movements with\nyt aligned to xt. The goal is to estimate a generation model g(·) from D, and thus given a new music input X , the model can synthesize dance Y to music X based on g(X). We first present our seq2seq architectures chosen for music-conditioned dance generation in the following section. Later in the experiments, we empirically justify the choice by comparing the architectures with other alternatives." }, { "heading": "3.2 MODEL ARCHITECTURE", "text": "In the architecture of g(·), a music encoder first transforms X = (x1, ..., xn) (xi ∈ Rdx) into a hidden sequence Z = (z1, ..., zn) (zi ∈ Rdz ) using a local self-attention mechanism to reduce the memory requirement for long sequence modeling, and then a dance decoder exploits a recurrent structure to autoregressively predicts movements Y = (y1, ..., yn) conditioned on Z.\nMusic Encoder. Encouraged by the compelling performance on music generation (Huang et al., 2018), we define the music encoder with a transformer encoder structure. 
While the self-attention mechanism (Vaswani et al., 2017) in transformer can effectively represent the multi-scale structure of music, the quadratic memory complexity O(n2) about sequence length n impedes its application to long sequence modeling due to huge of GPU memory consumption (Child et al., 2019). To keep the effectiveness of representation and control the cost, we introduce a local self-attention mechanism that modifies the receptive field of self-attention by restricting the element connections within k-nearest neighbors. Thus, the memory complexity is reduced toO(nk). k could be small in our scenario since we only pursue an effective representation for a given music clip. Therefore, the local patterns of music are encoded in zt that is sufficient for the generation of movement yt at time-step t, which is aligned with the common sense that the dance movement at certain time-step is highly influenced by the nearby clips of music. Yet we can handle long sequences of acoustic features, e.g., more than 1000 elements, in an efficient and memory-economic way.\nSpecifically, we first embed X = (x1, ..., xn) into U = (u1, ..., un) with a linear layer parameterized by WE ∈ Rdx×dz . Then ∀i ∈ {1, ..., n}, zi can be formulated as:\nzi = F(ai), ai = i+bk/2c∑\nj=i−bk/2c\nαij(ujW V l ), U = XW E , (1)\nwhere F(·) : Rdv → Rdz is a feed forward neural network. Each uj is only allowed to attend its k-nearest neighbors including itself, where k is a hyper-parameter referring to the sliding window size of the local self-attention. Then attention weight αij is calculated using a softmax function as:\nαij = exp eij∑j+bk/2c\nt=j−bk/2c exp eit , eij =\n(uiW Q l )(ujW K l ) >\n√ dk\n, (2)\nwhere for the the l-th head, WQl , W K l ∈ Rdz×dk and WVl ∈ Rdz×dv are parameters that transform U into a query, a key, and a value respectively. dz is the dimension of hidden state zi while dk is the dimension of query, key and dv is the dimension of value.\nDance Decoder. We choose a recurrent neural network as the dance decoder in consideration of two factors: (1) the chain structure can well capture the spatial-temporal dependency among human movement dynamics, which has proven to be highly effective in the state-of-the-art methods for human motion prediction (Li et al., 2018; Mao et al., 2019); (2) our proposed learning strategy is tailored for the autoregressive models like RNN, as will be described in the next section. Specifically, with Z = (z1, ..., zn), the dance movements Y = (ŷ1, ..., ŷn) are synthesized by:\nŷi = [hi; zi]W S + b, (3)\nhi = RNN(hi−1, ŷi−1), (4)\nwhere hi is the i-th hidden state of the decoder and h0 is initialized by sampling from the standard normal distribution to enhance variation of the generated sequences. [·; ·] denotes the concatenation operation. WS ∈ R(ds+dz)×dy and b ∈ Rdy are parameters where ds and dy are the dimensions of hi and ŷi, respectively. At the i-th time-step, the decoder predicts the movement ŷi conditioned on hi as well as the latent feature representation zi, and thus can capture the fine-grained correspondence between music feature sequence and dance movement sequence." }, { "heading": "3.3 DYNAMIC AUTO-CONDITION LEARNING APPROACH", "text": "Exposure bias (Ranzato et al., 2015; Lamb et al., 2016; Zhang et al., 2019) is a notorious issue in natural language generation (NLG), which refers to the train-test discrepancy of autoregressive generative models. 
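A didactic NumPy sketch of a single head of this local self-attention (Eqs. (1)–(2)) is given below; parameter names are ours, and for brevity the window is enforced by masking a full score matrix, whereas a memory-efficient implementation would compute only the banded scores to realize the O(nk) cost.

```python
import numpy as np

def local_self_attention(U, Wq, Wk, Wv, k):
    """Single-head local self-attention of Eqs. (1)-(2): position i attends only
    to the window |i - j| <= floor(k / 2), i.e. its k-nearest neighbours including
    itself. Shapes: U is (n, d_z), Wq and Wk are (d_z, d_k), Wv is (d_z, d_v)."""
    n, d_k = U.shape[0], Wq.shape[1]
    Q, K, V = U @ Wq, U @ Wk, U @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                          # e_ij of Eq. (2)
    idx = np.arange(n)
    scores[np.abs(idx[:, None] - idx[None, :]) > k // 2] = -np.inf
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)   # alpha_ij of Eq. (2)
    return weights @ V                                       # a_i of Eq. (1)
```

The attended vector a_i is then mapped by the feed-forward network F(·) to z_i, and the LSTM decoder consumes z_i together with the previously generated pose as in Eqs. (3)–(4); it is exactly such autoregressive decoders that suffer from the exposure-bias issue discussed next.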
This kind of models are never exposed to their own prediction errors at the training phase due to the conventional teacher-forcing training scheme. However, compared to NLG, motion sequence generation suffers from the more severe exposure bias problem. In NLG, the bias of predicted probability distribution over vocabulary can be corrected by sampling strategy. For instance, we can still generate the target token with index 2 by sampling on probability distribution [0.3, 0.3, 0.4] whose groundtruth is [0, 0, 1]. While any small biases of the predicted motion at each time-step will be accumulated and propagated to the future, since each generated motion is a real-valued vector (rather than a discrete token) representing 2D or 3D body keyjoints in the continuous space. As a result, the generated motion sequences tend to quickly freeze within a few seconds. This problem becomes even worse in the generation of long motion sequences, e.g., more than 1000 movements.\nScheduled sampling (Bengio et al., 2015) is proposed to alleviate exposure bias in NLG, which does not work in long motion sequence generation. Since its sampling-based strategy would feed long predicted sub-sequences of high bias into the model at early stage, which causes gradient vanishing at training due to error accumulation of these high-biased sub-sequences. Motivated by this, we propose a dynamic auto-condition training method as a novel curriculum learning strategy to alleviate the error accumulation of autoregressive models in long motion sequence generation. Specifically, our learning strategy gently changes the training process from a fully guided teacher-forcing scheme using the previous ground-truth movements, towards a less guided autoregressive scheme mostly using the generated movements instead, which bridges the gap between training and inference.\nAs shown in Figure 1, The decoder predicts Ytgt = {y1, ..., yp, yp+1, ..., yp+q, yp+q+1, ...} from Yin = {y0, ŷ1, ..., ŷp, yp+1, ..., yp+q, ŷp+q+1, ...} which alternates the predicted sub-sequences with length p (e.g., ŷ1, ..., ŷp) and the ground-truth sub-sequences with length q (e.g., yp+1, ..., yp+q). y0 is a predefined begin of pose (BOP). During the training phase, we fix q and gradually increase p according to a growth function f(t) ∈ {const, bλtc, bλt2c, bλetc} where t is the number of training epochs and λ < 1 controls how frequently p is updated. We choose f(t) = bλtc in practice through the empirical comparison. Note that our learning strategy degenerates to the method proposed in Li et al. (2017) when f(t) = const. While the advantage of a dynamic strategy over a static strategy lies in that the former introduces the curriculum learning spirit to dynamically increase the difficulty of curriculum (e.g., larger p) as the model capacity improves during training, which can further improve the model capacity and alleviate the error accumulation of autoregressive predictions. Finally, we estimate parameters of g(·) by minimizing the `1 loss on D:\n`1 = 1\nN N∑ i=1 ||g(Xi)− Y (i)tgt ||1 (5)" }, { "heading": "4 EXPERIMENTAL SETUP", "text": "" }, { "heading": "4.1 DATASET COLLECTION AND PREPROCESSING", "text": "Although the dataset (Lee et al., 2019) contains about 70-hour video clips, the average length of each clip is only about 6 seconds. Moreover, their dataset cannot be used by other methods to generate longterm dance except for their specially designed decomposition-to-composition framework. 
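For concreteness, the alternating feed of the dynamic auto-condition scheme in Section 3.3 can be sketched as follows; the decoder call signature is an assumed interface, not the authors' exact API.

```python
import torch

def dynamic_autocondition_rollout(decoder, Z, Y_gt, y0, p, q):
    """One training rollout of the dynamic auto-condition scheme (Section 3.3):
    the decoder input alternates predicted sub-sequences of length p with
    ground-truth sub-sequences of length q; p is set to floor(lambda * epoch)
    outside this function. `decoder(prev_pose, z_i, state)` is an assumed
    interface for the LSTM decoder."""
    outputs, state = [], None
    prev = y0                                    # predefined begin-of-pose (BOP)
    for i, z_i in enumerate(Z):
        y_hat, state = decoder(prev, z_i, state)
        outputs.append(y_hat)
        # Within each (p + q)-block, the first p positions are fed back from
        # predictions, the remaining q positions from the ground truth.
        prev = y_hat if (i % (p + q)) < p else Y_gt[i]
    return torch.stack(outputs)                  # trained with the l1 loss of Eq. (5)
```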
To facilitate the task of long-term dance generation with music, we collect a high-quality dataset consisting of 790 one-minute clips of pose data with 15 FPS, totaling about 12 hours. And there are three styles in the dataset including “Ballet”, “Hiphop” and “Japanese Pop”. Table 1 shows the statistics of our dataset. In the experiment, we randomly split the dataset into the 90% training set and 10% test set.\nPose Preprocess. For the human pose estimation, we leverage OpenPose (Cao et al., 2017) to extract 2D body keyjoints from videos. Each pose consists of 25 keyjoints2 and is represented by a 50-dimension vector in the continuous space. In practice, we develop a linear interpolation algorithm to find the missing keyjoints from nearby frames to reduce the noise in extracted pose data.\n2https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/ doc/output.md#pose-output-format-body_25\nAudio Preprocess. Librosa (McFee et al., 2015) is a well-known audio and music analysis library in the music information retrieval, which provides flexible ways to extract the spectral and rhythm features of audio data. Specifically, we extract the following features: mel frequency cepstral coefficients (MFCC), MFCC delta, constant-Q chromagram, tempogram and onset strength. To better capture the beat information of music, we convert onset strength into a one-hot vector as an additional feature to explicitly represent the beat sequence of music, i.e., beat one-hot. Finally, we concatenate these audio features as the representation of a music frame, as shown in Table 4 in Section 6.2. During extracting audio features, the sampling rate is 15,400Hz and hop size is 1024. Hence, we have 15 audio samples per second that is aligned with the 15 FPS of pose data." }, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "The music encoder consists of a stack ofN = 2 identical layers. Each layer has two sublayers: a local self-attention sublayer with l = 8 heads and a position-wise fully connected feed-forward sublayer with 1024 hidden units. Each head contains a scaled dot-product attention layer with dk = dv = 64 and its receptive yield is restricted by setting k = 100. Then we set the sequence length n = 900, dimension of acoustic feature vector dx = 438, dimension of hidden vector dz = 256, respectively. The dance decoder is a 3-layer LSTM with ds = 1024 and the dimension of pose vector dy = 50. We set λ = 0.01 and q = 10 for the proposed learning approach and train the model using the Adam optimizer with the learning rate 1e-4 on 2 NVIDIA V100 GPUs." }, { "heading": "4.3 BASELINES", "text": "The music-conditioned generation is an emerging task and there are few methods proposed to solve this problem. In our experiment, we compare our proposed approach to the following baselines: (1) Dancing2Music. We use Dancing2Music (Lee et al., 2019) as the primary baseline that is the previous state-of-the-art; (2) Aud-MoCoGAN. Aud-MoCoGAN is an auxiliary baseline used in Dancing2Music, which is original from MoCoGAN (Tulyakov et al., 2018); (3) LSTM. Shlizerman et al. (2018) propose a LSTM network to predict body dynamics from the audio signal. We modify their open-source code to take audio features of music as inputs and produce human pose sequences." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In this section, we conduct extensive experiments to evaluate our approach and compare it to the aforementioned baselines. Note that we do not include comparison experiment on the dataset collect by Lee et al. 
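One plausible implementation of this keyjoint interpolation is sketched below; the confidence-threshold convention is an assumption, since the paper does not give implementation details.

```python
import numpy as np

def interpolate_missing_keyjoints(poses, conf, eps=1e-6):
    """Fills missing OpenPose keyjoints by linear interpolation from nearby
    frames. `poses` has shape (n_frames, 25, 2) and `conf` (n_frames, 25) holds
    OpenPose confidences; a joint is treated as missing when conf < eps."""
    poses = poses.copy()
    frames = np.arange(len(poses))
    for j in range(poses.shape[1]):                      # each of the 25 joints
        valid = conf[:, j] >= eps
        if valid.any() and not valid.all():
            for d in range(2):                           # x and y coordinates
                poses[~valid, j, d] = np.interp(frames[~valid],
                                                frames[valid],
                                                poses[valid, j, d])
    return poses.reshape(len(poses), -1)                 # 50-d vector per frame
```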
(2019) due to the above issue in Section 4.1." }, { "heading": "5.1 AUTOMATIC METRICS AND HUMAN EVALUATION", "text": "We evaluate the different methods by automatic metrics and conduct a human evaluation on motion realism and smoothness of generated dance movements, as well as style consistency. Specifically, we randomly sample 60 music clips from the test set to generate dances, then divide the dances generated by three methods and the real dances into four pairs: (LSTM, ours), (Dancing2Music, ours), (real, ours) and (real, real). We invite 10 amateur dancers as the annotators to answer 3 questions for each pair that is blind for them (use (A, B) pair instead): (1) Which dance is more realistic regardless of music? (2) Which dance is more smooth regardless of music? (3) Which dance matches the music better with respect to style? Note that, we do not include Aud-MoCoGAN in the human evaluation, since both it and Dancing2Music are GAN based methods while the latter is the state-of-the-art.\nStyle Consistency Preference\nSmoothness PreferenceMotion Realism Preference\nMotion Realism, Style Consistency and Smoothness. We evaluate the visual quality and realism of generated dances by Fréchet Inception Distance (FID) (Heusel et al., 2017), which is used to measure how close the distribution of generated dances is to the real. Similar to Lee et al. (2019), we train a style classifier on dance movement sequences of three styles and use it to extract features for the given dances. Then we calculate FID between the synthesized dances and the real. As shown in Table 2, our FID score is significantly lower than those of baselines and much closer to that of the real, which means our generated dances are more motion-realistic and more likely to the real dances. Besides, we use the same classifier to measure the style accuracy of generated dance to the music, and our approach achieves 77.6% accuracy and significantly outperforms Dancing2Music by 17.2%.\nHuman evaluation in Figure 2 consistently shows the superior performance of our approach, compared to baselines on motion realism, style consistency and smoothness. We observe the dances generated by LSTM have lots of floating movements and tend to quickly freeze within several seconds due to the error accumulation, which results in lower preferences. While Dancing2Music can generate the smooth dance units, the synthesized dances still have significant jumps where dance units are connected due to the inherent gap between them, which harm its overall performance in practice. This is also reflected in the preference comparisons to our approach, in which only 35% annotators prefer Dancing2Music on motion realism and 21.7% prefer Dancing2Music on smoothness. Besides, the result on style consistency shows our approach can generate more style-consistent dances with musics. Since our approach models the fine-grained correspondence between music and dance at the methodological level and also introduces more specific music features. However, Dancing2Music only considers a global style feature extracted by a pre-trained classifier in the dance generation.\nIn the comparison to real dances, 41.2% annotators prefer our method on motion realism while 30.3% prefer on style consistency. Additionally, we found that 57.9% annotators prefer our approach on smoothness compared to real dances. Since the imperfect OpenPose (Cao et al., 2017) introduces the minor noise on pose data extracted from dance videos. 
On the other hand, in the preprocessing, we develop an interpolation algorithm to find missing keyjoints and reduce the noise of training pose data.\nBesides, this also indicates the chain structure based decoder can well capture the spatial-temporal dependency among human motion dynamics and produce the smooth movements.\nBeat Coverage and Hit Rate. We also evaluate the beat coverage and hit rate introduced in Lee et al. (2019), which defines the beat coverage as Bk/Bm and beat hit rate as Ba/Bk, where Bk is the number of kinematic beats, Bm is the number of musical beats and Ba is the number of kinematic beats that are aligned with the musical beats. An early study (Ho et al., 2013) reveals that kinematic beat frames occur when the direction of the movement changes. Therefore, similar to Yalta et al. (2019), we use the standard deviation (SD) of movement to detect the kinematic beats in our experiment. The onset strength (Ellis, 2007) is a common way to detect musical beats. Figure 3 shows two short clips of motion SD curves and aligned musical beats. We observed that the kinematic beats occur where the musical beats occur, which is consistent with the common sense that dancers would step on musical beats during dancing but do not step on every musical beat. As we can see in Table 2, LSTM and Aud-MoCoGAN generate dances with few kinematic beats and most of them do not match the musical beats. Our method outperforms Dancing2Music by 6.1% on beat coverage and 2.7% on hit rate, which indicates that introducing the features about musical beat into model is beneficial to the better beat alignment of generated dance and input music.\nDiversity and Multimodality. Lee et al. (2019) introduce the diversity metric and the multimodality metric. The diversity metric evaluates the variations among generated dances corresponding to various music, which reflects the generalization ability of the model and its dependency on the input music. Similarly, we use the average feature distance as the measurement and these features are extracted by the same classifier used in measuring FID. We randomly sample 60 music clips from test set to generate dances and randomly pick 500 combinations to compute the average feature distance. Results in Table 2 show the performance of our approach is superior to Dancing2Music on diversity. The reason is that Dancing2Music only considers a global music style feature in generation while different music of the same style have the almost same style feature. Our approach models the fine-grained correspondence between music and dance at the methodological level, simultaneously takes more specific music features into account. In other words, our approach is more dependent on input music and thus can generate different samples for difference music clips of the same style.\nMultimodality metric evaluates the variations of generated dances for the same music clip. Specifically, we generate 5 dances for each of randomly sampled 60 music clips and compute the average feature distance. As we can see, our approach slightly underperforms Dancing2Music and AudMoCoGAN since it has more dependency on music inputs as aforementioned, which make the generated dances have the limited variation patterns given the same music clip." }, { "heading": "5.2 LONG-TERM DANCE GENERATION", "text": "To further investigate the performance of different methods on long-term dance generation, we evaluate FID of dances generated by different methods over time. 
Specifically, we first split a generated dance with 1 minute into 15 four-second clips and measure FID of each clip. Figure 4 shows FID scores of LSTM and Aud-MoCoGAN grow rapidly in the early stage due to error accumulation and converge to the high FID, since the generated dances quickly become frozen. Dancing2Music maintains the relatively stable FID scores all the time, which benefits from its decomposition-tocomposition method. While its curve still has subtle fluctuations since it synthesizes whole dances by\ncomposing the generated dance units. Compared to baselines, FID of our method are much lower and close to real dances, which validates the good performance of our method on long-term generation." }, { "heading": "5.3 ABLATION STUDY", "text": "Comparison on Encoder Structures. We compare the different encoder structures in our model with the same LSTM decoder and the same curriculum learning strategy, as shown in Table 3. The transformer encoders outperform LSTM and the encoder in ConvS2S (Gehring et al., 2017) on both FID and style consistency, due to its superior performance on sequence modeling. Although the transformer encoder with our proposed local self-attention slightly underperforms the global self-attention by 0.7 higher in FID and 1.5% lower in ACC, it has much fewer parameters in our settings. This confirms the effectiveness of the local self-attention on long sequence modeling.\nComparison on Learning Strategies. To investigate the performance of our proposed learning approach, we conduct an ablation study to compare the performances of our model using different learning strategies. As we can see the right of in Table 3, all curriculum learning strategies significantly outperform the static auto-condition strategy (Li et al., 2017). Among them, the linear growth function f(t) = bλtc performs best, which might be that increasing the difficulty of curriculum too fast would result in the model degradation. Besides, the original teacher-forcing strategy has the highest FID and the lowest style accuracy due to severe error accumulation problem, which indicates that using our proposed curriculum learning strategy to train the model can effectively alleviate this issue." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose a novel seq2seq architecture for long-term dance generation with music, e.g., about one-minute length. To efficiently process long sequences of music features, we introduce a local self-attention mechanism in the transformer encoder structure to reduce the quadratic memory requirements to the linear in the sequence length. Besides, we also propose a dynamic auto-condition training method as a novel curriculum learning strategy to alleviate the error accumulation of autoregressive models in long motion sequence generation, thus facilitate the long-term dance generation. Extensive experiments have demonstrated the superior performance of our proposed approach on automatic metrics and human evaluation. Future works include the exploration on end-to-end methods that could work with raw audio data instead of preprocessed audio features." }, { "heading": "6.1 MORE DETAILS ABOUT DATA COLLECTION", "text": "The dance videos are collected from YouTube by crowd workers, which are all solo dance videos with 30FPS. We trim the beginning and ending several seconds for each of collected videos to remove the silence parts, and split them into one-minute video clips. 
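Before continuing with the preprocessing details below, here is a small sketch of how the dynamic auto-condition schedule with the linear growth function f(t) = ⌊λt⌋ from the ablation above could be realized. The alternation pattern, the fixed teacher-forced span, and the value of λ are assumptions on our part, not the authors' exact implementation.

```python
import math

def target_length_schedule(epoch, lam=0.5):
    """Linear curriculum growth f(t) = floor(lambda * t)."""
    return int(math.floor(lam * epoch))

def autocondition_mask(seq_len, epoch, teacher_span=10, lam=0.5):
    """Per-step flags for one training sequence.

    True  -> feed the ground-truth pose at this decoding step (teacher forcing).
    False -> feed the decoder's own previous prediction.
    The self-fed span grows with the epoch, so the curriculum gets harder over time.
    (teacher_span and the strict alternation below are assumed, not taken from the paper.)
    """
    self_span = target_length_schedule(epoch, lam)
    mask = []
    while len(mask) < seq_len:
        mask.extend([True] * teacher_span)   # teacher-forced steps
        mask.extend([False] * self_span)     # steps fed with the model's own output
    return mask[:seq_len]

# Example: early in training (epoch 1) almost every step is teacher-forced;
# by epoch 40 the decoder must free-run for 20-step stretches.
# autocondition_mask(900, epoch=1), autocondition_mask(900, epoch=40)
```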
Then, we extract 2D pose data from these video clips by OpenPose (Cao et al., 2017) with a 15FPS setting and collect 790 one-minute clips of pose data at 15FPS, totaling about 13 hours. Finally, we extract the corresponding audio data from the video clips by FFmpeg3. The code is implemented based on the PyTorch framework. The MIT License will be used for the released code and data." }, { "heading": "6.2 EXTRACTED MUSIC FEATURES", "text": "In this section, we detail the extracted features of music that are fed into our model. Specifically, we leverage the public Librosa (McFee et al., 2015) to extract the music features including: 20-dim MFCC, 20-dim MFCC delta, 12-dim chroma, 384-dim tempogram, 1-dim onset strength (i.e., envelope) and 1-dim one-hot beat, as shown in Table 4." }, { "heading": "6.3 DOWNSTREAM APPLICATIONS", "text": "The downstream applications of our proposed audio-conditioned dance generation method include: (1) Our model can be used to help professionals choreograph new dances for a given song and teach humans how to dance to this song; (2) With the help of 3D human pose estimation (Ci et al., 2019) and 3D animation driving techniques, our model can be used to drive various 3D character models, such as the 3D model of Hatsune Miku (a very popular virtual character in Japan). This technique has great potential for virtual advertisement video generation, which could be used for promotion events on social media such as TikTok and Twitter." }, { "heading": "6.4 MUSICAL BEAT DETECTION UNDER LOW SAMPLING RATES", "text": "In this section, we conduct an additional experiment to evaluate the performance of the Librosa beat tracker under a 15,400Hz sampling rate. Specifically, we first utilize it to extract the onset beats from audio data at 15,400Hz and 22,050Hz respectively, then compare the two groups of onset beats under the same time scale. Since 22,050Hz is a common sampling rate used in music information retrieval, we define the beat alignment ratio as B2/B1, where B1 is the number of beats under 15,400Hz and B2 is the number of beats under 15,400Hz that are aligned with the beats under 22,050Hz. One beat is counted as aligned when |t1 − t2| ≤ ∆t, where t1 denotes the timestamp of a certain beat under 15,400Hz, t2 refers to the timestamp of a certain beat under 22,050Hz, and ∆t is the time offset threshold.\nIn the experiment, we set ∆t = 1/15s (FPS is 15) and calculate the beat alignment ratio for audio data of the three styles respectively. We randomly sample 10 audio clips from each style to calculate the beat alignment ratio and repeat the sampling 10 times to take the average. As shown in Table 5, most of the beats extracted at 15,400Hz are aligned with those extracted at 22,050Hz. Besides, we randomly sample an audio clip and visualize the beat tracking curves of its first 20 seconds under the sampling rates of 15,400Hz and 22,050Hz. As we can see in Figure 5, most of the beats from 15,400Hz and 22,050Hz are aligned within the time offset threshold ∆t = 1/15s.\n3https://ffmpeg.org/\nTable 5: The beat alignment ratio B2/B1.\nCategory: Beat Alignment Ratio (%)\nBallet: 88.7\nHiphop: 93.2\nJapanese Pop: 91.6\n[Figure 5 plot omitted; legend: Onset Strength (22050Hz), Onset Strength (15400Hz), Musical Beats (22050Hz), Musical Beats (15400Hz)]\nFigure 5: Beat tracking curves for the first 20 seconds of a music clip randomly sampled from audio data under the sampling rates of 15,400Hz and 22,050Hz.
Most of the beats are aligned within the time offset threshold ∆t = 1/15s." } ]
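As a companion to the feature list in Section 6.2 and the alignment experiment in Section 6.4, the sketch below shows how the musical beats and the beat alignment ratio B2/B1 could be computed with Librosa. The function names are ours, and the exact Librosa parameters used by the authors are not specified in the text.

```python
import numpy as np
import librosa

def music_features(audio_path, sr=15400):
    """Roughly the feature set of Table 4: MFCC, MFCC delta, chroma, tempogram, onset strength.
    (The exact chroma variant used by the authors is an assumption.)"""
    y, _ = librosa.load(audio_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    mfcc_delta = librosa.feature.delta(mfcc)
    chroma = librosa.feature.chroma_cens(y=y, sr=sr)                        # 12-dim
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)                    # 1-dim envelope
    tempogram = librosa.feature.tempogram(onset_envelope=onset_env, sr=sr)  # 384-dim by default
    return mfcc, mfcc_delta, chroma, onset_env, tempogram

def musical_beat_times(audio_path, sr):
    """Musical beat timestamps (seconds) from the onset-strength envelope."""
    y, _ = librosa.load(audio_path, sr=sr)
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    _, beat_frames = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
    return librosa.frames_to_time(beat_frames, sr=sr)

def beat_alignment_ratio(audio_path, dt=1.0 / 15, sr_low=15400, sr_high=22050):
    """B2/B1: fraction of beats detected at sr_low lying within dt seconds of a beat at sr_high."""
    t_low = musical_beat_times(audio_path, sr_low)
    t_high = musical_beat_times(audio_path, sr_high)
    if len(t_low) == 0 or len(t_high) == 0:
        return 0.0
    aligned = sum(np.min(np.abs(t_high - t)) <= dt for t in t_low)
    return aligned / len(t_low)
```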
2021
DANCE REVOLUTION: LONG-TERM DANCE GENERATION WITH MUSIC VIA CURRICULUM LEARNING
SP:18e9f58ab4fc8532cbd298730cff5b7f8ec31a5f
[ "This paper presents the \"Block Skim Transformer\" for extractive question answering tasks. The key idea in this model is using a classifier, on the self-attention distributions of a particular layer, to classify whether large spans of non-contiguous text (blocks) contain the answer. If a block is rejected by the classifier, it is excluded from subsequent layers of self-attention. During training, no blocks are thrown away and the classifier is applied to every layer to provide a regularization effect, which leads to small improvements in performance on 5 datasets. During inference, blocks are thrown away at a fixed layer. The reduction in sequence length leads to ~1.5x speed improvements at batch size 1." ]
Transformer-based encoder models have achieved promising results on natural language processing (NLP) tasks including question answering (QA). Different from sequence classification or language modeling tasks, hidden states at all positions are used for the final classification in QA. However, we do not always need all the context to answer the raised question. Following this idea, we propose Block Skim Transformer (BST) to improve and accelerate the processing of transformer QA models. The key idea of BST is to identify the context that must be further processed and the blocks that could be safely discarded early on during inference. Critically, we learn such information from self-attention weights. As a result, the model hidden states are pruned at the sequence dimension, achieving significant inference speedup. We also show that such an extra training optimization objective improves model accuracy. As a plugin to transformer-based QA models, BST is compatible with other model compression methods without changing existing network architectures. BST improves QA models’ accuracies on different datasets and achieves 1.6× speedup on the BERTlarge model.
[ { "affiliations": [], "name": "SKIM TRANSFORMER" } ]
[ { "authors": [ "Iz Beltagy", "Matthew E Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "arXiv preprint arXiv:2004.05150,", "year": 2020 }, { "authors": [ "Victor Campos", "Brendan Jou", "Xavier Giró-i-Nieto", "Jordi Torres", "Shih-Fu Chang" ], "title": "Skip RNN: learning to skip state updates in recurrent neural networks", "venue": "CoRR, abs/1708.06834,", "year": 2017 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D Manning" ], "title": "What does bert look at? an analysis of bert’s attention", "venue": "In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2019 }, { "authors": [ "Zihang Dai", "Guokun Lai", "Yiming Yang", "Quoc V Le" ], "title": "Funnel-transformer: Filtering out sequential redundancy for efficient language processing", "venue": "arXiv preprint arXiv:2006.03236,", "year": 2020 }, { "authors": [ "Mostafa Dehghani", "Stephan Gouws", "Oriol Vinyals", "Jakob Uszkoreit", "Lukasz Kaiser" ], "title": "Universal transformers", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yoav Goldberg" ], "title": "A primer on neural network models for natural language processing", "venue": "Journal of Artificial Intelligence Research,", "year": 2016 }, { "authors": [ "Saurabh Goyal", "Anamitra Roy Choudhary", "Venkatesan Chakaravarthy", "Saurabh ManishRaje", "Yogish Sabharwal", "Ashish Verma" ], "title": "Power-bert: Accelerating bert inference for classification", "venue": null, "year": 2001 }, { "authors": [ "Cong Guo", "Bo Yang Hsueh", "Jingwen Leng", "Yuxian Qiu", "Yue Guan", "Zehuan Wang", "Xiaoying Jia", "Xipeng Li", "Minyi Guo", "Yuhao Zhu" ], "title": "Accelerating sparse dnn models without hardware-support via tile-wise sparsity", "venue": "arXiv preprint arXiv:2008.13006,", "year": 2020 }, { "authors": [ "Richard HR Hahnloser", "H Sebastian Seung" ], "title": "Permitted and forbidden sets in symmetric thresholdlinear networks. 
In Advances in neural information processing", "venue": null, "year": 2001 }, { "authors": [ "Christian Hansen", "Casper Hansen", "Stephen Alstrup", "Jakob Grue Simonsen", "Christina Lioma" ], "title": "Neural speed reading with structural-jump-lstm", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Zhen Huang", "Shiyi Xu", "Minghao Hu", "Xinyi Wang", "Jinyan Qiu", "Yongquan Fu", "Yuncai Zhao", "Yuxing Peng", "Changjian Wang" ], "title": "Recent trends in deep learning based open-domain textual question answering systems", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Mandar Joshi", "Eunsol Choi", "Daniel S Weld", "Luke Zettlemoyer" ], "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tom Kwiatkowski", "Jennimaria Palomaki", "Olivia Redfield", "Michael Collins", "Ankur Parikh", "Chris Alberti", "Danielle Epstein", "Illia Polosukhin", "Jacob Devlin", "Kenton Lee" ], "title": "Natural questions: a benchmark for question answering research", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100, 000+ questions for machine comprehension of text", "venue": "In EMNLP,", "year": 2016 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning representations by back-propagating", "venue": "errors. nature,", "year": 1986 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Minjoon Seo", "Sewon Min", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Neural speed reading via skim-rnn", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ], "title": "Efficient transformers: A survey", "venue": "arXiv e-prints, pp. 
arXiv–2009,", "year": 2020 }, { "authors": [ "Adam Trischler", "Tong Wang", "Xingdi Yuan", "Justin Harris", "Alessandro Sordoni", "Philip Bachman", "Kaheer Suleman" ], "title": "Newsqa: A machine comprehension", "venue": null, "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chunpei Wang", "Xiaowang Zhang" ], "title": "Q-bert: A bert-based framework for computing sparql similarity in natural language", "venue": "In Companion Proceedings of the Web Conference", "year": 2020 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "Zhanghao Wu", "Zhijian Liu", "Ji Lin", "Yujun Lin", "Song Han" ], "title": "Lite transformer with long-short range attention", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Peng Qi", "Saizheng Zhang", "Yoshua Bengio", "William W Cohen", "Ruslan Salakhutdinov", "Christopher D Manning" ], "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "venue": null, "year": 2018 }, { "authors": [ "Adams Wei Yu", "Hongrae Lee", "Quoc Le" ], "title": "Learning to skim text", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontanon", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang" ], "title": "Big bird: Transformers for longer sequences", "venue": null, "year": 2007 }, { "authors": [ "Wangchunshu Zhou", "Canwen Xu", "Tao Ge", "Julian McAuley", "Ke Xu", "Furu Wei" ], "title": "Bert loses patience: Fast and robust inference with early", "venue": null, "year": 2006 } ]
[ { "heading": null, "text": "Transformer-based encoder models have achieved promising results on natural language processing (NLP) tasks including question answering (QA). Different from sequence classification or language modeling tasks, hidden states at all positions are used for the final classification in QA. However, we do not always need all the context to answer the raised question. Following this idea, we proposed Block Skim Transformer (BST ) to improve and accelerate the processing of transformer QA models. The key idea of BST is to identify the context that must be further processed and the blocks that could be safely discarded early on during inference. Critically, we learn such information from self-attention weights. As a result, the model hidden states are pruned at the sequence dimension, achieving significant inference speedup. We also show that such extra training optimization objection also improves model accuracy. As a plugin to the transformer based QA models, BST is compatible to other model compression methods without changing existing network architectures. BST improves QA models’ accuracies on different datasets and achieves 1.6× speedup on BERTlarge model." }, { "heading": "1 INTRODUCTION", "text": "With the rapid development of neural networks in NLP tasks, the Transformer (Vaswani et al., 2017) that uses multi-head attention (MHA) mechanism is a recent huge leap (Goldberg, 2016). It has become a standard building block of recent NLP models. The Transformer-based BERT (Devlin et al., 2018) model further advances the model accuracy by introducing self-supervised pre-training and has reached the state-of-the-art accuracy on many NLP tasks.\nOne of the most challenging tasks in NLP is question answering (QA) (Huang et al., 2020). Our key insight is that when human beings are answering a question with a passage as a context, they do not spend the same level of comprehension for each of the sentences equally across the paragraph. Most of the contents are quickly skimmed over with little attention on it. However, in the Transformer architecture, all tokens go through the same amount of computation, which suggests that we can take advantage of that by discarding many of the tokens in the early layers of the Transformer. This redundant nature of the transformer induces high execution overhead on the input sequence dimension.\nTo mitigate the inefficiencies in QA tasks, we propose to assign more attention to some blocks that are more likely to contain actual answer while terminating other blocks early during inference. By doing so, we reduce the overhead of processing irrelevant texts and accelerate the model inference. Meanwhile, by feeding the attention mechanism with the knowledge of the answer position directly during training, the attention mechanism and QA model’s accuracy are improved.\nIn this paper, we provide the first empirical study on attention featuremap to show that an attention map could carry enough information to locate the answer scope. We then propose Block Skim Transformer (BST), a plug-and-play module to the transformer-based models, to accelerate transformer-based models on QA tasks. By handling the attention weight matrices as feature maps, the CNN-based Block Skim module extracts information from the attention mechanism to make a skim decision. With the predicted block mask, BST skips irrelevant context blocks, which do not enter subsequent layers’ computation. 
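To make the skimming operation concrete, the following is a minimal sketch of how a predicted block mask could be used to prune the hidden states along the sequence dimension before the remaining layers are executed. It assumes a uniform block size and a shared mask; it is not the authors' implementation.

```python
import torch

def skim_blocks(hidden_states, block_keep, block_size=32):
    """Drop skimmed blocks from the sequence dimension.

    hidden_states: (batch, seq_len, hidden)
    block_keep:    (num_blocks,) bool, True for blocks that stay
    Returns the pruned hidden states plus the original indices of the kept
    tokens, so the skipped blocks can be forwarded directly to the final
    QA classifier later.
    """
    seq_len = hidden_states.shape[1]
    token_keep = block_keep.repeat_interleave(block_size)[:seq_len]  # block -> token mask
    kept_idx = token_keep.nonzero(as_tuple=True)[0]
    pruned = hidden_states[:, kept_idx, :]
    return pruned, kept_idx

# Example: 384 tokens with block size 32 gives 12 blocks; keep blocks 0, 1 and 5.
# keep = torch.zeros(12, dtype=torch.bool); keep[[0, 1, 5]] = True
# pruned, idx = skim_blocks(torch.randn(1, 384, 768), keep)
```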
Besides, we devise a new training paradigm that jointly trains the Block Skim\nobjective with the native QA objective, where extra optimization signals regarding the question position are given to the attention mechanism directly.\nIn our evaluation, we show BST improves the QA accuracy and F1 score on all the datasets and models we evaluated. Specifically, BERTlarge is accelerated for 1.6× without any accuracy loss and nearly 1.8× with less than 0.5% F1 score degradation. This paper contributes to the following 3 aspects.\n• We for the first time show that an attention map is effective for locating the answer position in the input sequence.\n• We propose Block Skim Transformer (BST), which leverages the attention mechanism to improve and accelerate transformer models on QA tasks. The key is to extract information from the attention mechanism during processing and intelligently predict what blocks to skim.\n• We evaluate BST on several Transformer-based model architectures and QA datasets and demonstrate BST ’s efficiency and generality." }, { "heading": "2 RELATED WORK", "text": "Recurrent Models with Skimming. The idea to skip or skim irrelevant section or tokens of input sequence has been studied in NLP models, especially recurrent neural networks (RNN) (Rumelhart et al., 1986) and long short-term memory network (LSTM) (Hochreiter & Schmidhuber, 1997). LSTM-Jump (Yu et al., 2017) uses the policy-gradient reinforcement learning method to train a LSTM model that decides how many time steps to jump at each state. They also use hyper-parameters to control the tokens before jump, maximum tokens to jump, and maximum number of jumping. Skim-RNN (Seo et al., 2018) dynamically decides the dimensionality and RNN model size to be used at next time step. In specific, they adopt two ”big” and ”small” RNN models and select the ”small” one for skimming. Structural-Jump-LSTM (Hansen et al., 2018) use two agents to decide whether jump a small step to next token or structurally to next punctuation. Skip-RNN (Campos et al., 2017) learns to skip state updates thus results in reduced computation graph size. The difference of BST to these works are two-fold. Firstly, the previous works make skimming decisions based on the hidden states or embeddings during processing. However, we are the first to analyze and utilize the attention relationship for skimming. Secondly, our work is based on Transformer model (Vaswani et al., 2017), which has outperformed the recurrent type models on most NLP tasks.\nTransformer with Input Reduction. On contrast to aforementioned recurrent models, in the processing of Transformer-based model, all input sequence tokens are calculated in parallel. As such, skimming can be regarded as reduction on sequence dimension. PoWER-BERT (Goyal et al., 2020) extracts input sequence token-wise during processing based on attention scores to each token. During the fine-tuning process for downstream tasks, Goyal et al. proposes soft-extract layer to train the model jointly. Funnel-Transformer (Dai et al., 2020) proposes a novel pyramid architecture with input sequence length dimension reduced gradually regardless of semantic clues. For tasks requiring full sequence length output, like masked language modeling and extractive question answering, Funnel-Transformer up-sample at the input dimension to recover. Universal Transformer (Dehghani et al., 2018) proposes a dynamic halting mechanism that determines the refinement steps for each token. 
Different from these works, BST utilizes attention information between question and token pairs and skims the input sequence at the block granularity accordingly.\nEfficient Transformer. There are also many attempts for designing efficient Transformers (Zhou et al., 2020; Wu et al., 2019; Tay et al., 2020). Well studied model compression methods for Transformer models include pruning (Guo et al., 2020), quantization (Wang & Zhang, 2020), distillation (Sanh et al., 2019), weight sharing. Plenty of works and efforts focus on dedicated efficient attention mechanism considering its quadratic complexity of sequence length (Kitaev et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020). BST is orthogonal to these techniques on the input dimension and therefore is compatible with them. We demonstrate this feasibility with the weight sharing model Albert (Lan et al., 2019) in Sec. 5." }, { "heading": "3 PROBLEM FORMULATION: IS ATTENTION EFFECTIVE FOR SKIM", "text": "Transformer. Transformer model with multi-head self-attention mechanism calculates hidden states for each position as a weighted sum of input hidden states. The weight vector is calculated by parameterized linear projection query Q and key K as eq. 1. Given a sequence of input embeddings, the output contextual embedding is composed by the input sequence with different attention at each position.\nAttention(Q,K) = So f tmax( QKT√\ndk ), (1)\nwhere Q,K are query and key matrix of input embeddings, dk is the length of a query or key vector. Multiple parallel groups of such attention weights, also referred to as attention heads, make it possible to attend to information at different positions.\nQA is one of the ultimate downstream tasks in the NLP. Given a text document and a question about the context, the answer is a contiguous span of the text. To predict the start and end position of the input context given a question, the embedding of each certain token is processed for all transformer layers in the encoder model. In many end-to-end open domain QA systems, information retrieval is the advance procedure at coarse-grained passage or paragraph level. Under the characteristic of extractive QA problem that answer spans are contiguous, our question is that whether we can utilize such idea at fine-grained block granularity during the processing of transformer. Is the attention weights effective for distinguish the answer blocks?\nTo answer the above question, we build a simple logistic regression model with attention matrix from each layer to predict whether an input sentence block contains the answer. The attention matrices are profiled from a BERTlarge SQuAD QA model and reduced to block level following Eq. 2 (Clark et al., 2019). The attention from block [a,b] attending to block [c,d] is aggregated to one value. And the attention between a block and the question sentence, special tokens \"[CLS]\" and \"[SEP]\" are used to denote the attending relation of the block. Such 6-dimensional vector from all attention heads in the layer are concatenated as the final classification feature. The result is shown in Fig. 1 with attention matrices from different layers. Simple logistic regression with hand crafted feature from attention weight achieves quite promising classification accuracy. 
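A rough sketch of this probing experiment is given below: each head's attention map is reduced to block level (cf. Eq. 2 in the next paragraph), six attending relations per head are concatenated, and a logistic regression is fit on the resulting features. The specific six relations chosen here (block versus question, "[CLS]" and "[SEP]", in both directions) are our reading of the text, and the data loading is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def block_attention(attn, src, dst):
    """Average attention mass sent from token span `src` to token span `dst`,
    normalized by the source span length (cf. Eq. 2). Spans are (start, end), inclusive."""
    a, b = src
    c, d = dst
    return attn[a:b + 1, c:d + 1].sum() / (b - a + 1)

def block_features(attn_heads, block, question, cls_pos, sep_pos):
    """Six values per head: block<->question, block<->[CLS], block<->[SEP], both directions."""
    cls_span, sep_span = (cls_pos, cls_pos), (sep_pos, sep_pos)
    feats = []
    for attn in attn_heads:                      # attn: (seq_len, seq_len) map of one head
        for other in (question, cls_span, sep_span):
            feats.append(block_attention(attn, block, other))
            feats.append(block_attention(attn, other, block))
    return np.array(feats)                       # shape: (num_heads * 6,)

# X: stacked block_features over many (example, block) pairs from one layer,
# y: 1 if the block contains the answer span, else 0.
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```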
This suggests that the attending relationship between question and targets is indeed capable for figuring out answer position.\nBlockAttention([a,b], [c,d]) = 1\nb−a\nb\n∑ i=a\nd\n∑ j=c Attention(i, j) (2)" }, { "heading": "4 BLOCK SKIMMING TRANSFORMER (BST)", "text": "" }, { "heading": "4.1 ARCHITECTURE OVERVIEW OF BST", "text": "We propose the Block Skimming Transformer (BST) model to accelerate the question answering task without degrading the answer accuracy. Unlike the conventional Transform-based model that uses all input tokens throughout the entire layers, our BST model accurately identifies the irrelevant contexts for the question in the early layers, and remove those irrelevant contexts in the following layers. As such, our model reduces the computation requirement and enables fast question answering.\nIn Sec. 3, we have shown that it is feasible to identify those tokens that are irrelevant to the question through a hand-crafted feature using the attentions relationship among tokens. However, using this approach could significantly hurt the question answering task accuracy as we show later. As such, we propose an end-to-end learnable feature extractor that captures the attention behavior better.\nFig. 2 shows the overall architecture of our BST model, where a layer is composed of a Transformer layer and a learnable Block Skim Module (BSM). The BSM adopts the convolutional neural network for feature extraction. The input is attention matrices of attention heads, which are treated as feature maps of multiple input channels. The output is a block-level mask that corresponds to the relevance of a block of input tokens to the question.\nIn each BSM module, we use convolution to collect local attending information and use pooling to reduce the size of feature maps. Two 3×3 convolution and one 1×1 convolution are connected with pooling operations intersected. For all the convolution operations, ReLU funcition (Hahnloser & Seung, 2001) is used as activation function. To locate the answer context blocks, we use a linear classification layer to calculate the score for each block. Also, two Batch Normalization layers (Ioffe & Szegedy, 2015) are inserted to improve the model accuracy.\nFormally, we denote the input sequence of a transformer layer as X = (x0,x1, . . . ,xn). Then the attention matrices of this layer are denoted as Attention(X). Given the attention output of a transformer layer, the kth block prediction result B is represented as B = BST (Attention(X)), where BST is the proposed architecture. The main functions of BST is expressed as Eq. 3.\nBST (Attention) = Linear(Conv1×1(Conv3×3(Pool(Conv3×3(Pool(Attention)))))) (3)" }, { "heading": "4.2 JOINT TRAINING OF QA AND BLOCK-SKIM CLASSIFIERS", "text": "There are two types of classifiers in our BST model, where the first is the original QA classifier at the last layer and the second is the block-level relevance classifier at each layer. We jointly train these classifiers so that the training objective is to minimize the sum of all classifiers’ losses.\nThe loss function of each block-level classifier is calculated as the cross entropy loss against the ground truth label whether a block contains answer tokens or not. Equation 4 gives the formal definition. The total loss of the block-level classifier LBST is the sum of all blocks that only contain passage tokens. The reason is that we only want to throw away blocks with irrelevant passage tokens instead of questions. 
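For reference, here is a minimal PyTorch sketch of the Block Skim Module of Eq. 3 and of the per-layer loss just described. The channel widths, pooling sizes, per-block reshaping, and the exact placement of the two BatchNorm layers are not fully specified in the text, so they are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlockSkimModule(nn.Module):
    """Conv3x3 -> Pool -> Conv3x3 -> Pool -> Conv1x1 -> Linear over attention maps (Eq. 3)."""

    def __init__(self, num_heads, seq_len=512, block_size=32, mid_channels=16):
        super().__init__()
        self.block_size = block_size
        self.conv1 = nn.Conv2d(num_heads, mid_channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(mid_channels)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(mid_channels)
        self.conv3 = nn.Conv2d(mid_channels, 1, kernel_size=1)
        self.pool = nn.MaxPool2d(2)
        # After two 2x poolings, each block corresponds to block_size//4 rows of length seq_len//4.
        self.classifier = nn.Linear((block_size // 4) * (seq_len // 4), 2)

    def forward(self, attention):                 # attention: (batch, heads, seq, seq)
        x = self.pool(F.relu(self.bn1(self.conv1(attention))))
        x = self.pool(F.relu(self.bn2(self.conv2(x))))
        x = F.relu(self.conv3(x)).squeeze(1)      # (batch, seq//4, seq//4)
        b, rows, _ = x.shape
        num_blocks = rows * 4 // self.block_size
        x = x.reshape(b, num_blocks, -1)          # group the rows belonging to each block
        return self.classifier(x)                 # (batch, num_blocks, 2) block logits

def block_skim_loss(block_logits, block_labels, passage_mask, beta=4.0):
    """Per-layer L_BST restricted to passage-only blocks, with the positive class
    re-weighted by beta (the alpha scaling over layers is applied elsewhere)."""
    weights = torch.tensor([1.0, beta], device=block_logits.device)
    losses = F.cross_entropy(block_logits.transpose(1, 2), block_labels,
                             weight=weights, reduction="none")
    return (losses * passage_mask).sum()
```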
Blocks that have question tokens or padding tokens are not used in the training process. To be more detailed, such blocks are pre-processed and dropped during the training process.\nL_{BST} = \sum_{b_i \in \{\text{passage blocks}\}} \mathrm{CELoss}(b_i, y_i), \quad y_i = \begin{cases} 1, & \text{block } i \text{ has answer tokens} \\ 0, & \text{block } i \text{ has no answer tokens} \end{cases} \quad (4)\nTo calculate the final total loss Ltotal, we introduce two hyper-parameters in Equation 5. We first use the hyper-parameter α so that different models and settings could adjust the ratio between the QA loss and the block-level relevance classifier loss. We then use the other hyper-parameter β to balance the loss from positive and negative relevance blocks, because there are typically many more blocks that contain no answer tokens (negative blocks) than blocks that do contain answer tokens (positive blocks). We explain how to tune those hyper-parameters for different models and settings later.\nL_{total} = L_{QA} + \alpha \sum_{i\text{-th layer}} \left( \beta L_{BST}^{i,y=1} + L_{BST}^{i,y=0} \right) \quad (5)\nAlthough we add the block-level relevance classification loss in the joint training, we do not actually throw away any blocks, because doing so can skip answer blocks and make the QA task training unstable. In this sense, the block-level relevance classification loss can be viewed as a regularization method for the QA training, as we force attention heads to better distinguish the answer blocks from the non-answer blocks. As we show in the experiment, this regularization effect leads to accuracy improvement for the QA task." }, { "heading": "4.3 USING BST FOR QA", "text": "We now describe how to use the BST model to accelerate the QA task. In the above joint training process, we add the BSM module in every layer. However, we only augment a specific layer with the BSM module during inference to save computation and avoid heavy changes to the underlying Transformer model. As such, the layer index for augmenting is a hyper-parameter in our model.\nOnce the BSM-augmented layer is chosen, we split the input sequence by the block granularity, which is another hyper-parameter in our model. The model skips a set of blocks according to the BSM module results for the following layers. It should be noted that the BST training process does not throw away any blocks, because if a relevant block with answer tokens is rejected, the training of the original QA task is confused and becomes unstable.\nTo maintain compatibility with the original Transformer model, we forward the skipped blocks directly to the last layer for the QA classifier. With those design features, BST works as an add-on component to the original Transformer model and is compatible with many Transformer variant models as well as model compression methods. Specifically, we will demonstrate that BST works well with the Transformer-based Roberta (Liu et al., 2019), which has a different pre-training objective and sequence encoding, and Albert (Lan et al., 2019), which shares weights among layers for a reduced model size.\nWe provide an analytical model to demonstrate the speedup potential of BST. Suppose that we insert the BSM module at layer l out of the total L layers, and a portion k of the blocks remain for the following layers. The performance speedup is formulated by Equation 6 if we ignore the computation overhead in the BSM module. In fact, the computation of a single BSM module is much smaller than that of the Transformer layers. For example, if k = 1/3 blocks remain after l/L = 1/3 layers, we achieve a 1.8× ideal speedup.
Similarly, if k = 1/4 blocks remain after l/L = 1/4 layers, the ideal speedup is 2.29×.\nspeedup = L ·N ·Tlayer\nl ·N ·Tlayer +(L− l) ·N · k ·Tlayer = 1 1− (1− l/L)(1− k)\n(6)" }, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Dataset. We evaluate our method on 5 extractive QA datasets, including SQuAD 1.1 (Rajpurkar et al., 2016), Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), NewsQA (Trischler et al., 2016) and HotpotQA (Yang et al., 2018). The diversity of these datasets such as various passage lengths and different document sources lets us evaluate the general applicability of the proposed BST method. We follow the setting of BERT model to use the structure of Transformer encoder and a linear classification layer for all the datasets.\nModel. As we mentioned earlier, BST works as a plugin module to the oracle Transformer model, and therefore applicable to Transformer-based models. To illustrate this point, we apply our method to three different models including BERT, Roberta (Liu et al., 2019) with a different pre-training objective, and Albert (Lan et al., 2019) with parameter sharing layers. For all three models, we evaluate the base setting with 12 heads and 12 layers and the large setting with 24 layers and 16 heads as described in prior work (Devlin et al., 2018).\nTraining Setting. We implement the proposed method based on open-sourced library from Wolf et al. (2019). For each baseline model, we use the released pre-trained checkpoints 1. We follow the training setting used by Devlin et al. (2018) and Liu et al. (2019) to perform the fine-tuning on the above extractive QA datasets. We initialize the learning rate to 3e−5 for BERT and Roberta and 5e−5 for Albert with a linear learning rate scheduler. For SQuAD dataset, we apply batch size 16 and maximum sequence length 384. And for the other datasets, we apply batch size 32 and maximum sequence length 512. We perform all the experiments reported with random seed 42. We train a baseline model and BST model with the same setting for two epochs and report accuracies from MRQA task benchmark for comparison. We use four V100 GPUs with 32 GB memory for training and report performance speedup on multiple different hardware platforms.\nFor the following experiments, we use the block size 32 unless explicitly mentioned. We set the hyper-parameter β to 4 for all experiments and α to 1 except Albert. We use the α value of 0.05 for Albert. In the Albert model, the parameters of transformer layers are shared but BST modules\n1We use pre-trained language model checkpoints released from https://huggingface.co/models\nin our method do not share parameters. As such, we decrease the loss from BST to prevent model over-fitting and its impact on the QA task parameters.\nPerformance Evaluation. We measure the performance speedup of our method on the 2.20GHz Intel Xeon(R) Silver 4210 CPU with the batch size of one. On the GPU, the batch size of one could not fully utilize the GPU computation resource so the inference time is bottlenecked at the memory. Our evaluation scenario closely resembles prior work (Wu et al., 2019) that targets the mobile application domain. We evaluate BST with different layers and prediction thresholds of BST classifier to explore the trade-off between performance speedup and model accuracy. For example, a lower prediction threshold could lead to more skipped blocks, which means a better performance speedup. 
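For reference, the ideal speedup of Equation 6 can be evaluated directly; a short sketch reproducing the two examples above:

```python
def ideal_speedup(layer_fraction, kept_fraction):
    """Equation 6: prune to a fraction `kept_fraction` of blocks after `layer_fraction` of layers,
    ignoring the (small) overhead of the BSM module itself."""
    return 1.0 / (1.0 - (1.0 - layer_fraction) * (1.0 - kept_fraction))

print(round(ideal_speedup(1 / 3, 1 / 3), 2))   # 1.8
print(round(ideal_speedup(1 / 4, 1 / 4), 2))   # 2.29
```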
On the other hand, it also increases the chance of skipping answer blocks, which hurts the QA task accuracy." }, { "heading": "5.2 BST AS A REGULARIZATION METHOD", "text": "We first evaluate BST model as a regularization method to improve the QA task accuracy. Specifically, we compare the accuracy of three baseline models and their BST variants. In their BST versions, the BSM modules only participate the training process, and are removed in the inference task.\nThe upper half of Tbl. 1 shows the accuracy comparison on SQuAD dataset. By only changing the training process, BST improves the extractive QA accuracy for all baseline models. On average, BST exceeds the baseline by 0.32% in exact match and by 0.19% in F1 score. The accuracy improvement of BST is generally greater on large models. We attribute this to a stronger regularization effect for larger models. The results show the wide applicability of our method to different models.\nTbl. 1 also demonstrates the BST classifier F1 score trained jointly (but not used in this setting) with the baseline models. For simplicity, we only show the results of layer 4 and the middle layer (layer 6 for base and 12 for large model). On average, the block-level relevance classifier has a notably high F1 score even at early layers (averaged 81.38%) and even higher scores at the middle layer.\nThe lower half of Tbl. 1 shows BERTlarge results on multiple QA datasets. BST outperforms the baseline training objective on all datasets evaluated and exceeds with 0.52% exact match and 0.33% F1 score on average. The results show the wide applicability of our method to different datasets with varying difficulty and complexity. Meanwhile, we also observe a modest correlation between the block-level relevance classifier and the QA task. In other words, the BST classifier tends to be higher on datasets with a higher QA accuracy except for TriviaQA dataset." }, { "heading": "5.3 QA TASK SPEEDUP WITH BST", "text": "We now demonstrate the BST’s ability to accelerate the QA task. Fig. 3 demonstrates the performance speedup against F1 score evaluated with BERTbase and BERTlarge model on SQuAD dataset. By tuning the prediction threshold of the BST classifier, we can trade-off between acceleration and accuracy loss. Here we evaluate the BST classifier with 0,0.5,0.9,0.99 prediction threshold with\nclassifier at layer 4 and the middle layer respectively. On BERTbase, BST achieves 1.38× speedup with the same accuracy to the baseline. With a more aggressive skipping strategy, 1.4× speedup is obtained with minor accuracy loss (less than 1.5%). On BERTlarge, BST achieves 1.6× speedup with minor accuracy improvement and nearly 1.8× speedup with less than 0.5% F1 score degradation. Generally, the specific layer for inserting the BSM module can be determined by hyper-parameter search according to Eq. 6. As shown in Fig. 3, skipping the irrelevant blocks at layer 4 tends to be better than that at the middle layer, which is layer 6 for BERTbase and layer 12 for BERTlarge. This is because more computation is reduced when skipping at earlier layers and the BST classifier already has a quite good prediction accuracy at early layers." }, { "heading": "5.4 ABLATION STUDY", "text": "We compare our BST method with a series ablation of design components to study their individual effect. The experiments are performed based on the same setting as Sec. 5.1. We perform the experiments described in Tbl. 
2, which has also the detailed results, and summarize the key finds as follows.\n• (3) Instead of joint training as described in Sec. 4.2, we perform a two-step training. We first perform the fine-tuning for the QA task. We then perform the BSM module training with the baseline QA model frozen. In other words, we only use the BST objective and only update the weights in the BSM modules. Therefore, the QA accuracy remains the same as the baseline model, which is lower than the joint training (id 3). Meanwhile, the BST classifier also has a lower accuracy than the joint training especially at layer 6.\n• (4) Instead of BSM module, we use the hand-crafted feature in Sec. 3. The resulted block-level relevance classification accuracy is considerably lower than our learned BST model.\n• (5) Instead of adding BST module to all layers, we only deploy it into one layer. The experiment result shows that it is beneficial for the model to have BST loss added to every layer.\n• (6) We skim blocks during the joint QA-BST training process. Because the mis-skimmed blocks may confuse the QA optimization, this training strategy results in considerable accuracy loss.\n• (7-11) We evaluate the accuracy with different block sizes. Specifically, when the block size is 1, it is equivalent to skim at the token granularity. Our experimental result shows that the accuracy of BST classifier is better when the block size is larger. On the other hand, a larger block size also leads to less number of blocks and therefore the performance speedup becomes limited on the studied datasets. To this end, we choose the block size of 32 as a design sweet spot." }, { "heading": "6 CONCLUSION", "text": "In this work, we provide a plug-and-play module BST to Transformer and its variants for efficient QA processing. Our empirical study shows that the attention mechanism in the form of a weighted feature map can provide instructive information for locating the answer span. In fact, we find that an attention\nmap can distinguish between answers and other tokens. Leveraging this insight, we propose to learn the attention in a supervised manner. In effect, BST terminates irrelevant blocks at early layers, significantly reducing the computations. Besides, the proposed BST training objective provides attention mechanism with extra learning signal and improves QA accuracy on all datasets and models we evaluated. With the use of BST module, such distinction is strengthened in a supervised fashion. This idea may be also applicable to other tasks and architectures." } ]
2020
null
SP:977fc8d3bb7266d1beaecc609a91970783347ed3
[ "The authors discuss how a classifier’s performance over the initial class sample can be used to extrapolate its expected accuracy on a larger, unobserved set of classes by means of the dual of the ROC function, obtained by swapping the roles of classes and samples. Grounded on this function, the authors develop a novel ANN approach that learns to estimate the accuracy of classifiers on arbitrarily large sets of classes. The effectiveness of the approach is demonstrated on a suite of benchmark datasets, both synthetic and real-world." ]
Multiclass classifiers are often designed and evaluated only on a sample from the classes on which they will eventually be applied. Hence, their final accuracy remains unknown. In this work we study how a classifier’s performance over the initial class sample can be used to extrapolate its expected accuracy on a larger, unobserved set of classes. For this, we define a measure of separation between correct and incorrect classes that is independent of the number of classes: the reversed ROC (rROC), which is obtained by replacing the roles of classes and data-points in the common ROC. We show that the classification accuracy is a function of the rROC in multiclass classifiers, for which the learned representation of data from the initial class sample remains unchanged when new classes are added. Using these results we formulate a robust neural-network-based algorithm, CleaneX, which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes. Unlike previous methods, our method uses both the observed accuracies of the classifier and densities of classification scores, and therefore achieves remarkably better predictions than current state-of-the-art methods on both simulations and real datasets of object detection, face recognition, and brain decoding.
[ { "affiliations": [], "name": "Yuli Slavutsky" }, { "affiliations": [], "name": "Yuval Benjamini" } ]
[ { "authors": [ "Felix Abramovich", "Marianna Pensky" ], "title": "Classification with many classes: challenges and pluses", "venue": "Journal of Multivariate Analysis,", "year": 2019 }, { "authors": [ "Brandon Amos", "Bartosz Ludwiczuk", "Mahadev Satyanarayanan" ], "title": "Openface: A general-purpose face recognition library with mobile applications", "venue": "CMU School of Computer Science,", "year": 2016 }, { "authors": [ "Bhaskar Bhattacharya", "Gareth Hughes" ], "title": "On shape properties of the receiver operating characteristic curve", "venue": "Statistics & Probability Letters,", "year": 2015 }, { "authors": [ "Ricardo Cao", "Antonio Cuevas", "Wensceslao González Manteiga" ], "title": "A comparative study of several smoothing methods in density estimation", "venue": "Computational Statistics & Data Analysis,", "year": 1994 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Tom Fawcett" ], "title": "An introduction to roc analysis", "venue": "Pattern recognition letters,", "year": 2006 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2006 }, { "authors": [ "Luzia Gonçalves", "Ana Subtil", "M Rosário Oliveira", "P d Bermudez" ], "title": "Roc curve estimation: An overview", "venue": "REVSTAT–Statistical Journal,", "year": 2014 }, { "authors": [ "Gary B. 
Huang", "Manu Ramesh", "Tamara Berg", "Erik Learned-Miller" ], "title": "Labeled faces in the wild: A database for studying face recognition in unconstrained environments", "venue": "Technical Report 07-49,", "year": 2007 }, { "authors": [ "Vidit Jain", "Erik Learned-Miller" ], "title": "Fddb: A benchmark for face detection in unconstrained settings", "venue": "Technical report, UMass Amherst Technical Report,", "year": 2010 }, { "authors": [ "Kendrick N Kay", "Thomas Naselaris", "Ryan J Prenger", "Jack L Gallant" ], "title": "Identifying natural images from human brain activity", "venue": null, "year": 2008 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML deep learning workshop,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Vitaly Kuznetsov", "Mehryar Mohri", "Umar Syed" ], "title": "Multi-class deep boosting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Yunwen Lei", "Urun Dogan", "Alexander Binder", "Marius Kloft" ], "title": "Multi-class svms: From tighter data-dependent generalization bounds to novel algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Jian Li", "Yong Liu", "Rong Yin", "Hua Zhang", "Lizhong Ding", "Weiping Wang" ], "title": "Multi-class learning: From theory to algorithm", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Jingzhou Liu", "Wei-Cheng Chang", "Yuexin Wu", "Yiming Yang" ], "title": "Deep learning for extreme multilabel text classification", "venue": "In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "title": "Sphereface: Deep hypersphere embedding for face recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Malik Magdon-Ismail", "Amir F Atiya" ], "title": "Neural networks for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "Thomas Naselaris", "Kendrick N Kay", "Shinji Nishimoto", "Jack L Gallant" ], "title": "Encoding and decoding in fmri", "venue": null, "year": 2011 }, { "authors": [ "Maxime Oquab", "Leon Bottou", "Ivan Laptev", "Josef Sivic" ], "title": "Learning and transferring mid-level image representations using convolutional neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2010 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng 
Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Katja Seeliger", "Matthias Fritsche", "Umut Güçlü", "Sanne Schoenmakers", "J-M Schoffelen", "SE Bosch", "MAJ Van Gerven" ], "title": "Convolutional neural network-based encoding and decoding of visual object recognition in space and time", "venue": null, "year": 2018 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding machine learning: From theory to algorithms", "venue": "Cambridge university press,", "year": 2014 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochelle" ], "title": "A deep and tractable density estimator", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Charles Zheng", "Rakesh Achanta", "Yuval Benjamini" ], "title": "Extrapolating expected accuracies for large multi-class problems", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Charles Y. Zheng", "Yuval Benjamini" ], "title": "Estimating mutual information in high dimensions via classification", "venue": "error. arXiv e-prints, art", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Advances in machine learning and representation learning led to automatic systems that can identify an individual class from very large candidate sets. Examples are abundant in visual object recognition (Russakovsky et al., 2015; Simonyan & Zisserman, 2014), face identification (Liu et al., 2017b), and brain-machine interfaces (Naselaris et al., 2011; Seeliger et al., 2018). In all of these domains, the possible set of classes is much larger than those observed at training or testing.\nAcquiring and curating data is often the most expensive component in developing new recognition systems. A practitioner would prefer knowing early in the modeling process whether the datacollection apparatus and the classification algorithm are expected to meet the required accuracy levels. In large multi-class problems, the pilot data may contain considerably fewer classes than would be found when the system is deployed (consider, for example, the case in which researchers develop a face recognition system that is planned to be used on 10,000 people, but can only collect 1,000 in the initial development phase). This increase in the number of classes changes the difficulty of the classification problem and therefore the expected accuracy. The magnitude of change varies depending on the classification algorithm and the interactions between the classes: usually classification accuracy will deteriorate as the number of classes increases, but this deterioration varies across classifiers and data-distributions. For pilot experiments to work, theory and algorithms are needed to estimate how accuracy of multi-class classifiers is expected to change when the number of classes grows. In this work, we develop a prediction algorithm that observes the classification results for a small set of classes, and predicts the accuracy on larger class sets.\nIn large multiclass classification tasks, a representation is often learned on a set of k1 classes, whereas the classifier is eventually used on a new larger class set. On the larger set, classification\ncan be performed by applying simple procedures such as measuring the distances in an embedding space between a new example x ∈ X and labeled examples associated with the classes yi ∈ Y . Such classifiers, where the score assigned to a data point x to belong to a class y is independent of the other classes, are defined as marginal classifiers (Zheng et al., 2018). Their performance on the larger set describes how robust the learned representation is. Examples of classifiers that are marginal when used on a larger class set include siamese neural networks (Koch et al., 2015), oneshot learning (Fei-Fei et al., 2006) and approaches that directly optimize the embedding (Schroff et al., 2015). Our goal in this work is to estimate how well a given marginal classifier will perform on a large unobserved set of k2 classes, based on its performance on a smaller set of k1 classes.\nRecent works (Zheng & Benjamini, 2016; Zheng et al., 2018) set a probabilistic model for rigorously studying this problem, assuming that the k1 available classes are sampled from the same distribution as the larger set of k2 classes. Following the framework they propose, we assume that the sets of k1 and k2 classes on which the classifier is trained and evaluated are sampled independently from an infinite continuous set Y according to Yi ∼ PY (y), and for each class, r data points are sampled independently from X according to the conditional distribution PX|Y (x | y). 
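To make this sampling framework concrete, the following toy simulation (entirely our own construction, with Gaussian class centroids and a nearest-centroid marginal classifier) illustrates how the empirical accuracy of such a classifier typically decays as the number of sampled classes k grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_classes(k, dim=8, spread=1.0):
    """Class 'identities' sampled from P_Y: here, Gaussian centroids in R^dim."""
    return rng.normal(scale=spread, size=(k, dim))

def accuracy_at_k(k, r=20, dim=8, noise=1.0):
    """Empirical accuracy of a marginal (nearest-centroid) classifier on k sampled classes."""
    centroids = sample_classes(k, dim)
    correct = 0
    for c in range(k):
        x = centroids[c] + rng.normal(scale=noise, size=(r, dim))  # r points from P_{X|Y=c}
        # Marginal score S_y(x): negative squared distance to each class centroid.
        scores = -((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        correct += (scores.argmax(axis=1) == c).sum()
    return correct / (k * r)

for k in (2, 10, 100, 1000):
    print(k, round(accuracy_at_k(k), 3))   # accuracy shrinks as more classes compete
```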
In their work, the authors presented two methods for predicting the expected accuracy, one of them originally due to Kay et al. (2008). We cover these methods in Section 2.\nAs a first contribution of this work (Section 3), we provide a theoretical analysis that connects the accuracy of marginal classifiers to a variant of the receiver operating characteristic (ROC) curve, which is achieved by reversing the roles of classes and data points in the common ROC. We show that the reversed ROC (rROC) measures how well a classifier’s learned representation separates the correct from the incorrect classes of a given data point. We then prove that the accuracy of marginal classifiers is a function of the rROC, allowing the use of well researched ROC estimation methods (Gonçalves et al., 2014; Bhattacharya & Hughes, 2015) to predict the expected accuracy. Furthermore, the reversed area under the curve (rAUC) equals the expected accuracy of a binary classifier, where the expectation is taken over all randomly selected pairs of classes.\nWe use our results regarding the rROC to provide our second contribution (Section 4): CleaneX (Classification Expected Accuracy Neural EXtrapolation), a new neural-network-based method for predicting the expected accuracy of a given classifier on an arbitrarily large set of classes1. CleaneX differs from previous methods by using both the raw classification scores and the observed classification accuracies for different class-set sizes to calibrate its predictions. In Section 5 we verify the performance of CleaneX on simulations and real data-sets. We find it achieves better overall predictions of the expected accuracy, and very few “large” errors, compared to its competitors. We discuss the implications, and how the method can be used by practitioners, in Section 6." }, { "heading": "1.1 PRELIMINARIES AND NOTATION", "text": "In this work x are data points, y are classes, and when referred to as random variables they are denoted by X,Y respectively. We denote by y(x) the correct class of x, and use y∗ when x is implicitly understood. Similarly, we denote by y′ an incorrect class of x.\nWe assume that for each x and y the classifier h assigns a score Sy(x), such that the predicted class of x is arg maxy Sy(x). On a given dataset of k classes, {y1, . . . , yk}, the accuracy of the trained classifier h is the probability that it assigns the highest score to the correct class\nA(y1, . . . , yk) = PX(Sy∗(x) ≥ maxki=1Syi(x)) (1) where PX is the distribution of the data points x in the sample of classes. Since r points are sampled from each class, PX assumes a uniform distribution over the classes within the given sample.\nAn important quantity for a data point x is the probability of the correct class y∗ to outscore a randomly chosen incorrect class Y ′ ∼ PY |Y 6=y∗ , that is Cx = PY ′(Sy∗(x) ≥ Sy′(x)). This is the cumulative distribution function of the incorrect scores, evaluated at the value of the correct score.\nWe denote the expected accuracy over all possible subsets of k classes from Y by Ek[A] and its estimator by Êk[A]. We refer to the curve of Ek[A] at different values of k ≥ 2 as the accuracy curve. 
Given a sample of K classes, the average accuracy over all subsets of k ≤ K classes from the sample is denoted by ĀKk .\n1Code is publicly available at: https://github.com/YuliSl/CleaneX" }, { "heading": "2 RELATED WORK", "text": "Learning theory provides bounds of sample complexity in multiclass classification that depend on the number of classes (Shalev-Shwartz & Ben-David, 2014), and the extension to large mutliclass problems is a topic of much interest (Kuznetsov et al., 2014; Lei et al., 2015; Li et al., 2018). However, these bounds cannot be used to estimate the expected accuracy. Generalization to out-of-label accuracy includes the work of Jain & Learned-Miller (2010). The generalization of classifiers from datasets with few classes to larger class sets include those of Oquab et al. (2014) and Griffin et al. (2007), and are closely related to transfer learning (Pan et al., 2010) and extreme classification (Liu et al., 2017a). More specific works include that of Abramovich & Pensky (2019), which provides lower and upper bounds for the distance between classes that is required in order to achieve a given accuracy.\nKay et al. (2008), as adapted by Zheng et al. (2018), propose to estimate the accuracy of a marginal classifier on a given set of k classes by averaging over x the probability that its correct class outscores a single random incorrect class, raised to the power of k − 1 (the number of incorrect classes in the sample), that is Ek[A] = EX [PY ′(Sy∗(x) ≥ Sy′(x))k−1] = Ex[Ck−1x ]. (2) Therefore, the expected accuracy can be predicted by estimating the values of Cx on the available data. To do so, the authors propose using kernel density estimation (KDE) choosing the bandwidth with pseudo-likelihood cross-validation (Cao et al., 1994).\nZheng et al. (2018) define a discriminability function\nD(u) = PX (PY ′ (Sy∗(x) > Sy′(x)) ≤ u) , (3) and show that for marginal classifiers, the expected accuracy at k classes is given by\nEk[A] = 1− (k − 1) ∫ 1\n0\nD(u)uk−2du. (4)\nThe authors assume a non-parametric regression model with pre-chosen basis functions bj , so that D(u) = ∑ j βjbj . To obtain β̂ the authors minimize the mean squared error (MSE) between the resulting estimation Êk[A] and the observed accuracies Āk1k ." }, { "heading": "3 REVERSED ROC", "text": "In this section we show that the expected accuracy, Ek[A], can be better understood by studying an ROC-like curve. To do so, we first recall the definition of the common ROC: for two classes in a setting where one class is considered as the positive class and the other as the negative one, the ROC is defined as the graph of the true-positive rate (TPR) against the false-positive rate (FPR) (Fawcett, 2006). The common ROC curve represents the separability that a classifier h achieves between data points of the positive class and those of the negative one. At a working point in which the FPR of the classifier is u, we have ROC (u) = TPR ( FPR−1 (u) ) .\nIn a multiclass setting, we can define ROCy for each class y by considering y as the positive class, and the union of all other classes as the negative one. An adaptation of the ROC for this setting can be defined as the expectation of ROCy over the classes, that is ROC(u) = ∫ Y ROCy (u) dP (y). In terms of classification scores, we have TPRy(t) = PX(Sy(x) > t | y(x) = y), FPRy(t) = PX(Sy(x) > t | y(x) 6= y) and thus FPR−1y (u) = supt {PX(Sy(x) > t | y(x) 6= y) ≥ u}. 
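A short sketch of this class-averaged curve, assuming scikit-learn is available and reusing the scores and labels arrays from the previous sketch (grid is a vector of FPR values at which the curve is evaluated):

import numpy as np
from sklearn.metrics import roc_curve

def class_averaged_roc(scores, labels, grid):
    # ROC(u): average over classes y of ROC_y(u), evaluated at the FPR values in grid
    curves = []
    for y in range(scores.shape[1]):
        fpr, tpr, _ = roc_curve(labels == y, scores[:, y])  # class y as positive, all other classes as negative
        curves.append(np.interp(grid, fpr, tpr))
    return np.mean(curves, axis=0)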
Here, we single out each time one of the classes y and compare the score of the data points that belong to this class with the score of those that do not. However, when the number of classes is large, we could instead single out a data point x and compare the score that it gets for the correct class with the scores for the incorrect ones. This reverse view is formalized in the following definition, where we exchange the roles of data points x and classes y, to obtain the reversed ROC: Definition 1. Given a data point x, its corresponding reversed true-positive rate is\nrTPRx (t) = { 1 Sy∗(x) > t\n0 Sy∗(x) ≤ t (5)\nThe reversed false-positive rate is\nrFPRx (t) = PY ′ (Sy′(x) > t) (6)\nand accordingly rFPR−1x (u) = sup\nt {PY ′ (Sy′(x) > t) ≥ u} . (7)\nConsequently, the reversed ROC is\nrROCx(u) = rTPRx ( rFPR−1y (u) ) = { 1 Sy∗(x) > supt {PY ′ (Sy′(x) > t) ≥ u} 0 otherwise\n(8)\nand the average reversed ROC is2 rROC (u) = ∫ X rROCx (u) dP (x). (9)\nSince PY ′ (Sy′(x) > t) is a decreasing function of t, it can be seen that rROCx(u) = 1 iff u > PY ′ (Sy′(x) > Sy∗) = 1−Cx (see Proposition 1 in Appendix A). However, even though rROCx is a step function, the rROC resembles a common ROC curve, as illustrated in Figure 1." }, { "heading": "3.1 THE REVERSED ROC AND CLASSIFICATION ACCURACY", "text": "In what follows, we show that the classification accuracy can be expressed using the average reversed ROC curve. We assume a marginal classifier which assigns scores without ties, that is for all x and all yi 6= yj we have Syi(x) 6= Syj (x) almost surely. In such cases the following theorem holds: Theorem 1. The expected classification accuracy at k classes is\nEk[A] = 1− (k − 1) ∫ 1\n0\n( 1− rROC (1− u) ) uk−2du. (10)\n2Note that dP (x) in Equation equation 9 assumes a uniform distribution with respect to a given sample of classes {y1, . . . , yk} and their corresponding data points.\nTo prove this theorem we show that\n1− rROC(1− u) = PX (PY ′ (Sy∗(x) > Sy′(x)) ≤ u) = D(u) (11) and the rest follows immediately from the results of Zheng et al. (2018) (see Equation 4). We provide the detailed proof in Appendix A.\nNow, using the properties of the rROC we get Ek[A] = 1− (k − 1) ∫ 1\n0\n( 1− rROC (1− u) ) uk−2du\n= ∫ 1 0 (∫ X rROCx(1− u)dP (x) ) (k − 1)uk−2du\n= ∫ X ∫ Cx 0 (k − 1)uk−2du dP (x) = ∫ X Ck−1x dP (x) = Ex[Ck−1x ]. (12)\nTherefore, in order to predict the expected accuracy it suffices to estimate the values of Cx. A consequence of the result above is that the expressions that Kay et al. (2008) and Zheng et al. (2018) estimate (Equations 2 and 4 respectively) are in fact the same. Nevertheless, their estimation methods differ significantly.\nFinally, we note that the theoretical connection between the reversed ROC and the evaluation of classification models extends beyond this particular work. For example, by plugging k = 2 into Theorem 1 it immediately follows that the area under the reversed ROC curve (rAUC) is the expected accuracy of two classes: rAUC := ∫ 1 0 rROC (u) du = E2[A]." }, { "heading": "4 EXPECTED ACCURACY PREDICTION", "text": "In this section we present a new algorithm, CleaneX, for the prediction of the expected accuracy of a classifier. The algorithm is based on a neural network that estimates the values of Cx using the classifier’s scores on data from the available k1 classes. 
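Equation 12 also gives the extrapolation step itself in closed form: once per-point estimates of Cx are available, the whole accuracy curve follows by averaging powers of them. A minimal sketch, where c is an array of estimated Cx values:

import numpy as np

def expected_accuracy_curve(c, ks):
    # Equation 12: E_k[A] = E_x[C_x^(k-1)], estimated by the sample mean over data points
    c = np.asarray(c, dtype=float)
    return np.array([np.mean(c ** (k - 1)) for k in ks])

# Consistency check implied by Theorem 1: the rAUC equals the expected two-class accuracy,
# so expected_accuracy_curve(c, [2])[0] coincides with the mean of c.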
These estimations are then used, based on the results of the previous section, to predict the classification accuracy at k2 > k1 classes.\nThe general task of estimating densities using neural networks has been widely addressed (MagdonIsmail & Atiya, 1999; Papamakarios et al., 2017; Dinh et al., 2016; Uria et al., 2014). However, in our case, we need to estimate the cumulative distribution only at the value of the correct score Sy∗ (x). This is an easier task to perform and it allows us to design an estimation technique that learns to estimate the CDF in a supervised manner. We use a neural network f(· ; θ) : Rk1 → R whose input is a vector of the correct score followed by the k1 − 1 incorrect scores, and its output is Ĉx. Once Ĉx values are estimated for each x, the expected accuracy for each 2 ≤ k ≤ k1 can be estimated according to Equation 12: Êk[A] = 1N ∑ x Ĉ k−1 x , where N = rk1. In each iteration, the network’s weights θ are optimized by minimizing the average over 2 ≤ k ≤ k1 square distances from the observed accuracies Āk1k : 1k1−1 ∑k1 k=2 ( 1 N ∑ x Ĉ k−1 x − Āk1k )2 . After the weights have converged, the estimated expected accuracy at k2 > k1 classes can be calculated by Êk2 [A] = 1N ∑ x Ĉ k2−1 x . Note that regardless of the target k2, the network’s input size is k1. The proposed method is described in detail in Algorithm 1, below.\nUnlike KDE and non-parametric regression, our method does not require a choice of basis functions or kernels. It does require choosing the network architecture, but as will be seen in the next sections, in all of our experiments we use the same architecture and hyperparameters, indicating that our algorithm requires very little tuning when applied to new classification problems.\nAlthough the neural network allows more flexibility compared to the non-parametric regression, the key difference between the method we propose and previous ones is that CleaneX combines two sources of information: the classification scores (which are used as the network’s inputs) and the average accuracies of the available classes (which are used in the minimization objective). Unlike the KDE and regression based methods which use only one type of information, we estimate the CDF of the incorrect scores evaluated at the correct score, Cx, but calibrate the estimations to produce accuracy curves that fit the observed ones. We do so by exploiting the mechanism of neural networks to iteratively optimize an objective function over the whole dataset.\nAlgorithm 1: CleaneX Input : The classifier’s score function S, a training set of N examples x from the set of k1\navailable classes, target number of classes k2; a feedforward neural network f(·; θ), initial network weights θ0, number of training iterations J , learning rate η.\nOutput: Estimated accuracy at k2 classes. for k = 2, . . . , k1 do\nCompute Āk1k end for each x in training set do\nSet S′(x)← ( Sy′1(x), . . . , Sy′k1−1 (x) ) Sort S′(x) Set Sx ← (Sy∗(x), S′(x))\nend for j = 1, . . . , J do\nfor each x do Set Ĉx ← f(Sx; θj) end Update network parameters performing a gradient descent step:\nθj ← θj−1 − η∇θ (\n1 k1−1 ∑k1 k=2 ( 1 N ∑ x Ĉ k−1 x − Āk1k )2) end Return Êk2 [A] = 1N ∑ x Ĉ k2−1 x" }, { "heading": "5 SIMULATIONS AND EXPERIMENTS", "text": "Here we compare CleaneX to the KDE method (Kay et al., 2008) and to the non-parametric regression (Zheng et al., 2018) on simulated datasets that allow to examine under which conditions each method performs better, and on real-world datasets. 
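For reference, the core of Algorithm 1 can be condensed into a short PyTorch-style sketch. The released implementation uses TensorFlow, so the code below is only an illustration: the tensor names S and A_bar, the exact placement of activations, and the full-batch update are assumptions that follow the description in Section 4 rather than the authors' code.

import torch
import torch.nn as nn

# S: tensor of shape (N, k1); S[:, 0] is the correct score, S[:, 1:] the sorted incorrect scores
# A_bar: tensor of shape (k1 - 1,) with the observed accuracies for k = 2..k1

def train_cleanex(S, A_bar, k1, k2, iters=10000, lr=1e-4):
    f = nn.Sequential(nn.Linear(k1, 512), nn.ReLU(),
                      nn.Linear(512, 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    ks = torch.arange(2, k1 + 1, dtype=S.dtype)
    for _ in range(iters):
        c_hat = f(S).squeeze(1)                               # estimated C_x values in (0, 1)
        curve = (c_hat.unsqueeze(1) ** (ks - 1)).mean(dim=0)  # estimated E_k[A] for k = 2..k1
        loss = ((curve - A_bar) ** 2).mean()                  # calibrate against the observed accuracies
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (f(S).squeeze(1) ** (k2 - 1)).mean().item()    # extrapolated accuracy at k2 classes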
Our goal is to predict the expected accuracy Ek[A] ∈ [0, 1] for different values of k. Therefore, for each method we compute the estimated expected accuracies Êk[A] for 2 ≤ k ≤ k2 and measure the success of the prediction using the root of the mean squared error (RMSE): ( 1 k2−1 ∑k2 k=2(Êk[A]− Āk1k )2 )1/2 .\nFor our method, we use in all the experiments an identical feed-forward neural network with two hidden layers of sizes 512 and 128, a rectified linear activation between the layers, and a sigmoid applied on the output. We train the network according to Algorithm 1 for J = 10, 000 iterations with learning rate of η = 10−4 using Adam optimizer (Kingma & Ba, 2014). For the regression based method we choose a radial basis and for the KDE based method a normal kernel, as recommended by the authors. The technical implementation details are provided in Appendix C.\nA naive calculation of ĀKk requires averaging the obtained accuracy over all possible subsets of k classes from K classes. However, Zheng et al. (2018) showed that for marginal classifiers ĀKk =\n1 (Kk) 1 rk\n∑ x ( Rx k−1 ) , where the sum is taken over all data points x from the K classes, and Rx is the\nnumber of incorrect classes that the correct class of x outperforms, that is Rx = ∑ y′ 1{Sy∗(x) > Sy′(x)}. We use this result to compute Āk2k values." }, { "heading": "5.1 SIMULATION STUDIES", "text": "Here we provide comparison of the methods under different parametric settings. We simulate both classes and data points as d-dimensional vectors, with d = 5 (and d = 3, 10 shown in Appendix B). Settings vary in the distribution of classes Y and data-points X|Y , and in the spread of data-points around the class centroids. We sample the classes y1, . . . , yk2 from a multivariate normal distribution\nN (0, I) or a multivariate uniform distribution U(− √ 3, √\n3)3. We then sample r = 10 data points for each class, either from a multivariate normal distribution N (y, σ2I) or from a multivariate uniform distribution U(y − √ 3σ2, y + √ 3σ2). The difficulty level is determined by σ2 = 0.1, 0.2. The classification scores Sy(x) are set to be the euclidean distances between x and y (in this case the correct scores are expected to be lower than the incorrect ones, requiring some straightforward modifications to our analysis). For each classification problem, we subsample 50 times k1 classes, for k1 = 100, 500, and predict the accuracy at 2 ≤ k ≤ k2 = 2000 classes.\nThe results of the simulations, summarized in Figure 2, show the distribution of RMSE values for each method and setting over 50 repetitions. The corresponding accuracy curves are shown in Figure 4 in Appendix B. It can be seen that extrapolation to 20 times the original number of classes can be achieved with reasonable errors (the median RMSE is less than 5% in almost all scenarios). Our method performs better or similar to the competing methods, often with substantial gain. For example, for d = 5 the results show that our method outperforms the KDE based method in all cases; it outperforms the regression based method in 7 out of 8 settings for k1 = 100, and for k1 = 500 the regression based method and CleaneX achieve very similar results. As can be seen in Figures 2 and 4, the accuracy curves produced by the KDE method are highly biased and therefore it consistently achieves the worst performance. The results of the regression method, on the other hand, are more variable than our method, especially at k1 = 100. 
Additional results for d = 3, 10 (see Appendix B) are consistent with these results, though all methods predict better for d = 10." }, { "heading": "5.2 EXPERIMENTS", "text": "Here we present the results of three experiments performed on datasets from the fields of computer vision and computational neuroscience. We repeat each experiment 50 times. In each repetition we sub-sample k1 classes and predict the accuracy at 2 ≤ k ≤ k2 classes. Experiment 1 - Object Detection (CIFAR-100) We use the CIFAR dataset (Krizhevsky et al., 2009) that consists of 32×32 color images from 100 classes, each class containing 600 images. Each image is embedded into a 512-dimensional space by a VGG-16 network (Simonyan & Zisserman, 2014), which was pre-trained on the ImageNet dataset (Deng et al., 2009). On the training set, the centroid of each class is calculated and the classification scores for each image in the test set are set to be the euclidean distances of the image embedding from the centroids. The classification accuracy is extrapolated from k1 = 10 to k2 = 100 classes.\nExperiment 2 - Face Recognition (LFW) We use the “Labeled Faces in the Wild” dataset (Huang et al., 2007) and follow the procedure described in Zheng et al. (2018): we restrict the dataset to the 1672 individuals for whom it contains at least 2 face photos and include in our data exactly 2 randomly chosen photos for each person. We use one of them as a label y, and the other as a data\n3The choice of √ 3 results in class covariances that equal to those in the multivariate normal case.\npoint x, consistent with a scenario of single-shot learning. Each photo is embedded into a 128- dimensional space using OpenFace embedding (Amos et al., 2016). The classification scores are set to be the euclidean distance between the embedding of each photo and the photos that are used as labels. Classification accuracy is extrapolated from k1 = 200 to k2 = 1672 classes.\nExperiment 3 - Brain Decoding (fMRI) We analyze a “mind-reading” task described by Kay et al. (2008) in which a vector of v = 1250 neural responses is recorded while a subject watches n = 1750 natural images inside a functional MRI. The goal is to identify the correct image from the neural response vector. Similarly to the decoder in (Naselaris et al., 2011), we use nt = 750 images and their response vectors to fit an embedding g(·) of images into the brain response space, and to estimate the residual covariance Σ. The remaining n − nt = k2 = 1000 examples are used as an evaluation set for g(·). For image y and brain vector x, the score is the negative Mahalanobis distance −‖g(y) − x‖2Σ. For each response vector, the image with the highest score is selected. Classification accuracy is extrapolated from k1 = 200 to k2 = 1000 classes.\nThe experimental results, summarized in Figure 3.D, show that generally CleaneX outperforms the regression and KDE methods: Figure 3.E shows that the median ratio between RMSE values of the competing methods and CleaneX is higher than 1 in all cases except for KDE on the LFW dataset. As can be seen in Figure 3.A and B, the estimated accuracy curves produced by CleaneX have lower variance than those of the regression method (which do not always decrease with k), and on average our method outperforms the regression, achieving lower prediction errors by 18% (CIFAR), 32% (LFW) and 22% (fMRI). 
On the other hand, the KDE method achieves lower variance but high bias in two of three experiments (Figure 3.C), where it is outperformed by CleaneX by 7% (CIFAR) and 73% (fMRI). However, in contrast to the simulation results and its performance on the CIFAR and fMRI datasets, the KDE method achieves exceptionally low bias on the LFW dataset, outperforming our method by 38%. Overall, CleaneX produces more reliable results across the experiments." }, { "heading": "6 DISCUSSION", "text": "In this work we presented the reversed ROC and showed its connection to the accuracy of marginal classifiers. We used this result to develop a new method for accuracy prediction.\nAnalysis of the two previous methods for accuracy extrapolation reveals that each of them uses only part of the relevant information for the task. The KDE method estimates Cx based only on the scores, ignoring the observed accuracies of the classifier. Even when the estimates of Cx are unbiased, the exponentiation in Ex[Ck−1x ] introduces bias. Since the estimation is not calibrated using the observed accuracies, the resulting estimations are often biased. As can be seen in Figure 2, the bias is higher for k1 = 500 than for k1 = 100, indicating that this is aggregated when k1 is small. In addition, we found the method to be sensitive to monotone transformations of the scores, such as taking squared-root or logarithm. In contrast, the non-parametric regression based method uses pre-chosen basis functions to predict the accuracy curves and therefore has limited versatility to capture complicated functional forms. Since it ignores the values of the classification scores, the resulting predicted curves do not necessarily follow the form of Ex[Ck−1x ], introducing high variance estimations. As can be seen in Figure 2, the variance is higher for k1 = 100 compared to k1 = 500, indicating higher variance for small values of k1. This can be expected since the number of basis functions that are used to approximate the discriminability function depends on k1.\nIn CleaneX, we exploit the mechanism of neural networks in a manner that allows us to combine both sources of information with less restriction on the shape of the curve. Since the extrapolation follows the known exponential form using the estimated Cx values it is characterized with low variance, and since the result is calibrated by optimizing an objective that depends on the observed accuracies, our method has low bias, and therefore consistently outperforms the previous methods.\nComparing the results for k1 = 500 between different dimensions (d = 5 in Figure 2 and d = 3, 10 in Figure 6) it can be seen that the bias of the KDE based method and the variance of the regression based method are lower in higher data dimensions. Zheng & Benjamini (2016) show that if both X and Y are high dimensional vectors and their joint distribution factorizes, then the classification scores of Bayesian classifiers (which are marginal) are approximately Gausssian. The KDE and the regression based methods use Gaussian kernel and Gaussian basis functions respectively, and are perhaps more efficient in estimating approximately Gaussian score distributions. Apparently, scores for real data-sets behave closer to low-dimensional data, where the flexibility of CleaneX (partly due to being a non-parametric method) is an advantage.\nA considerable source of noise in the estimation is the selection of the k1 classes. 
The Āk1k curves diverge from Ek[A] as the number of classes increases, and therefore it is hard to recover if the initial k1 classes deviated from the true accuracy. The effect of choosing different initial subsets can be seen by comparing the grey curves and the orange curves that continue them, for example in Figure 3. We leave the design of more informative sampling schemes for future work.\nTo conclude, we found the method we present to be considerably more stable than previous methods. Therefore, it is now possible for researchers to reliably estimate how well the classification system they are developing will perform using representations learned only on a subset of the target class set.\nAlthough our work focuses on marginal classifiers, its importance extends beyond this class of algorithms. First, preliminary results show that our method yields good estimates even when applied to (shallow) non-marginal classifiers such as multi-logistic regression. Moreover, if the representation can adapt as classes are introduced, we expect accuracy to exceed that of a fixed representation. We can therefore use our algorithm to measure the degree of adaptation of the representation. Generalization of our method to non marginal classifiers is a prominent direction for future work." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Etam Benger for many fruitful discussions, and Itamar Faran and Charles Zheng for commenting on the manuscript. YS is supported by the Israeli Council For Higher Education DataScience fellowship and the CIDR center at the Hebrew University of Jerusalem." }, { "heading": "A PROOF OF THEOREM 1", "text": "We begin by proving the following simple proposition: Proposition 1. Given x, let\nG1 =\n{ y ∣∣∣∣ Sy (x) > sup t {PY ′ (Sy′ (x) > t) ≥ u} } (13)\nand G2 = {y | PY ′ (Sy′ (x) > Sy (x)) < u} , (14)\nthen G1 = G2.\nProof. Let Gc1, G c 2 be the complement sets of G1, G2, respectively. Then\nGc1 = {y | Sy(x) ≤ sup t {PY ′ (Sy′ (x) > t) ≥ u}} (15)\nand Gc2 = {y | PY ′ (Sy′ (x) > Sy (x)) ≥ u}. (16)\nSince PY ′ (Sy′ (x) > t) is not increasing in t we have\ny ∈ Gc1 ⇔ Sy(x) ≤ sup t {PY ′ (Sy′ (x) > t) ≥ u}\n⇔ PY ′ (Sy′ (x) > Sy (x)) ≥ u (17) ⇔ y ∈ Gc2\nand therefore G1 = G2.\nWe now prove Theorem 1, which was presented in Section 3:\nProof. We show that 1 − rROC (1− u) = D (u), and the rest follows immediately according to Theorem 3 of Zheng et al. (2018) (see Equation 4):\n1− rROCx (1− u) = 1− 1 [ Sy∗(x) > sup\nt {PY ′ (Sy′(x) > t) ≥ 1− u}\n] (Definition 1)\n= 1− 1 [PY ′ (Sy′(x) > Sy∗(x)) < 1− u] (Proposition 1) = 1 [PY ′ (Sy′(x) ≤ Sy∗(x)) ≤ u] = 1 [PY ′ (Sy′(x) < Sy∗(x)) ≤ u] , (18)\nwhere 1[·] denotes the indicator function, and the last equality follows from the assumption that Sy′(x) 6= Sy(x) almost surely. From here,\n1− rROC (1− u) (19) = EX [1 (PY ′ (Sy′(x) < Sy∗(x)) ≤ u)] = PX (PY ′ (Sy′(x) < Sy∗(x)) ≤ u) = D(u)\nas required." }, { "heading": "B SIMULATIONS - ADDITIONAL RESULTS", "text": "Figures 4 and 5 show the accuracy curves for which we presented summarized results in Figure 2.\nIn Figure 6 we provide additional simulation results for data in dimensions d = 3, 10. We simulate eight d-dimensional datasets of k2 = 2000 classes and extrapolate the accuracy from k1 = 500 classes. As in Section 5.1, the datasets are combinations of two distributions of Y and two of X | Y , in two different levels of classification difficulty. We sample the classes y1, . . . , yk2 from a multivariate normal distribution N (0, I) or a multivariate uniform distribution U(−1, 1). 
We sample r = 10 data points for each class from a multivariate normal distribution N(y, σ2I) or from\na multivariate uniform distribution U(y − σ2, y + σ2). For d = 3 we set σ2 to 0.1 for the easier classification task and 0.2 for the difficult one. For d = 10, the values of σ2 are set to 0.6 and 0.9, respectively. The classification scores Sy(x) are the euclidean distances between x and y.\nIt can be seen that the results for d = 3 are consistent with those for d = 5. That is, our method outperforms the two competing methods, with an even bigger gap, in seven of the eight simulated datasets. For d = 10 the regression based method achieves comparable results to ours. As in the other simulations, our method produces less variance than the regression and less bias than KDE.\nC IMPLEMENTATION DETAILS\nAll the code in this work was implemented in Python 3.6. For the CleaneX algorithm we used TensorFlow 1.14; for the regression based method we used the scipy.optimize package with the “Newton-CG” method; kernel density estimation was implemented using the density function from the stats library in R, imported to Python through the rpy2 package." } ]
2021
PREDICTING CLASSIFICATION ACCURACY WHEN ADDING NEW UNOBSERVED CLASSES
SP:eb5f64c7d1e303394f4650a14806e60dba1afdd3
[ "The paper presented an adaptive inference model for efficient action recognition in videos. The core of the model is the dynamic gating of feature channels that controls the fusion between two frame features, whereby the gating is conditioned on the input video and helps to reduce the computational cost at runtime. The proposed model was evaluated on several video action datasets and compared against a number of existing deep models. The results demonstrated a good efficiency-accuracy trade-off for the proposed model. " ]
Temporal modelling is key to efficient video action recognition. While understanding temporal information can improve recognition accuracy for dynamic actions, removing temporal redundancy and reusing past features can significantly save computation, leading to efficient action recognition. In this paper, we introduce an adaptive temporal fusion network, called AdaFuse, that dynamically fuses channels from current and past feature maps for strong temporal modelling. Specifically, the necessary information from the historical convolution feature maps is fused with the current pruned feature maps with the goal of improving both recognition accuracy and efficiency. In addition, we use a skipping operation to further reduce the computation cost of action recognition. Extensive experiments on Something V1&V2, Jester and Mini-Kinetics show that our approach achieves about 40% computation savings with accuracy comparable to state-of-the-art methods. The project page can be found at https://mengyuest.github.io/AdaFuse/
[ { "affiliations": [], "name": "Yue Meng" }, { "affiliations": [], "name": "Rameswar Panda" }, { "affiliations": [], "name": "Chung-Ching Lin" }, { "affiliations": [], "name": "Prasanna Sattigeri" }, { "affiliations": [], "name": "Leonid Karlinsky" }, { "affiliations": [], "name": "Kate Saenko" }, { "affiliations": [], "name": "Aude Oliva" }, { "affiliations": [], "name": "Rogerio Feris" } ]
[ { "authors": [ "Sadjad Asghari-Esfeden", "Mario Sznaier", "Octavia Camps" ], "title": "Dynamic motion representation for human action recognition", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau", "Doina Precup" ], "title": "Conditional computation in neural networks for faster models", "venue": "arXiv preprint arXiv:1511.06297,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Yinpeng Chen", "Xiyang Dai", "Mengchen Liu", "Dongdong Chen", "Lu Yuan", "Zicheng Liu" ], "title": "Dynamic convolution: Attention over convolution kernels", "venue": "arXiv preprint arXiv:1912.03458,", "year": 2019 }, { "authors": [ "Zhourong Chen", "Yang Li", "Samy Bengio", "Si Si" ], "title": "You look twice: Gaternet for dynamic filter selection in cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jeffrey Donahue", "Lisa Anne Hendricks", "Sergio Guadarrama", "Marcus Rohrbach", "Subhashini Venugopalan", "Kate Saenko", "Trevor Darrell" ], "title": "Long-term recurrent convolutional networks for visual recognition and description", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Hehe Fan", "Zhongwen Xu", "Linchao Zhu", "Chenggang Yan", "Jianjun Ge", "Yi Yang" ], "title": "Watching a small portion could be as good as watching all: Towards efficient video classification", "venue": "In IJCAI International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Quanfu Fan", "Chun-Fu Richard Chen", "Hilde Kuehne", "Marco Pistoia", "David Cox" ], "title": "More is less: Learning efficient video representations by big-little network and depthwise temporal aggregation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition", "venue": "arXiv preprint arXiv:1812.03982,", "year": 2018 }, { "authors": [ "Michael Figurnov", "Maxwell D Collins", "Yukun Zhu", "Li Zhang", "Jonathan Huang", "Dmitry Vetrov", "Ruslan Salakhutdinov" ], "title": "Spatially adaptive computation time for residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ruohan Gao", "Tae-Hyun Oh", "Kristen Grauman", "Lorenzo Torresani" ], "title": "Listen to look: Action recognition by previewing audio", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xitong Gao", "Yiren Zhao", "Łukasz Dudziak", "Robert Mullins", "Cheng-zhong Xu" ], "title": "Dynamic channel pruning: Feature boosting and suppression", "venue": "arXiv preprint arXiv:1810.05331,", "year": 2018 }, { "authors": [ "Peter W Glynn" ], "title": "Likelihood ratio gradient estimation for stochastic systems", "venue": 
"Communications of the ACM,", "year": 1990 }, { "authors": [ "Raghav Goyal", "Samira Ebrahimi Kahou", "Vincent Michalski", "Joanna Materzynska", "Susanne Westphal", "Heuna Kim", "Valentin Haenel", "Ingo Fruend", "Peter Yianilos", "Moritz Mueller-Freitag" ], "title": "The\" something something\" video database for learning and evaluating visual common sense", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Alex Graves" ], "title": "Adaptive computation time for recurrent neural networks", "venue": "arXiv preprint arXiv:1603.08983,", "year": 2016 }, { "authors": [ "Yunhui Guo", "Honghui Shi", "Abhishek Kumar", "Kristen Grauman", "Tajana Rosing", "Rogerio Feris" ], "title": "Spottune: transfer learning through adaptive fine-tuning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kai Han", "Yunhe Wang", "Qi Tian", "Jianyuan Guo", "Chunjing Xu", "Chang Xu" ], "title": "Ghostnet: More features from cheap operations", "venue": null, "year": 1911 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Weizhe Hua", "Yuan Zhou", "Christopher M De Sa", "Zhiru Zhang", "G Edward Suh" ], "title": "Channel gating neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "S. Ji", "W. Xu", "M. Yang", "K. 
Yu" ], "title": "3d convolutional neural networks for human action recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Boyuan Jiang", "MengMeng Wang", "Weihao Gan", "Wei Wu", "Junjie Yan" ], "title": "Stm: Spatiotemporal and motion encoding for action recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Andrej Karpathy", "George Toderici", "Sanketh Shetty", "Thomas Leung", "Rahul Sukthankar", "Li Fei-Fei" ], "title": "Large-scale video classification with convolutional neural networks", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Scsampler: Sampling salient clips from video for efficient action recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Chengxi Li", "Yue Meng", "Stanley H Chan", "Yi-Ting Chen" ], "title": "Learning 3d-aware egocentric spatialtemporal interaction via graph convolutional networks", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2020 }, { "authors": [ "Yan Li", "Bin Ji", "Xintian Shi", "Jianguo Zhang", "Bin Kang", "Limin Wang" ], "title": "Tea: Temporal excitation and aggregation for action recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Ji Lin", "Yongming Rao", "Jiwen Lu", "Jie Zhou" ], "title": "Runtime neural pruning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Tsm: Temporal shift module for efficient video understanding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Zhaoyang Liu", "Donghao Luo", "Yabiao Wang", "Limin Wang", "Ying Tai", "Chengjie Wang", "Jilin Li", "Feiyue Huang", "Tong Lu" ], "title": "Teinet: Towards an efficient architecture for video recognition", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Farzaneh Mahdisoltani", "Guillaume Berger", "Waseem Gharbieh", "David Fleet", "Roland Memisevic" ], "title": "On the effectiveness of task granularity for transfer learning", "venue": "arXiv preprint arXiv:1804.09235,", "year": 2018 }, { "authors": [ "Joanna Materzynska", "Guillaume Berger", "Ingo Bax", "Roland Memisevic" ], "title": "The jester dataset: A large-scale video dataset of human gestures", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Mason McGill", "Pietro Perona" ], "title": "Deciding how to decide: Dynamic routing in artificial neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yue Meng", "Chung-Ching Lin", "Rameswar Panda", "Prasanna Sattigeri", "Leonid Karlinsky", "Aude Oliva", "Kate Saenko", "Rogerio Feris" ], "title": "Ar-net: Adaptive frame resolution for efficient action recognition", "venue": "In European 
Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Mathew Monfort", "Alex Andonian", "Bolei Zhou", "Kandan Ramakrishnan", "Sarah Adel Bargal", "Tom Yan", "Lisa Brown", "Quanfu Fan", "Dan Gutfruend", "Carl Vondrick" ], "title": "Moments in time dataset: one million videos for event understanding", "venue": "arXiv preprint arXiv:1801.03150,", "year": 2018 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Bowen Pan", "Rameswar Panda", "Camilo Luciano Fosco", "Chung-Ching Lin", "Alex J Andonian", "Yue Meng", "Kate Saenko", "Aude Oliva", "Rogerio Feris" ], "title": "Va-red2: Video adaptive redundancy reduction", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Two-stream convolutional networks for action recognition in videos", "venue": "In Neural Information Processing System (NIPS),", "year": 2014 }, { "authors": [ "Swathikiran Sudhakaran", "Sergio Escalera", "Oswald Lanz" ], "title": "Gate-shift networks for video action recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning spatiotemporal features with 3d convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Matt Feiszli" ], "title": "Video classification with channelseparated convolutional networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Andreas Veit", "Serge Belongie" ], "title": "Convolutional networks with adaptive inference graphs", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Thomas Verelst", "Tinne Tuytelaars" ], "title": "Dynamic convolutions: Exploiting spatial sparsity for faster inference", "venue": "arXiv preprint arXiv:1912.03203,", "year": 2019 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Zhe Wang", "Yu Qiao", "Dahua Lin", "Xiaoou Tang", "Luc Van Gool" ], "title": "Temporal segment networks: Towards good practices for deep action recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Videos as space-time region graphs", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi 
Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Wenhao Wu", "Dongliang He", "Xiao Tan", "Shifeng Chen", "Yi Yang", "Shilei Wen" ], "title": "Dynamic inference: A new approach toward efficient video action recognition", "venue": "arXiv preprint arXiv:2002.03342,", "year": 2020 }, { "authors": [ "Zuxuan Wu", "Tushar Nagarajan", "Abhishek Kumar", "Steven Rennie", "Larry S Davis", "Kristen Grauman", "Rogerio Feris" ], "title": "Blockdrop: Dynamic inference paths in residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zuxuan Wu", "Caiming Xiong", "Yu-Gang Jiang", "Larry S Davis" ], "title": "Liteeval: A coarse-to-fine framework for resource efficient video recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zuxuan Wu", "Caiming Xiong", "Chih-Yao Ma", "Richard Socher", "Larry S Davis" ], "title": "Adaframe: Adaptive frame selection for fast video recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Saining Xie", "Chen Sun", "Jonathan Huang", "Zhuowen Tu", "Kevin Murphy" ], "title": "Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V Le", "Jiquan Ngiam" ], "title": "Condconv: Conditionally parameterized convolutions for efficient inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Serena Yeung", "Olga Russakovsky", "Greg Mori", "Li Fei-Fei" ], "title": "End-to-end learning of action detection from frame glimpses in videos", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Bolei Zhou", "Alex Andonian", "Aude Oliva", "Antonio Torralba" ], "title": "Temporal relational reasoning in videos", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mohammadreza Zolfaghari", "Kamaljeet Singh", "Thomas Brox" ], "title": "Eco: Efficient convolutional network for online video understanding", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark datasets, efficiency remains a great challenge for many resource constrained applications due to the heavy computational burden of deep Convolutional Neural Network (CNN) models.\nMotivated by the need of efficiency, extensive studies have been recently conducted that focus on either designing new lightweight architectures (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). However, most of the existing approaches do not consider the fact that there exists redundancy in CNN features which can significantly save computation leading to more efficient action recognition. In particular, orthogonal to the design of compact models, the computational cost of a CNN model also has much to do with the redundancy of CNN features (Han et al., 2019). Furthermore, the amount of redundancy depends on the dynamics and type of events in the video: A set of still frames for a simple action (e.g. “Sleeping”) will have a higher redundancy comparing to a fast-changed action with rich interaction and deformation (e.g. “Pulling two ends of something so that it gets stretched”). Thus, based on the input we could compute just a subset of features, while the rest of the channels can reuse history feature maps or even be skipped without losing any accuracy, resulting in large computational savings compared to computing all the features at a given CNN layer. Based on this intuition, we present a new perspective for efficient action recognition by adaptively deciding what channels to compute or reuse, on a per instance basis, for recognizing complex actions.\nIn this paper, we propose AdaFuse, an adaptive temporal fusion network that learns a decision policy to dynamically fuse channels from current and history feature maps for efficient action recognition. Specifically, our approach reuses history features when necessary (i.e., dynamically decides which channels to keep, reuse or skip per layer and per instance) with the goal of improving both recognition\n∗Email: mengyuethu@gmail.com. This work was done while Yue was an AI Resident at IBM Research.\naccuracy and efficiency. As these decisions are discrete and non-differentiable, we rely on a Gumbel Softmax sampling approach (Jang et al., 2016) to learn the policy jointly with the network parameters through standard back-propagation, without resorting to complex reinforcement learning as in (Wu et al., 2019b; Fan et al., 2018; Yeung et al., 2016). We design the loss to achieve both competitive performance and resource efficiency required for action recognition. 
Extensive experiments on multiple benchmarks show that AdaFuse significantly reduces the computation without accuracy loss.\nThe main contributions of our work are as follows:\n• We propose a novel approach that automatically determines which channels to keep, reuse or skip per layer and per target instance for efficient action recognition.\n• Our approach is model-agnostic, which allows this to be served as a plugin operation for a wide range of 2D CNN-based action recognition architectures.\n• The overall policy distribution can be seen as an indicator for the dataset characteristic, and the block-level distribution can bring potential guidance for future architecture designs.\n• We conduct extensive experiments on four benchmark datasets (Something-Something V1 (Goyal et al., 2017), Something-Something V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and Mini-Kinetics (Kay et al., 2017)) to demonstrate the superiority of our proposed approach over state-of-the-art methods." }, { "heading": "2 RELATED WORK", "text": "Action Recognition. Much progress has been made in developing a variety of ways to recognize complex actions, by either applying 2D-CNNs (Karpathy et al., 2014; Wang et al., 2016; Fan et al., 2019) or 3D-CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018). Most successful architectures are usually based on the two-stream model (Simonyan & Zisserman, 2014), processing RGB frames and optical-flow in two separate CNNs with a late fusion in the upper layers (Karpathy et al., 2014) or further combining with other modalities (Asghari-Esfeden et al., 2020; Li et al., 2020a). Another popular approach for CNN-based action recognition is the use of 2D-CNN to extract frame-level features and then model the temporal causality using different aggregation modules such as temporal averaging in TSN (Wang et al., 2016), a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019), non-local neural networks (Wang et al., 2018a), temporal enhancement and interaction module in TEINet (Liu et al., 2020), and LSTMs (Donahue et al., 2015). Many variants of 3D-CNNs such as C3D (Tran et al., 2015; Ji et al., 2013), I3D (Carreira & Zisserman, 2017) and ResNet3D (Hara et al., 2018), that use 3D convolutions to model space and time jointly, have also been introduced for action recognition. SlowFast (Feichtenhofer et al., 2018) employs two pathways to capture temporal information by processing a video at both slow and fast frame rates. Recently, STM (Jiang et al., 2019) proposes new channel-wise convolutional blocks to jointly capture spatio-temporal and motion information in consecutive frames. TEA (Li et al., 2020b) introduces a motion excitation module including multiple temporal aggregation modules to capture both short- and long-range temporal evolution in videos. Gate-Shift networks (Sudhakaran et al., 2020) use spatial gating for spatial-temporal decomposition of 3D kernels in Inception-based architectures.\nWhile extensive studies have been conducted in the last few years, limited efforts have been made towards efficient action recognition (Wu et al., 2019b;a; Gao et al., 2020). 
Specifically, methods for efficient recognition focus on either designing new lightweight architectures that aim to reduce the complexity by decomposing the 3D convolution into 2D spatial convolution and 1D temporal convolution (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). Our approach is most related to the latter which focuses on conditional computation and is agnostic to the network architecture used for recognizing actions. However, instead of focusing on data sampling, our approach dynamically fuses channels from current and history feature maps to reduce the computation. Furthermore, as feature maps can be redundant or noisy, we use a skipping operation to make it more efficient for action recognition.\nConditional Computation. Many conditional computation methods have been recently proposed with the goal of improving computational efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018b; Graves, 2016; Meng et al., 2020; Pan et al., 2021). Several works have been\nproposed that add decision branches to different layers of CNNs to learn whether to exit the network for faster inference (Figurnov et al., 2017; McGill & Perona, 2017; Wu et al., 2020). BlockDrop (Wu et al., 2018) effectively reduces the inference time by learning to dynamically select which layers to execute per sample during inference. SpotTune (Guo et al., 2019) learns to adaptively route information through finetuned or pre-trained layers. Conditionally parameterized convolutions (Yang et al., 2019) or dynamic convolutions (Chen et al., 2019a; Verelst & Tuytelaars, 2019) have also been proposed to learn specialized convolutional kernels for each example to improve efficiency in image recognition. Our method is also related to recent works on dynamic channel pruning (Gao et al., 2018; Lin et al., 2017) that generate decisions to skip the computation for a subset of output channels. While GaterNet (Chen et al., 2019b) proposes a separate gating network to learn channel-wise binary gates for the backbone network, Channel gating network (Hua et al., 2019) identifies regions in the features that contribute less to the classification result, and skips the computation on a subset of the input channels for these ineffective regions. In contrast to the prior works that focus on only dropping unimportant channels, our proposed approach also reuses history features when necessary to make the network capable for strong temporal modelling." }, { "heading": "3 METHODOLOGY", "text": "In this section, we first show the general approach using 2D-CNN for action recognition. Then we present the concept of adaptive temporal fusion and analyze its computation cost. Finally, we describe the end-to-end optimization and network specifications.\nUsing 2D-CNN for Action Recognition. One popular solution is to first generate frame-wise predictions and then utilize a consensus operation to get the final prediction (Wang et al., 2016). The network takes uniformly sampled T frames {X1...XT } and predicts the un-normalized class score:\nP (X1, ..., XT ; Θ) = G (F(X1; Θ),F(X2; Θ), ...,F(XT ; Θ)) (1)\nwhere F(·; Θ) is the 2D-CNN with learnable parameters Θ. The consensus function G reduces the frame-level predictions to a final prediction. One common practice for G is the averaging operation. 
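As a point of reference, this baseline is only a few lines of code; the sketch below is a generic PyTorch-style illustration in which backbone and the tensor layout are assumptions, not a specific released implementation.

import torch

def average_consensus_prediction(backbone, frames):
    # frames: tensor of shape (B, T, C, H, W) with T uniformly sampled frames per video
    # backbone: any 2D network mapping a batch of frames to un-normalized class scores
    B, T, C, H, W = frames.shape
    per_frame = backbone(frames.reshape(B * T, C, H, W))  # frame-level predictions, shape (B*T, num_classes)
    return per_frame.reshape(B, T, -1).mean(dim=1)        # G: temporal averaging as in Equation 1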
The major drawback is that this cannot capture the order of the frames. The network performs poorly on datasets that contain temporal-related labels (e.g. “turning left”, “moving forward”, etc). LSTM (Hochreiter & Schmidhuber, 1997) can also be used as G to get the final prediction (Donahue et al., 2015), but it cannot capture low-level features across the frames, as mentioned in Lin et al. (2019). A few works have been recently proposed to model temporal causality using a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019). Different from these methods, in this work, we hypothesis that an inputdependent fusion of framewise features will be beneficial for temporal understanding and efficiency, as the amount of temporal information depends on the dynamics and the type of events in the video. Hence we propose adaptive temporal fusion for action recognition.\nAdaptive Temporal Fusion. Consider a single 2D convolutional layer: yt = φ(Wx ∗xt+bx), where xt ∈ Rc×h×w denotes the input feature map at time step t with c channels and spatial dimension h × w, and yt ∈ Rc ′×h′×w′ is the output feature map. Wx ∈ Rc ′×k×k×c denotes the convolution filters (with kernel size k× k) and bx ∈ Rc ′\nis the bias. We use “∗” for convolution operation. φ(·) is the combination of batchnorm and non-linear functions (e.g. ReLU (Nair & Hinton, 2010)).\nWe introduce a policy network consisting of two fully-connected layers and a ReLU function designed to adaptively select channels for keeping, reusing or skipping. As shown in Figure 1, at time t, we first generate feature vectors vt−1, vt ∈ Rc from history feature map xt−1 and current feature map xt via global average pooling. Then the policy network predicts:\npt = g(vt−1, vt; Θg) (2)\nwhere pt ∈ {0, 1, 2}c ′\nis a channel-wise policy (choosing “keep”, “reuse” or “skip”) to generate the output feature map: if pit = 0, the i-th channel of output feature map will be computed via the normal convolution; if pit = 1, it will reuse the i-th channel of the feature map yt−1 which has been already computed at time t − 1; otherwise, the i-th channel will be just padded with zeros. Formally, this output feature map can be written as ỹt = f(yt−1, yt, pt) where the i-th channel is:\nỹit = 1 [ pit = 0 ] · yit + 1 [ pit = 1 ] · yit−1 (3)\nhere 1 [·] is the indicator function. In Figure 1, the policy network instructs the convolution layer to only compute the first and fourth channels, reuses the second channel of the history feature and skips the third channel. Features from varied time steps are adaptively fused along the channel dimension.\nAdaptive temporal fusion enables the 2D convolution to capture temporal information: its temporal perceptive field grows linearly to the depth of the layers, as more features from different time steps are fused when going deeper in the network. Our novel design can be seen as a general methodology for many state-of-the-art 2D-CNN approaches: if we discard \"skip\" and use a predefined fixed policy, then it becomes the online temporal fusion in Lin et al. (2019). If the policy only chooses from \"skip\" and \"keep\", then it becomes dynamic pruning methods (Gao et al., 2018; Hua et al., 2019). Our design is a generalized approach taking both temporal modelling and efficiency into consideration.\nComplexity Analysis. 
To illustrate the efficiency of our framework, we compute the floating point operations (FLOPS), which is a hardware-independent metric and widely used in the field of efficient action recognition1(Wu et al., 2019b; Gao et al., 2020; Meng et al., 2020; Fan et al., 2019). To compute saving from layers before and after the policy network, we add another convolution after ỹt with kernel Wy ∈ Rc ′′×k′×k′×c′ and bias by ∈ Rc ′′\n. The total FLOPS for each convolution will be:{ mx = c\n′ · h′ · w′ · (k · k · c+ 1) my = c ′′ · h′′ · w′′ · (k′ · k′ · c′ + 1) (4)\nWhen the policy is applied, only those output channels used in time t or going to be reused in time t+ 1 need to be computed in the first convolution layer, and only the channels not skipped in time t count for input feature maps for the second convolution layer. Hence the overall FLOPS is:\nM = T−1∑ τ=0\n[ 1\nc′ c′−1∑ i=0\nKeep at τ or resue at τ + 1︷ ︸︸ ︷ 1 [ piτ · (piτ+1 − 1) = 0\n] ·mx︸ ︷︷ ︸\nFLOPS from the first conv at time τ\n+ (1− 1 c′ c′−1∑ i=0 Skip at τ︷ ︸︸ ︷ 1(piτ = 2)) ·my︸ ︷︷ ︸\nFLOPS from the second conv at time τ\n] (5)\nThus when the policy network skips more channels or reuses channels that are already computed in the previous time step, the FLOPS for those two convolution layers can be reduced proportionally.\nLoss functions. We take the average of framewise predictions as the video prediction and minimize:\nL = ∑\n(x,y)∼Dtrain\n[ −y log(P (x)) + λ ·\nB−1∑ i=0 Mi\n] (6)\n1Latency is another important measure for efficiency, which can be reduced via CUDA optimization for sparse convolution (Verelst & Tuytelaars, 2019). We leave it for future research.\nThe first term is the cross entropy between one-hot encoded ground truth labels y and predictions P (x). The second term is the FLOPS measure for all the B temporal fusion blocks in the network. In this way, our network is learned to achieve both accuracy and efficiency at a trade-off controlled by λ.\nDiscrete policies for “keep”, “reuse” or “skip” shown in Eq. 3 and Eq. 5 make L non-differentiable hence hard to optimize. One common practice is to use a score function estimator (e.g. REINFORCE (Glynn, 1990; Williams, 1992)) to avoid backpropagating through categorical samplings, but the high variance of the estimator makes the training slow to converge (Wu et al., 2019a; Jang et al., 2016). As an alternative, we use Gumbel-Softmax Estimator to enable efficient end-to-end optimization.\nTraining using Gumbel Softmax Estimator. Specifically, the policy network first generates a logit q ∈ R3 for each channel in the output feature map and then we use Softmax to derive a normalized categorical distribution: π = {ri|ri = exp(qi)exp (q0)+exp (q1)+exp (q2)}. With the Gumbel-Max trick, discrete samples from the distribution π can be drawn as (Jang et al., 2016): r̂ = argmaxi(log ri+Gi), where Gi = − log(− logUi) is a standard Gumbel distribution with i.i.d. Ui sampled from a uniform distribution Unif(0, 1). Since the argmax operator is not differentiable, the Gumbel Softmax distribution is used as a continuous approximation. In forward pass we represent the discrete sample r̂ as a one-hot encoded vector and in back-propagation we relax it to a real-valued vector R = {R0, R1, R2} via Softmax as follows:\nRi = exp ((log ri +Gi)/τ)∑2 j=1 exp ((log rj +Gj)/τ)\n(7)\nwhere τ is a temperature factor controlling the “smooothness” of the distribution: lim τ→∞ R converges to a uniform distribution and lim\nτ→0 R becomes a one-hot vector. 
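In code, the sampling of Equation 7 together with the channel-wise fusion of Equation 3 reduces to a few lines. The sketch below relies on the straight-through Gumbel-Softmax available in PyTorch; policy_net and the tensor names are illustrative assumptions rather than the released implementation.

import torch
import torch.nn.functional as F

def adaptive_temporal_fusion(policy_net, x_prev, x_curr, y_prev, y_curr, tau=0.67):
    # x_prev, x_curr: input feature maps at t-1 and t, shape (B, C, H, W)
    # y_prev, y_curr: convolution outputs at t-1 and t, shape (B, C_out, H2, W2)
    v = torch.cat([x_prev.mean(dim=(2, 3)), x_curr.mean(dim=(2, 3))], dim=1)  # global average pooling
    logits = policy_net(v).reshape(v.shape[0], y_curr.shape[1], 3)            # a 3-way logit per output channel
    p = F.gumbel_softmax(logits, tau=tau, hard=True)                          # one-hot forward, soft backward
    keep = p[:, :, 0].unsqueeze(-1).unsqueeze(-1)
    reuse = p[:, :, 1].unsqueeze(-1).unsqueeze(-1)
    return keep * y_curr + reuse * y_prev                                     # remaining channels are skipped (zeros)

At inference time the hard one-hot decisions make it possible to compute only the kept channels and to reuse or zero out the rest, which is where the FLOPS savings accounted for in Equation 5 come from.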
We set τ = 0.67 during the training.\nNetwork Architectures and Notations. Our adaptive temporal fusion module can be easily plugged into any existing 2D-CNN models. Specifically, we focus on BN-Inception (Ioffe & Szegedy, 2015), ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019). For Bn-Inception, we add a policy network between every two consecutive Inception modules. For ResNet/EfficientNet, we insert the policy network between the first and the second convolution layers in each “residual block\"/“inverted residual block\". We denote our model as AdaFuseMethodBackbone, where the “Backbone” is chosen from {“R18”(ResNet18), “R50”(ResNet50), “Inc”(BN-Inception), “Eff”(EfficientNet)}, and the “Method” can be {“TSN”, “TSM”, “TSM+Last”}. More details can be found in the following section." }, { "heading": "4 EXPERIMENTS", "text": "We first show AdaFuse can significantly improve the accuracy and efficiency of ResNet18, BNInception and EfficientNet, outperforming other baselines by a large margin on Something-V1. Then on all datasets, AdaFuse with ResNet18 / ResNet50 can consistently outperform corresponding base models. We further propose two instantiations using AdaFuse on TSM (Lin et al., 2019) to compare with state-of-the-art approaches on Something V1 & V2: AdaFuseTSMR50 can save over 40% FLOPS at a comparable classification score under same amount of computation budget, AdaFuseTSM+LastR50 outperforms state-of-the-art methods in accuracy. Finally, we perform comprehensive ablation studies and quantitative analysis to verify the effectiveness of our adaptive temporal fusion.\nDatasets. We evaluate AdaFuse on Something-Something V1 (Goyal et al., 2017) & V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and a subset of Kinetics (Kay et al., 2017). Something V1 (98k videos) & V2 (194k videos) are two large-scale datasets sharing 174 human action labels (e.g. pretend to pick something up). Jester (Materzynska et al., 2019) has 27 annotated classes for hand gestures, with 119k / 15k videos in training / validation set. Mini-Kinetics (assembled by Meng et al. (2020)) is a subset of full Kinetics dataset (Kay et al., 2017) containing 121k videos for training and 10k videos for testing across 200 action classes.\nImplementation details. To make a fair comparison, we carefully follow the training procedure in Lin et al. (2019). We uniformly sample T = 8 frames from each video. The input dimension for the network is 224× 224. Random scaling and cropping are used as data augmentation during training (and we further adopt random flipping for Mini-Kinetics). Center cropping is used during inference. All our networks are using ImageNet pretrained weights. We follow a step-wise learning rate scheduler with the initial learning rate as 0.002 and decay by 0.1 at epochs 20 & 40. To train\nour adaptive temporal fusion approach, we set the efficiency term λ = 0.1. We train all the models for 50 epochs with a batch-size of 64, where each experiment takes 12∼ 24 hours on 4 Tesla V100 GPUs. We report the number of parameters used in each method, and measure the averaged FLOPS and Top1/Top5 accuracy for all the samples from each testing dataset.\nAdaptive Temporal Fusion improves 2D CNN Performance. On Something V1 dataset, we show AdaFuse ’s improvement upon 2D CNNs by comparing with several baselines as follows:\n• TSN (Wang et al., 2016): Simply average frame-level predictions as the video-level prediction. 
• CGNet (Hua et al., 2019): A dynamic pruning method to reduce computation cost for CNNs. • Threshold: We keep a fixed portion of channels base on their activation L1 norms and skip the\nchannels in smaller norms. It serves as a baseline for efficient recognition. • RANDOM: We use temporal fusion with a randomly sampled policy (instead of using learned\npolicy distribution). The distribution is chosen to match the FLOPS of adaptive methods. • LSTM: Update per-frame predictions by hidden states in LSTM and averages all predictions as\nthe video-level prediction.\nWe implement all the methods using publicly available code and apply adaptive temporal fusion in TSN using ResNet18, BN-Inception and EfficientNet backbones, denoting them as AdaFuseTSNR18 AdaFuseTSNInc and AdaFuseTSNEff-x respectively (“x” stands for different scales of the EfficientNet backbones). As shown in Table 1, AdaFuseTSNR18 uses the similar FLOPS as those efficient methods (“CGNet” and “Threshold”) but has a great improvement in classification accuracy Specifically, AdaFuseTSNR18 and AdaFuseTSNInc outperform corresponding TSN models by more than 20% in Top-1 accuracy, while using only 74% of FLOPS. Interestingly, comparing to TSN, even temporal fusion with a random policy can achieve an absolute gain of 12.7% in accuracy, which shows that temporal fusion can greatly improve the action recognition performance of 2D CNNs. Additionally equipped with the adaptive policy, AdaFuseTSNR18 can get 9.4% extra improvement in classification. LSTM is the most competitive baseline in terms of accuracy, while AdaFuseTSNR18 has an absolute gain of 8.5% in accuracy and uses only 70% of FLOPS. When using a more efficient architecture as shown in Table.2, our approach can still reduce 10% of the FLOPS while improving the accuracy by a large margin. To further validate AdaFuse being model-agnostic and robust, we conduct extensive experiments using ResNet18 and\nResNet50 backbones on Something V1 & V2, Jester and Mini-Kinetics. As shown in Table 3, AdaFuseTSNR18 and AdaFuseTSNR50 consistently outperform their baseline TSN and LSTM models with a 35% saving in FLOPS on average. Our approach harvests large gains in accuracy and efficiency on temporal-rich datasets like Something V1 & V2 and Jester. When comes to Mini-Kinetics, AdaFuse can still achieve a better accuracy with 20%∼33% computation reduction. Comparison with Adaptive Inference Method. We compare our approach with AR-Net (Meng et al., 2020), which adaptively chooses frame resolutions for efficient inference. As shown in Table 4, on Something V1, Jester and Mini-Kinetics, we achieve a better accuracy-efficiency trade-off than AR-Net while using 40% less parameters. On temporal-rich dataset like Something-V1, our approach attains the largest improvement, which shows AdaFuseTSNR50 ’s capability for strong temporal modelling.\nComparison with State-of-the-Art Methods. We apply adaptive temporal fusion with different backbones (ResNet50 (He et al., 2016), BN-Inception (Ioffe & Szegedy, 2015)) and designs (TSN (Wang et al., 2016), TSM (Lin et al., 2019)) and compare with State-of-the-Art methods on Something V1 & V2. As shown in Table 5, using BN-Inception as backbone, AdaFuseTSNInc is 4% better than “TRNMultiscale” (Zhou et al., 2018) in accuracy, using only 75% of the FLOPS. 
AdaFuseTSNR50 with ResNet50 can even outperform 3D CNN method “I3D” (Carreira & Zisserman, 2017) and hybrid 2D/3D CNN method “ECO” (Zolfaghari et al., 2018) with much less FLOPS.\nAs for adaptive temporal fusion on “TSM” (Lin et al., 2019), AdaFuseTSMR50 achieves more than 40% savings in computation but at 1% loss in accuracy (Table 5). We believe this is because TSM uses temporal shift operation, which can be seen as a variant of temporal fusion. Too much temporal fusion could cause performance degradation due to a worse spatial modelling capability. As a remedy, we just adopt adaptive temporal fusion in the last block in TSM to capture high-level semantics (more intuition can be found later in our visualization experiments) and denote it as AdaFuseTSM+LastR50 . On Something V1 & V2 datasets, AdaFuseTSM+LastR50 outperforms TSM and all other state-of-the-art methods in accuracy with a 5% saving in FLOPS comparing to TSM. From our experiments, we observe that the performance of adaptive temporal fusion depends on the position of shift modules in TSM and optimizing the position of such modules through additional regularization could help us not only to achieve better accuracy but also to lower the number of parameters. We leave this as an interesting future work.\nWe depict the accuracy, computation cost and model sizes in Figure 2. All the results are computed from Something V1 validation set. The graph shows GFLOPS / accuracy on x / y-axis and the diameter of each data point is proportional to the number of model parameters. AdaFuse (blue points) owns the best trade-off for accuracy and efficiency at a comparable model size to other 2D CNN approaches. Once again it shows AdaFuse is an effective and efficient design for action recognition.\nPolicy Visualizations. Figure 3 shows overall policy (“Skip”, “Reuse” and “Keep”) differences across all datasets. We focus on the quotient of “Reuse / Keep” as it indicates the mixture ratio for feature fusion. The quotients on Something V1&V2 and Jester datasets are very high (0.694, 0.741 and 0.574 respectively) when comparing to Mini-Kinetics (0.232). This is probably because the first three datasets contain more temporal relationship than Kinetics. Moreover, Jester has the highest percentage in skipping which indicates many actions in this dataset can be correctly recognized with few channels: Training on Jester is more biased towards optimizing for efficiency as the accuracy loss is very low. Distinctive policy patterns show different characteristics of datasets, which conveys a potential of our proposed approach to be served as a “dataset inspector”.\nFigure 4 shows a more fine-grained policy distribution on Something V2. We plot the policy usage in each residual block inside the ResNet50 architecture (shown in light red/orange/blue) and use 3rd-order polynomials to estimate the trend of each policy (shown in black dash curves). To further study the time-sensitiveness of the policies, we calculate the number of channels where the policies stay unchanged across the frames in one video (shown in dark red/orange/blue). We find earlier layers tend to skip more and reuse/keep less, and vice versa. The first several convolution blocks normally capture low-level feature maps in large spatial sizes, so the “information density” on channel dimension should be less which results in more redundancy across channels. 
Later blocks often capture high-level semantics and the feature maps are smaller in spatial dimensions, so the “semantic density” could be higher and less channels will be skipped. In addition, low-level features change faster across the frames (shades, lighting intensity) whereas high-level semantics change slowly across the frames (e.g. \"kicking soccer\"), that’s why more features can be reused in later layers to avoid computing the same semantic again. As for the time-sensitiveness, earlier layers tend to be less sensitive and vice versa. We find that “reuse” is the most time-sensitive policy, as “Reuse (Instance)” ratio is very low, which again shows the functioning of adaptive temporal fusion. We believe these findings will provide insights to future designs of effective temporal fusions.\nHow does the adaptive policy affect the performance? We consider AdaFuseTSNR18 on Something V1 dataset and break down by using “skip”, ‘reuse” and adaptive (Ada.) policy learning. As shown\nTable 6: Effect of different policies (using AdaFuseTSNR18 ) on Something V1 dataset.\nMethod Skip Reuse Ada. FLOPS Top1\nTSN 8 8 8 14.6G 14.8 Ada. Skip 4 8 4 6.6G 9.5\nAda. Reuse 8 4 4 13.8G 36.3 Random 4 4 8 10.4G 27.5 AdaFuseTSNR18 4 4 4 10.3G 36.9\nTable 7: Effect of hidden sizes and efficient weights on the performance of AdaFuseTSM+LastR50 on SthV2.\n#Hidden Units λ #Params FLOPS Top1 Skip Reuse\n1024 0.050 39.1M 31.53G 59.71 13% 14% 1024 0.075 39.1M 31.29G 59.75 15% 13% 1024 0.100 39.1M 31.04G 59.40 18% 12% 2048 0.100 54.3M 30.97G 59.96 21% 10% 4096 0.100 84.7M 31.04G 60.00 25% 8%\nin Table 6, “Ada. Skip” saves 55% of FLOPS comparing to TSN but at a great degradation in accuracy. This shows naively skipping channels won’t give a better classification performance. “Ada. Reuse” approach brings 21.5% absolute gain in accuracy, which shows the importance of temporal fusion. However, it fails to save much FLOPS due to the absence of skipping operation. Combining “Keep” with both “Skip” and “Reuse” via just a random policy is already achieving a better trade-off comparing to TSN, and by using adaptive learning approach, AdaFuseTSNR18 reaches the highest accuracy with the second-best efficiency. In summary, the “Skip” operation contributes the most to the computation efficiency, the “Reuse” operation boosts the classification accuracy, while the adaptive policy adds the chemistry to the whole system and achieves the best performance.\nHow to achieve a better performance? Here we investigate different settings to improve the performance of AdaFuseTSM+LastR50 on Something V2 dataset. As shown in Table 7, increasing λ will obtain a better efficiency but might result in accuracy degradation. Enlarging the number of hidden units for the policy network can get a better overall performance: as we increase the size from 1024 to 4096, the accuracy keeps increasing. When the policy network grows larger, it learns to skip more to reduce computations and to reuse history features wisely for recognition. But notice that the model size grows almost linearly to hidden layer sizes, which leads to a considerable overhead to the FLOPS computation. As a compromise, we only choose λ = 0.75 and hidden size 1024 for AdaFuseTSM+LastR50 . We leave the design for a more advanced and delicate policy module for future works.\nRuntime/Hardware. Sparse convolutional kernels are often less efficient on current hardwares, e.g., GPUs. 
However, we strongly believe that it is important to explore models for efficient video action recognition, as they might guide the direction of new hardware development in the years to come. Furthermore, we also expect wall-clock speed-ups at inference time via efficient CUDA implementations, which we anticipate will be developed." }, { "heading": "5 CONCLUSIONS", "text": "We have shown the effectiveness of adaptive temporal fusion for efficient video recognition. Comprehensive experiments on four challenging and diverse datasets present a broad spectrum of accuracy-efficiency models. Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of video recognition architectures.\nAcknowledgements. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract number D17PC00341. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. This work is also partly supported by the MIT-IBM Watson AI Lab.\nDisclaimer. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government." } ]
2021
ADAFUSE: ADAPTIVE TEMPORAL FUSION NETWORK FOR EFFICIENT ACTION RECOGNITION
SP:8b0cee077c1bcdf9a546698dc041654ca6a222ed
[ "This paper is basically unreadable. The sentence structure / grammar is strange, and if that was the only issue it could be overlooked. The paper also does not describe or explain the motivation and interpretation of anything, but instead just lists equations. For example, eta is the parameter that projects a spherical geodesic onto an the ellipsoid one, and an ellipsoid geodesic prevents updates of the core-set towards the boundary regions where the characteristics of the distribution cannot be captured. However, what are these characteristics, and how can they motivate how to choose eta?" ]
We present geometric Bayesian active learning by disagreements (GBALD), a framework that performs BALD through its geometric interpretation while interacting with a deep learning model. GBALD has two main components: initial acquisitions based on core-set construction, and model uncertainty estimation seeded with those initial acquisitions. Our key innovation is to construct the core-set on an ellipsoid, rather than the typical sphere, preventing its updates from drifting towards the boundary regions of the distribution. The main improvements over BALD are twofold: relieving the sensitivity to an uninformative prior and reducing the redundant information in model uncertainty estimation. To guarantee these improvements, our generalization analysis proves that, compared to the typical Bayesian spherical interpretation, geodesic search with an ellipsoid derives a tighter lower error bound and achieves a higher probability of obtaining a nearly zero error. Experiments on acquisitions under several scenarios demonstrate that, while suffering only slight perturbations from noisy and repeated samples, GBALD achieves significant accuracy improvements over BALD, BatchBALD, and other baselines.
[]
[ { "authors": [ "Jordan T Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mihai Bādoiu", "Sariel Har-Peled", "Piotr Indyk" ], "title": "Approximate clustering via core-sets", "venue": "In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing,", "year": 2002 }, { "authors": [ "Shai Ben-David", "Ulrike Von Luxburg" ], "title": "Relating clustering stability to properties of cluster boundaries", "venue": "In 21st Annual Conference on Learning Theory (COLT", "year": 2008 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural network", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Trevor Campbell", "Tamara Broderick" ], "title": "Bayesian coreset construction via greedy iterative geodesic ascent", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Trevor Campbell", "Tamara Broderick" ], "title": "Automated scalable bayesian inference via hilbert coresets", "venue": "The Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "David Cohn", "Les Atlas", "Richard Ladner" ], "title": "Improving generalization with active learning", "venue": "Machine learning,", "year": 1994 }, { "authors": [ "Arnaud Doucet", "Simon Godsill", "Christophe Andrieu" ], "title": "On sequential monte carlo sampling methods for bayesian filtering", "venue": "Statistics and computing,", "year": 2000 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Mingfei Gao", "Zizhao Zhang", "Guo Yu", "Sercan O Arik", "Larry S Davis", "Tomas Pfister" ], "title": "Consistencybased semi-supervised active learning: Towards minimizing labeling cost", "venue": null, "year": 2020 }, { "authors": [ "Daniel Golovin", "Andreas Krause", "Debajyoti Ray" ], "title": "Near-optimal bayesian active learning with noisy observations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Sariel Har-Peled", "Soham Mazumdar" ], "title": "On coresets for k-means and k-median clustering", "venue": "In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing,", "year": 2004 }, { "authors": [ "Neil Houlsby", "Ferenc Huszár", "Zoubin Ghahramani", "Máté Lengyel" ], "title": "Bayesian active learning 
for classification and preference learning", "venue": "arXiv preprint arXiv:1112.5745,", "year": 2011 }, { "authors": [ "Khaled Jedoui", "Ranjay Krishna", "Michael Bernstein", "Li Fei-Fei" ], "title": "Deep bayesian active learning for multiple correct outputs", "venue": "arXiv preprint arXiv:1912.01119,", "year": 2019 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Aaron Lou", "Isay Katsman", "Qingxuan Jiang", "Serge Belongie", "Ser-Nam Lim", "Christopher De Sa" ], "title": "Differentiating through the frechet mean", "venue": null, "year": 2020 }, { "authors": [ "Feiping Nie", "Hua Wang", "Heng Huang", "Chris Ding" ], "title": "Early active learning via robust representation and structured sparsity", "venue": "In Twenty-Third International Joint Conference on Artificial Intelligence,", "year": 2013 }, { "authors": [ "Michael Osborne", "Roman Garnett", "Zoubin Ghahramani", "David K Duvenaud", "Stephen J Roberts", "Carl E Rasmussen" ], "title": "Active learning of model evidence using bayesian quadrature", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Hae-Sang Park", "Chi-Hyuck Jun" ], "title": "A simple and fast algorithm for k-medoids clustering", "venue": "Expert systems with applications,", "year": 2009 }, { "authors": [ "Robert Pinsler", "Jonathan Gordon", "Eric Nalisnick", "José Miguel Hernández-Lobato" ], "title": "Bayesian batch active learning as sparse subset approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Harold J Price", "Allison R Manson" ], "title": "Uninformative priors for bayes", "venue": "theorem. In AIP Conference Proceedings,", "year": 2002 }, { "authors": [ "Nicholas Roy", "Andrew McCallum" ], "title": "Toward optimal active learning through monte carlo estimation of error reduction", "venue": "ICML, Williamstown, pp", "year": 2001 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rodney W Strachan", "Herman K Van Dijk" ], "title": "Bayesian model selection with an uninformative prior", "venue": "Oxford Bulletin of Economics and Statistics,", "year": 2003 }, { "authors": [ "Min Tang", "Xiaoqiang Luo", "Salim Roukos" ], "title": "Active learning for statistical natural language parsing", "venue": "In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Stephen A Vavasis" ], "title": "Approximation algorithms for indefinite quadratic programming", "venue": "Mathematical Programming,", "year": 1992 }, { "authors": [ "Zengmao Wang", "Bo Du", "Weiping Tu", "Lefei Zhang", "Dacheng Tao" ], "title": "Incorporating distribution matching into uncertainty for multiple kernel active learning", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Lack of training labels restricts the performance of deep neural networks (DNNs), though prices of GPU resources were falling fast. Recently, leveraging the abundance of unlabeled data has become a potential solution to relieve this bottleneck whereby expert knowledge is involved to annotate those unlabeled data. In such setting, the deep learning community introduced active learning (AL) (Gal et al., 2017) that, maximizing the model uncertainty (Ashukha et al., 2019; Lakshminarayanan et al., 2017) to acquire a set of highly informative or representative unlabeled data, and solicit experts’ annotations. During this AL process, the learning model tries to achieve a desired accuracy using minimal data labeling. Recent shift of model uncertainty in many fields, such as Bayesian neural networks (Blundell et al., 2015), Monte-Carlo (MC) dropout (Gal & Ghahramani, 2016), and Bayesian core-set construction (Sener & Savarese, 2018), shows that, new scenarios arise from deep Bayesian AL (Pinsler et al., 2019; Kirsch et al., 2019).\nBayesian AL (Golovin et al., 2010; Jedoui et al., 2019) presents an expressive probabilistic interpretation on model uncertainty (Gal & Ghahramani, 2016). Theoretically, for a simple regression model such as linear, logistic, and probit, AL can derive their closed-forms on updating one sparse subset that maximally reduces the uncertainty of the posteriors over the regression parameters (Pinsler et al., 2019). However, for a DNN model, optimizing massive training parameters is not easily tractable. It is thus that Bayesian approximation provides alternatives including importance sampling (Doucet et al., 2000) and Frank-Wolfe optimization (Vavasis, 1992). With importance sampling, a typical approach is to express the information gain in terms of the predictive entropy over the model, and it is called Bayesian active learning by disagreements (BALD) (Houlsby et al., 2011).\nBALD has two interpretations: model uncertainty estimation and core-set construction. To estimate the model uncertainty, a greedy strategy is applied to select those data that maximize the parameter disagreements between the current training model and its subsequent updates as (Gal et al., 2017). However, naively interacting with BALD using uninformative prior (Strachan & Van Dijk, 2003)(Price & Manson, 2002), which can be created to reflect a balance among outcomes when no information is available, leads to unstable biased acquisitions (Gao et al., 2020), e.g. insufficient prior labels. Moreover, the similarity or consistency of those acquisitions to the previous acquired samples, brings redundant information to the model and decelerates its training.\nCore-set construction (Campbell & Broderick, 2018) avoids the greedy interaction to the model by capturing characteristics of the data distributions. By modeling the complete data posterior over the\ndistributions of parameters, BALD can be deemed as a core-set construction process on a sphere (Kirsch et al., 2019), which seamlessly solicits a compact subset to approximate the input data distribution, and efficiently mitigates the sensitivity to uninformative prior and redundant information.\nFrom the view of geometry, updates of core-set construction is usually optimized with sphere geodesic as (Nie et al., 2013; Wang et al., 2019). Once the core-set is obtained, deep AL immediately seeks annotations from experts and starts the training. 
However, data points located at the boundary regions of the distribution, usually win uniform distribution, cannot be highly-representative candidates for the core-set. Therefore, constructing the coreset on a sphere may not be the optimal choice for deep AL.\nThis paper presents a novel AL framework, namely Geometric BALD (GBALD), over the geometric interpretation of BALD that, interpreting BALD with core-set construction on an ellipsoid, initializes an effective representation to drive a DNN model. The goal is to seek for significant\naccuracy improvements against an uninformative prior and redundant information. Figure 1 describes this two-stage framework. In the first stage, geometric core-set construction on an ellipsoid initializes effective acquisitions to start a DNN model regardless of the uninformative prior. Taking the core-set as the input features, the next stage ranks the batch acquisitions of model uncertainty according to their geometric representativeness, and then solicits some highly-representative examples from the batch. With the representation constraints, the ranked acquisitions reduce the probability of sampling nearby samples of the previous acquisitions, preventing redundant acquisitions. To guarantee the improvement, our generalization analysis shows that, the lower bound of generalization errors of AL with the ellipsoid is proven to be tighter than that of AL with the sphere. Achieving a nearly zero generalization error by AL with ellipsoid is also proven to have higher probability. Contributions of this paper can be summarized from Geometric, Algorithmic, and Theoretical perspectives.\n• Geometrically, our key innovation is to construct the core-set on an ellipsoid, not typical sphere, preventing its updates towards the boundary regions of the distributions.\n• In term of algorithm design, in our work, from a Bayesian perspective, we propose a two-stage framework that sequentially introduces the core-set representation and model uncertainty, strengthening their performance “independently”. Moreover, different to the typical BALD optimizations, we present geometric solvers to construct core-set and estimate model uncertainty, which result in a different view for Bayesian active learning.\n• Theoretically, to guarantee those improvements, our generalization analysis proves that, compared to typical Bayesian spherical interpretation, geodesic search with ellipsoid can derive a tighter lower error bound and achieve higher probability to obtain a nearly zero error. See Appendix B.\nThe rest of this paper is organized as follows. In Section 2, we first review the related work. Secondly, we elaborate BALD and GBALD in Sections 3 and 4, respectively. Experimental results are presented in Section 5. Finally, we conclude this paper in Section 6." }, { "heading": "2 RELATED WORK", "text": "Model uncertainty. In deep learning community, AL (Cohn et al., 1994) was introduced to improve the training of a DNN model by annotating unlabeled data, where the data which maximize the model uncertainty (Lakshminarayanan et al., 2017) are the primary acquisitions. For example, in ensemble deep learning (Ashukha et al., 2019), out-of-domain uncertainty estimation selects those data which do not follow the same distribution as the input training data; in-domain uncertainty draws the data from the original input distribution, producing reliable probability estimates. 
Gal &\nGhahramani (2016) use MC dropout to estimate predictive uncertainty for approximating a Bayesian convolutional neural network. Lakshminarayanan et al. (2017) estimate predictive uncertainty using a proper scoring rule as the training criteria to fed a DNN.\nBayesian AL. Taking a Bayesian perspective (Golovin et al., 2010), AL can be deemed as minimizing the Bayesian posterior risk with multiple label acquisitions over the input unlabeled data. A potential informative approach is to reduce the uncertainty about the parameters using Shannon’s entropy (Tang et al., 2002). This can be interpreted as seeking the acquisitions for which the Bayesian parameters under the posterior disagree about the outcome the most, so this acquisition algorithm is referred to as Bayesian active learning by disagreement (BALD) (Houlsby et al., 2011).\nDeep AL. Recently, deep Bayesian AL attracted our eyes. Gal et al. (2017) proposed to cooperate BALD with a DNN to improve the training. The unlabeled data which maximize the model uncertainty provide positive feedback. However, it needs to repeatedly update the model until the acquisition budget is exhausted. To improve the acquisition efficiency, batch sampling with BALD is applied as (Kirsch et al., 2019; Pinsler et al., 2019). In BatchBALD, Kirsch et al. (2019) developed a tractable approximation to the mutual information of one batch of unlabeled data and current model parameters. However, those uncertainty evaluations of Bayesian AL whether in single or batch acquisitions all take greedy strategies, which lead to computationally infeasible, or excursive parameter estimations. For deep Bayesian AL, being short of interactions to DNN can not maximally drive their model performance as (Pinsler et al., 2019; Sener & Savarese, 2018), etc." }, { "heading": "3 BALD", "text": "BALD has two different interpretations: model uncertainty estimation and core-set construction. We simply introduce them in this section." }, { "heading": "3.1 MODEL UNCERTAINTY ESTIMATION", "text": "We consider a discriminative model p(y∣x, θ) parameterized by θ that maps x ∈ X into an output distribution over a set of y ∈ Y . Given an initial labeled (training) set D0 ∈ X × Y , the Bayesian inference over this parameterized model is to estimate the posterior p(θ∣D0), i.e. estimate θ by repeatedly updating D0. AL adopts this setting from a Bayesian view.\nWith AL, the learner can choose unlabeled data from Du = {xi}Nj=1 ∈ X , to observe the outputs of the current model, maximizing the uncertainty of the model parameters. Houlsby et al. (2011) proposed a greedy strategy termed BALD to update D0 by estimating a desired data x∗ that maximizes the decrease in expected posterior entropy:\nx∗ = arg max x∈Du H[θ∣D0] −Ey∼p(y∣x,D0)[H[θ∣x, y,D0]], (1)\nwhere the labeled and unlabeled sets are updated by D0 = D0 ∪ {x∗, y∗},Du = Du/x∗, and y∗ denotes the output of x∗. In deep AL, y∗ can be annotated as a label from experts and θ yields a DNN model." }, { "heading": "3.2 CORE-SET CONSTRUCTION", "text": "Let p(θ∣D0) be updated by its log posterior logp(θ∣D0, x∗), y∗ ∈ {yi}Ni=1, assume the outputs are conditional independent of the inputs, i.e. 
p(y∗∣x∗,D0) = ∫θ p(y\n∗∣x∗, θ)p(θ∣D0)dθ, then we have the complete data log posterior following (Pinsler et al., 2019):\nEy∗[logp(θ∣D0, x∗, y∗)] = Ey∗[logp(θ∣D0) + logp(y∗∣x∗, θ) − logp(y∗∣x∗,D0)]\n= logp(θ∣D0) +Ey∗[logp(y∗∣x∗, θ) +H[y∗∣x∗,D0]]\n= logp(θ∣D0) + N\n∑ i=1\n⎛ ⎝ Eyi[logp(yi∣xi, θ) +H[yi∣xi,D0]] ⎞ ⎠ .\n(2)\nThe key idea of core-set construction is to approximate the log posterior of Eq. (2) by a subset of D′u ⊆ Du such that: EYu[logp(θ∣D0,Du,Yu)] ≈ EY ′u[logp(θ∣D0,D ′ u,Y ′ u)], where Yu and Y ′ u denote the predictive labels of Du and D′u respectively by the Bayesian discriminative model, that is, p(Yu∣Du,D0) = ∫θ p(Yu∣Du, θ)p(θ∣D0)dθ, and p(Y ′ u∣D ′ u,D0) = ∫θ p(Y ′ u∣D ′ u, θ)p(θ∣D0)dθ. Here D′u can be indicated by a core-set (Pinsler et al., 2019) that highly represents Du. Optimization tricks such as Frank-Wolfe optimization (Vavasis, 1992) then can be adopted to solve this problem.\nMotivations. Eqs. (1) and (2) provide the Bayesian rules of BALD over model uncertainty and core-set construction respectively, which further attract the attention of the deep learning community. However, the two interpretations of BALD are limited by: 1) redundant information and 2) uninformative prior, where one major reason which causes these two issues is the poor initialization on the prior, i.e. p(D0∣θ). For example, unbalanced label initialization on D0 usually leads to an uninformative prior, which further conducts the acquisitions of AL to select those unlabeled data from one or some fixed classes; highly-biased results with (Gao et al., 2020) redundant information are inevitable. Therefore, these two limitations affect each other.\n4 GBALD\nGBALD consists of two components: initial acquisitions based on core-set construction and model uncertainty estimation with those initial acquisitions." }, { "heading": "4.1 GEOMETRIC INTERPRETATION OF CORE-SET", "text": "Modeling the complete data posterior over the parameter distribution can relieve the above two limitations of BALD. Typically, finding the acquisitions of AL is equivalent to approximating a core-set centered with spherical embeddings (Sener & Savarese, 2018). Let wi be the sampling weight of\nxi, ∥wi∥0 ≤ N , the core-set construction is to optimize:\nmin w XXXXXXXXXXX N ∑ i=1 Eyi[logp(yi∣xi, θ) +H[yi∣xi,D0]]\n´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ L\n− N\n∑ i=1 wiEyi[logp(yi∣xi, θ) +H[yi∣xi,D0]] ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ L(w)\nXXXXXXXXXXX\n2\n, (3)\nwhere L and L(w) denote the full and expected (weighted) log-likelihoods, respectively (Campbell & Broderick, 2018; 2019). Specifically, ∑Ni=1H[yi∣xi,D0] = −∑yi p(yi∣xi,D0)log(p(yi∣xi,D0), where p(yi∣xi,D0) = ∫θ p(yi∣xi, θ)p(θ∣D0)dθ. Note ∥ ⋅ ∥ denotes the ` 2 norm.\nThe approximation of Eq. (3) implicitly requires that the complete data log posterior of Eq. (2) w.r.t. L must be close to an expected posterior w.r.t. L(w) such that approximating a sparse subset for the original inputs by sphere geodesic search is feasible (see Figure 2(a)). Generally, solving this optimization is intractable due to cardinality constraint (Pinsler et al., 2019). 
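To ground the entropy terms that enter Eqs. (1), (2) and (3), the following is a minimal sketch under an MC-dropout approximation with n_mc stochastic forward passes (function and variable names are ours): the predictive entropy H[y|x, D0] is computed from the averaged softmax outputs, and the disagreement score of Eq. (1), being a mutual information, equals the predictive entropy minus the expected per-sample entropy.

import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_scores(model, x, n_mc=20, eps=1e-12):
    # Keep dropout stochastic at acquisition time (MC dropout); for models with
    # batch norm one would enable only the dropout layers instead of full train().
    model.train()
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_mc)])   # (T, B, K)
    mean_p = probs.mean(dim=0)                                    # approximates p(y|x, D0)
    pred_entropy = -(mean_p * (mean_p + eps).log()).sum(-1)       # H[y|x, D0]
    exp_entropy = -(probs * (probs + eps).log()).sum(-1).mean(0)  # E_theta[H[y|x, theta]]
    return pred_entropy, pred_entropy - exp_entropy               # entropy, BALD disagreement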
Campbell & Broderick (2019) proposed to relax the constraint in Frank–Wolfe optimization, in which mapping X is usually performed in a Hilbert space (HS) with a bounded inner product operation. In this solution, the sphere embedded in the HS replaces the cardinality constraint with a polynomial constraint. However, the initialization on D0 affects the iterative approximation to Du at the beginning of the geodesic search. Moreover, the posterior of p(θ∣D0) is uninformative, if the initialized D0 is empty or not correct. Therefore, the typical Bayesian core-set construction of BALD cannot ideally fit an uninformative prior. The another geometric interpretation of core-set construction, such as k-centers (Sener & Savarese, 2018), is not restricted to this setting. We thus follow the construction of k-centers to find the core-set.\nk-centers. Sener & Savarese (2018) proposed a core-set representation approach for active deep learning based on k-centers. This approach can be adopted in core-set construction of BALD without the help of the discriminative model. Therefore, the uninformative prior has no further influence on the core-set. Typically, the k-centers approach uses a greedy strategy to search the data x̃ whose nearest distance to elements of D0 is the maximal:\nx̃ = arg max xi∈Du min ci∈D0 ∥xi − ci∥, (4)\nthen D0 is updated by D0 ∪ {x̃, ỹ}, Du is updated by Du/x̃, where ỹ denotes the output of x̃. This max-min operation usually performs k times to construct the centers.\nFrom the view of geometry, k-centers can be deemed as the core-set construction via spherical geodesic search (Bādoiu et al., 2002; Har-Peled & Mazumdar, 2004). Specifically, the max-min\noptimization guidesD0 to be updated into one data, which draws the longest line segment from xi,∀i across the sphere center. The iterative update on x̃ is then along its unique diameter through the sphere center. However, this greedy optimization has large probability that yields the core-set to fall into boundary regions of the sphere, which cannot capture the characteristics of the distribution." }, { "heading": "4.2 INITIAL ACQUISITIONS BASED ON CORE-SET CONSTRUCTION", "text": "We present a novel greedy search which rescales the geodesic of a sphere into an ellipsoid following Eq. (4), in which the iterative update on the geodesic search is rescaled (see Figure 2(b)). We follow the importance sampling strategy to begin the search.\nInitial prior on geometry. Initializing p(D0∣θ) is performed with a group of internal spheres centered with Dj ,∀j, subjected to Dj ∈ D0, in which the geodesic between D0 and the unlabeled data is over those spheres. Since D0 is known, specification of θ plays the key role for initializing p(D0∣θ). Given a radius R0 for any observed internal sphere, p(yi∣xi, θ) is firstly defined by\np(yi∣xi, θ) = ⎧⎪⎪⎪⎪⎪ ⎨ ⎪⎪⎪⎪⎪⎩\n1, ∃j, ∥xi −Dj∥ ≤ R0,\nmax ⎧⎪⎪ ⎨ ⎪⎪⎩ R0 ∥xi −Dj∥ ⎫⎪⎪ ⎬ ⎪⎪⎭ , ∀j, ∥xi −Di∥ > R0, (5)\nthereby θ yields the parameter R0. When the data is enclosed with a ball, the probability of Eq. (5) is 1. The data near the ball, is given a probability of max{ R0∥xi−Dj∥} constrained by min∥xi −Dj∥,∀j, i.e. the probability is assigned by the nearest ball to xi, which is centered with Dj . From Eq. 
(3), the information entropy of yi ∼ {y1, y2, ..., yN} over xi ∼ {x1, x2, ..., xN} can be expressed as the integral regarding p(yi∣xi, θ):\nN\n∑ i=1\nH(yi∣xi,D0) = − N\n∑ i=1 ∫ θ p(yi∣xi, θ)p(θ∣D0)dθlog(∫ θ p(yi∣xi, θ)p(θ∣D0))dθ, (6)\nwhich can be approximated by −∑Ni=1 p(yi∣xi, θ)log(p(yi∣xi, θ)) following the details of Eq. (3). In short, this indicates an approximation to the entropy over the entire outputs on Du that assumes the prior p(D0∣θ) w.r.t. p(yi∣xi, θ) is already known from Eq. (5).\nMax-min optimization. Recalling the max-min optimization trick of k-centers in the core-set construction of (Sener & Savarese, 2018), the minimizer of Eq. (3) can be divided into two parts: minx∗ L and maxw L(w), where D0 is updated by acquiring x∗. However, updates of D0 decide the minimizer of L with regard to the internal spheres centered with Di,∀i. Therefore, minimizing L should be constrained by an unbiased full likelihood over X to alleviate the potential biases from the initialization of D0. Let L0 denote the unbiased full likelihood over X that particularly stipulates D0 as the k-means centers written as U of X which jointly draw the input distribution. We define L0 = ∣∑ N i=1Eyi[logp(yi∣xi, θ) +H[yi∣xi,U]]∣ to regulate L, that is\nmin x∗\n∥L0 −L∥ 2, s.t. D0 = D0 ∪ {x ∗, y∗},Du = Du/x ∗. (7)\nThe other sub optimizer is maxw L(w). We present a greedy strategy following Eq. (1):\nmax 1≤i≤N min wi\nN ∑ i=1 wiEyi[logp(yi∣xi, θ) +H[yi∣xi,D0]]\n= N\n∑ i=1 wilogp(yi∣xi, θ) −\nN ∑ i=1 wip(yi∣xi, θ)logp(yi∣xi, θ),\n(8)\nwhich can be further written as: ∑Ni=1wilogp(yi∣xi, θ)(1 − logp(yi∣xi, θ)). Let wi = 1,∀i for unbiased estimation of the likelihood L(w), Eq. (8) can be simplified as\nmax xi∈Du min Dj∈D0 logp(yi∣xi, θ), (9) where p(yi∣xi, θ) follows Eq. (5). Combining Eqs. (7) and (9), the optimization of Eq. (3) is then transformed as\nx∗ = arg max xj∈Du min Dj∈D0 ⎧⎪⎪ ⎨ ⎪⎪⎩ ∥L0 −L∥ 2 + logp(yj ∣xj , θ) ⎫⎪⎪ ⎬ ⎪⎪⎭ , (10)\nwhere D0 is updated by acquiring x∗, i.e. D0 = D0 ∪ {x∗, y∗}.\nGeodesic line. For a metric geometry M , a geodesic line is a curve γ which projects its interval I to M : I → M , maintaining everywhere locally a distance minimizer (Lou et al., 2020). Given a constant ν > 0 such that for any a, b ∈ I there exists a geodesic distance\nd(γ(a), γ(b)) ∶= ∫ b a √ gγ(t)(γ′(t), γ′(t))dt, where γ′(t) denotes the geodesic curvature, and g denotes the metric tensor over M . Here, we define γ′(t) = 0, then gγ(t)(0,0) = 1 such that d(γ(a), γ(b)) can be generalized as a segment of a straight line: d(γ(a), γ(b)) = ∥a − b∥.\nEllipsoid geodesic distance. For any observation points p, q ∈M , if the spherical geodesic distance is defined as d(γ(p), γ(q)) = ∥p − q∥. The affine projection obtains its ellipsoid interpretation: d(γ(p), γ(q)) = ∥η(p − q)∥, where η denotes the affine factor subjected to 0 < η < 1.\nOptimizing with ellipsoid geodesic search. The max-min optimization of Eq. (10) is performed on an ellipsoid geometry to prevent the updates of core-set towards the boundary regions, where ellipsoid geodesic line scales the original update on the sphere. Assume xi is the previous acquisition and x∗ is the next desired acquisition, the ellipsoid geodesic rescales the position of x∗ as x∗e = xi + η(x\n∗ −xi). Then, we update this position of x∗e to its nearest neighbor xj in the unlabeled data pool, i.e. arg minxj∈Du ∥xj − x ∗ e∥, also can be written as\narg min xj∈Du\n∥xj − [xi + η(x ∗ − xi)]∥. 
(11)\nTo study the advantage of ellipsoid geodesic search, Appendix B presents our generalization analysis." }, { "heading": "4.3 MODEL UNCERTAINTY ESTIMATION WITH CORE-SET", "text": "GBALD starts the model uncertainty estimation with those initial core-set acquisitions, in which it introduces a ranking scheme to derive both informative and representative acquisitions.\nSingle acquisition. We follow (Gal et al., 2017) and use MC dropout to perform Bayesian inference on the model of the neural network. It then leads to ranking the informative acquisitions with batch sequences is with high efficiency. We first present the ranking criterion by rewriting Eq. (1) as batch returns:\n{x∗1, x ∗ 2, ..., x ∗ b} = arg max\n{x̂1,x̂2,...,x̂b}⊆Du H[θ∣D0] −Eŷ1∶b∼p(ŷ1∶b∣x̂1∶b,D0)[H[θ∣x̂1∶b, ŷ1∶b,D0]], (12)\nwhere x̂1∶b = {x̂1, x̂2, ..., x̂b}, ŷ1∶b = {ŷ1, ŷ2, ..., ŷb}, ŷi denotes the output of x̂i. The informative acquisition x∗t is then selected from the ranked batch acquisitions x̂1∶b due to the highest representation for the unlabeled data:\nx∗t = arg max x∗i ∈{x∗1,x∗2,...,x∗b} ⎧⎪⎪ ⎨ ⎪⎪⎩ max Dj∈D0 p(yi∣x ∗ i , θ) ∶= R0 ∥x∗i −Dj∥ ⎫⎪⎪ ⎬ ⎪⎪⎭ , (13)\nwhere t denotes the index of the final acquisition, subjected to 1 ≤ t ≤ b. This also adopts the max-min optimization of k-centers in Eq. (4), i.e. x∗t = arg maxx∗i ∈{x∗1,x∗2,...,x∗b}minDj∈D0 ∥x ∗ i −Dj∥.\nBatch acquisitions. The greedy strategy of Eq. (13) can be written as a batch acquisitions by setting its output as a batch set, i.e.\n{x∗t1 , ..., x ∗ tb′\n} = arg max x∗t1 ∶tb′ ⊆{x∗1,x∗2,...,x∗b} p(y∗t1∶tb′ ∣x ∗ t1∶tb′ , θ), (14)\nwhere x∗t1∶tb′ = {x ∗ t1 , ..., x ∗ tb′ }, y∗t1∶tb′ = {y ∗ t1 , ..., y ∗ tb′ }, y∗ti denotes the output of x ∗ ti , 1 ≤ i ≤ b ′, and 1 ≤ b′ ≤ b. This setting can be used to accelerate the acquisitions of AL in a large dataset. Appendix A presents the two-stage GBALD algorithm." }, { "heading": "5 EXPERIMENTS", "text": "In experiments, we start by showing how BALD degenerates its performance with uninformative prior and redundant information, and show that how our proposed GBALD relieves theses limitations.\nOur experiments discuss three questions: 1) is GBALD using core-set of Eq. (11) competitive with uninformative prior? 2) can GBALD using ranking of Eq. (14) improve informative acquisitions of model uncertainty? and 3) can GBALD outperform state-of-the-art acquisition approaches? Following the experiment settings of (Gal et al., 2017; Kirsch et al., 2019), we use MC dropout to implement the Bayesian approximation of DNNs. Three benchmark datasets are selected: MNIST, SVHN and CIFAR10. More experiments are presented in Appendix C." }, { "heading": "5.1 UNINFORMATIVE PRIORS", "text": "As discussed in the introduction, BALD is sensitive to an uninformative prior, i.e. p(D0∣θ). We thus initialize D0 from a fixed class of the tested dataset to observe its acquisition performance. Figure 3 presents the prediction accuracies of BALD with an acquisition budget of 130 over the training\nset of MNIST, in which we randomly select 20 samples from the digit ‘0’ and ‘1’ to initialize D0, respectively. The classification model of AL follows a convolutional neural network with one block of [convolution, dropout, max-pooling, relu], with 32, 3x3 convolution filters, 5x5 max pooling, and 0.5 dropout rate. 
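A minimal PyTorch sketch of this one-block classifier, under our reading of the description (padding, pooling stride and the final linear layer are assumptions for 28x28 MNIST inputs); dropout is left inside the model so that MC-dropout sampling in the acquisition loop stays stochastic.

import torch.nn as nn

class OneBlockCNN(nn.Module):
    # Sketch of the Section 5.1 model: one block of [convolution, dropout,
    # max-pooling, relu] with 32 3x3 filters, 5x5 max pooling and 0.5 dropout.
    def __init__(self, in_channels=1, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.Dropout2d(p=0.5),
            nn.MaxPool2d(kernel_size=5),
            nn.ReLU(inplace=True),
        )
        # For 28x28 inputs, 5x5 pooling leaves a 5x5 map: 32 * 5 * 5 = 800 features.
        self.classifier = nn.Linear(32 * 5 * 5, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))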
In the AL loops, we use 2,000 MC dropout samples from the unlabeled data pool to fit the training of the network following (Kirsch et al., 2019).\nThe results show BALD can slowly accelerate the training model due to biased initial acquisitions, which cannot uniformly cover all the label categories. Moreover, the uninformative prior guides BALD to unstable acquisition results. As the shown in Figure 3(b), BALD with Bathsize = 10 shows better performance than that of Batchsize =1; while BALD in Figure 3(a) keeps sta-\nble performance. This is because the initial labeled data does not cover all classes and BALD with Batchsize =1 may further be misled to select those samples from one or a few fixed classes at the first acquisitions. However, Batchsize >1 may result in a random acquisition process that possibly covers more diverse labels at its first acquisitions. Another excursive result of BALD is that the increasing batch size cannot degenerate its acquisition performance in Figure 3(b). Specifically, Batchsize =10 ≻ Batchsize =1 ≻ Batchsize =20,40 ≻ Batchsize =30, where ‘≻’ denotes ‘better’ performance; Batchsize = 20 achieves similar results of Batchsize =40. This undermines the acquisition policy of BALD: its performance would degenerate when the batch size increases, and sometimes worse than random sampling. This also is the reason why we utilize a core-set to start BALD in our framework.\nDifferent to BALD, the core-set construction of GBALD using Eq. (11) provides a complete label matching against all classes. Therefore, it outperforms BALD with the batch sizes of 1, 10, 20, 30, and 40. As the shown learning curves in Figure 3, GBALD with a batch size of 1 and sequence size of 10 (i.e. breakpoints of acquired size are 10, 20, ..., 130) achieves significantly higher accuracies than BALD using different batch sizes since BALD misguides the network updating using poor prior." }, { "heading": "5.2 IMPROVED INFORMATIVE ACQUISITIONS", "text": "Repeated or similar acquisitions delay the acceleration of the model training of BALD. Following the experiment settings of Section 5.1, we compare the best performance of BALD with a batch size of 1 and GBALD with different batch size parameters. Following Eq. (14), we set b = {3,5,7} and b′=1, respectively, that means, we output the highest representative data from a batch of highly-informative\nacquisitions. Different settings on b and b′ are used to observe the parameter perturbations of GBALD.\nTraining by the same parameterized CNN model as Section 5.1, Figure 4 presents the acquisition performance of parameterized BALD and GBALD. As the learning curves shown, BALD cannot accelerate the model as fast as GBALD due to the repeated information over the acquisitions. For GBALD, it ranks the batch acquisitions of the highly-informative samples and selects the highest representative ones. By employing this special ranking strategy, GBALD can reduce the probability of sampling nearby data of the previous acquisitions. It is thus GBALD significantly outperforms BALD, even if we progressively increase the ranked batch size b." }, { "heading": "5.3 ACTIVE ACQUISITIONS", "text": "GBALD using Eqs. (11) and (14) has been demonstrated to achieve successful improvements over BALD. We thus combine these two components into a uniform framework. Figure 5 reports the AL accuracies using different acquisition algorithms on the three image datasets. 
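For concreteness, the ranked batch acquisition of Eqs. (13) and (14) used in these experiments can be sketched as follows; this is a hedged illustration with our own naming, following the k-centers max-min reading given after Eq. (13): shortlist the b most informative pool points by their disagreement scores, then keep the b' points whose nearest acquired neighbour is farthest away.

import numpy as np

def ranked_batch_acquisition(scores, X_pool, X_labeled, b=300, b_prime=100):
    # `scores` holds per-point informativeness, e.g. the BALD disagreement of Eq. (12).
    shortlist = np.argsort(scores)[-b:]                  # b most informative candidates
    # Distance from each candidate to its nearest already-acquired point.
    diffs = X_pool[shortlist, None, :] - X_labeled[None, :, :]
    nearest = np.linalg.norm(diffs, axis=-1).min(axis=1)
    # Keep the b' most representative candidates (largest max-min distance).
    return shortlist[np.argsort(nearest)[-b_prime:]]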
The selected baselines follow (Gal et al., 2017) including 1) maximizing the variation ratios (Var), 2) BALD, 3) maximizing the entropy (Entropy), 4) k-medoids, and one greedy 5) k-centers approach (Sener & Savarese, 2018). The network architecture is a three-layer MLP with three blocks of [convolution, dropout, max-pooling, relu], with 32, 64, and 128 3x3 convolution filters, 5x5 max pooling, and 0.5 dropout rate. In the AL loops, the MC dropout still randomly samples 2,000 data from the unlabeled data pool to approximate the training of the network architecture following (Kirsch et al., 2019). The initial labeled data of MNIST, SVHN and CIFAR-10 are 20, 1000, 1000 random samples from their full training sets. Details of baselines are presented in Appendix C.1.\nThe batch size of the compared baselines is 100, where GBALD ranks 300 acquisitions to select 100 data for the training, i.e. b = 300, b′ = 100. As the learning curves shown in Figure 5, 1) k-centers algorithm performs more poorly than other compared baselines because the representative optimization with sphere geodesic usually falls into the selection of boundary data; 2) Var, Entropy and BALD algorithms cannot accelerate the network model rapidly due to highly-skewed acquisitions towards few fixed classes at its first acquisitions (start states); 3) k-medoids approach does not interact with the neural network model while directly imports the clustering centers into its training set; 4) The accuracies of the acquisitions of GBALD achieve better performance at the beginning than the Var, Entropy and BALD approaches which fed the training set of the network model via acquisition loops. In short, the network is improved faster after drawing the distribution characteristics of the input dataset with sufficient labels. GBALD thus consists of the representative and informative acquisitions in its uniform framework. Advantages of these two acquisition paradigms are integrated and present higher accuracies than any single paradigm.\nTable 1 reports the mean±std values of the test accuracies of the breakpoints of the learning curves in Figure 5, where breakpoints of MNIST are {0,10,20,30, ...,600}, breakpoints of SVHN are {0,100,200, ...,10000}, and breakpoints of CIFAR10 are {0,100,200, ...,20000}. We then calculate their average accuracies and std values over these acquisition\npoints. As the shown in Table 1, all std values around 0.1, yielding a norm value. Usually, an average\naccuracy on a same acquisition size with different random seeds of DNNs, will result a small std value. Our mean accuracy spans across the whole learning curve.\nThe results show that 1) GBALD achieves the highest average accuracies; k-medoids is ranked the second amongst the compared baselines; 2) k-centers has ranked the worst accuracies amongst these approaches; 3) the others, which iteratively update the training model are ranked at the middle including BALD, Var and Entropy algorithms. Table 2 shows the acquisition numbers of achieving the accuracies of 70%, 80%, and 90% on the three datasets. The three numbers of each cell are the acquisition numbers over MNIST, SVHN, and CIFAR10, respectively. The results show that GBALD can use fewer acquisitions to achieve a desired accuracy than the other algorithms." }, { "heading": "5.4 GBALD VS. BATCHBALD", "text": "Batch active deep learning was recently proposed to accelerate the training of a DNN model. 
In recent literature, BatchBALD (Kirsch et al., 2019) extended BALD with a batch acquisition setting to converge the network using fewer iteration loops. Different to BALD, BathBALD introduces diversity to avoid repeated or similar output acquisitions.\nHow to set the batch size of the acquisitions attracted our eyes before starting the experiments. It involves with whether our experiment settings are fair and reasonable. From a theoretical view, the larger the batch size, the worse the batch acquisitions will be. Experiments results of (Kirsch et al., 2019) also demonstrated this phenomenon. We thus set different batch sizes to run BatchBALD. Figure 6 reports the comparison results of BALD, BatchBALD, and our proposed GBALD following the experiment settings of Section 5.3. As the shown in this figure,\nBatchBALD degenerates the test accuracies if we progressively increase the bath sizes, where BatchBALD with a batch size of 10 keeps similar learning curves as BALD. This means BatchBALD actually can accelerate BALD with a similar acquisition result if the batch size is not large. That means, if the batch size is between 2 to 10, BatchBALD will degenerate into BALD and maintains highly-consistent results.\nAlso because of this, BatchBALD has the same sensitivity to the uninformative prior. For our GBALD, the core-set solicits sufficient data which properly matches the input distribution (w.r.t. acquired data set size ≤ 100), providing powerful input features to start the DNN model (w.r.t.\nacquired data set size > 100). Table 3 then presents the mean±std of breakpoints ({0,10,20, ...,600}) of active acquisitions on MNIST with batch settings. The statistical results show GBALD has much higher mean accuracy than BatchBALD with different bath sizes. Therefore, evaluating the model uncertainty of DNN using highly-representative core-set samples can improve the performance of the neural network.\n6 CONCLUSION Table 3: Mean±std of BALD, BatchBALD, and GBALD of active acquisitions on MNIST with batch settings.\nAlgorithms Batch sizes Accuracies BALD 1 0.8654±0.0354\nBatchBALD 10 0.8645±0.0365 BatchBALD 40 0.8273±0.0545 BatchBALD 100 0.7902±0.0951\nGBALD 3 0.9106±0.1296 We have introduced a novel Bayesian AL framework, GBALD, from the geometric perspective, which seamlessly incorporates representative (core-set) and informative (model uncertainty estimation) acquisitions to accelerate the training of a DNN model. Our GBALD yields significant improvements over BALD, flexibly resolving the limitations of an uninformative prior and redundant information by optimizing the acquisition on an ellipsoid. Generalization analysis has asserted that training with ellipsoid has tighter lower error bound and higher probability to achieve a zero error than training with a typical sphere. Compared to the representative or informative acquisition algorithms, experiments show that our GBALD spends much fewer acquisitions to accelerate the accuracy. Moreover, it keeps slighter accuracy reduction than other baselines against repeated and noisy acquisitions." }, { "heading": "A. TWO-STAGE GBALD ALGORITHM", "text": "The two-stage GBALD algorithm is described as follows: 1) construct core-set on ellipsoid (Lines 3 to 13), and 2) estimate model uncertainty with a deep learning model (Lines 14 to 21). Core-set construction is derived from the max-min optimization of Eq. (10), then updated with ellipsoid geodesic w.r.t. Eq. (11), where θ yields a geometric probability model w.r.t. Eq. (5). 
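As a hedged sketch of stage 1 (Lines 3 to 13 of Algorithm 1), the greedy max-min candidate of Eq. (4) is rescaled along the geodesic by the factor η of Eq. (11) and snapped back to the nearest pool point; the likelihood and entropy regularizer of Eq. (10) is omitted for brevity, and the function and variable names (including the default eta) are ours.

import numpy as np

def ellipsoid_coreset(X_pool, centers, n_acquire, eta=0.5):
    # `centers` initializes D0, e.g. with the k-means centers described in the text.
    D0 = list(centers)
    coreset_idx = []
    x_prev = D0[-1]          # stands in for the "previous acquisition" at the first step
    for _ in range(n_acquire):
        d_near = np.linalg.norm(
            X_pool[:, None, :] - np.asarray(D0)[None, :, :], axis=-1).min(axis=1)
        x_star = X_pool[d_near.argmax()]                       # max-min candidate, Eq. (4)
        x_e = x_prev + eta * (x_star - x_prev)                 # ellipsoid rescaling, Eq. (11)
        j = int(np.linalg.norm(X_pool - x_e, axis=1).argmin()) # snap to nearest pool point
        coreset_idx.append(j)
        D0.append(X_pool[j])
        x_prev = X_pool[j]
    return coreset_idx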
Importing the core-set into D0 derives the deep learning model to return b informative acquisitions one time, where θ yields a deep learning model. Ranking those samples, we select b′ samples with the highest representations as the batch outputs. The iterations of batch acquisitions stop until its budget is exhaust. The final update on D0 is our acquisition set of AL.\nDetails of the hyperparameter settings are presented at Appendix C.6.\nAlgorithm 1: Two-stage GBALD Algorithm 1 Input: Data set X , core-set size NM, batch returns b, batch output b′, iteration budget A. 2 Initialization: α ← 0, core-setM← ∅. 3 Stage 1© begins: 4 Initialize θ to yield a geometric probability model w.r.t. Eq. (5). 5 Perform k-means to initialize U to D0. 6 Core-set construction begins by acquiring x∗i , 7 for i← 1,2, ...,NM do\n8 x∗i ← arg maxxi∈Du minDi∈D0 ⎧⎪⎪ ⎨ ⎪⎪⎩ ∥L0 −L∥ 2 + logp(yi∣xi, θ) ⎫⎪⎪ ⎬ ⎪⎪⎭ , where\nL0 ← ∣∑ N i=1Eyi[logp(yi∣xi, θ) +H[yi∣xi,U]]∣.\n9 Ellipsoid geodesic line scales x∗i : x ∗ i ← arg minxj∈Du ∥xj − [xi + η(x ∗ − xi)]∥.\n10 Update x∗i into core-setM:M← x ∗ i ∪M. 11 Update N ← N − 1. 12 end 13 Import core-set to update D0: D0 ←M ∪ U ′, where U ′ updates each element of U into their\nnearest samples in X . 14 Stage 2© begins: 15 Initialize θ to yield a deep learning model. 16 while α < A do 17 Return b informative deep learning acquisitions in one budget:\n{x∗1, x ∗ 2, ..., x ∗ b}← arg maxx∈Du H[θ∣D0] −Ey∼p(y∣x,D0)[H[θ∣x, y,D0]].\n18 Rank b′ informative acquisitions with the highest geometric representativeness: {x∗t1 , ..., x ∗ tb′ }← arg maxx∗i ∈{x∗1,x∗2,...,x∗b} p(yi∣x ∗ i , θ). 19 Update {x∗t1 , ..., x ∗ tb′ } into D0: D0 ← D0 ∪ {x∗t1 , ..., x ∗ tb′\n}. 20 α ← α + 1. 21 end 22 Output: final update on D0." }, { "heading": "B. GENERALIZATION ERRORS OF GEODESIC SEARCH WITH SPHERE AND ELLIPSOID", "text": "Optimizing with ellipsoid geodesic linearly rescales the spherical search on tighter geometric object. The inherent motivation is that the ellipsoid can prevent the updates of core-set towards the boundary regions of the sphere. This section presents generalization error analysis from geometry, which provides feasible error guarantees for geodesic search with ellipsoid, following the perceptron analysis. Proofs are presented in Appendix D." }, { "heading": "B.1 ERROR BOUNDS OF GEODESIC SEARCH WITH SPHERE", "text": "Given a perceptron function h ∶=w1x1 +w2x2 +w3, the classification task is over two classes A and B embedded in a three-dimensional geometry. Let SA and SB be the spheres that tightly cover A and B, respectively, where SA is with a center ca and radius Ra, and SB is with a center cb and radius Rb. Under this setting, generalization analysis is presented. Theorem 1. Given a linear perceptron function h = w1x1 +w2x2 +w3 that classifies A and B, and a sampling budget k, with representation sampling over SA and SB , the minimum distances to the boundaries of SA and SB of that representation data are defined as da and db, respectively. 
Let err(h, k) be the classification error rate with respect to h and k, π\nϕ = arcsinRa−da Ra , we then have an\ninequality of error:\nmin ⎧⎪⎪ ⎨ ⎪⎪⎩ 4R3a − (2Ra + tk)(Ra − tk) 2 4R3a + 4R 3 b , 4R3b − (2Rb + t ′ k)(Rb − t ′ k) 2 4R3b + 4R 3 a ⎫⎪⎪ ⎬ ⎪⎪⎭ < err(h, k) < 1 k ,\nwhere tk = R2a 3 + 3\n√\n− µk 2π +\n√ µ2 k\n4π2 − π3R3a 27π3 + 3\n√\n− µk 2π −\n√ µ2 k\n4π2 − π3R3a 27π3 , µk = ( 2k−43k − 1 ϕ cosπ ϕ )πR3a −\n4πR3b 3k , t′k = R2b 3 + 3\n√\n− µ′ k\n2π +\n√ µ′2 k\n4π2 − π3R3 b 27π3 + 3\n√\n− µ′ k\n2π −\n√ µ′2 k\n4π2 − π3R3 b 27π3 , and µ′k = ( 2k−4 3k − 1 ϕ cosπ ϕ )πR3b −\n4πR3a 3k ." }, { "heading": "B.2 ERROR BOUNDS OF GEODESIC SEARCH WITH ELLIPSOID", "text": "Given class A and B are tightly covered by ellipsoid Ea and Eb in a three-dimensional geometry. Let Ra1 be the polar radius of Ea, {Ra2 ,Ra3} be the equatorial radii of Ea, Rb1 be the polar radius of Eb, and {Rb2 ,Rb3} be the equatorial radii of Eb, the generalization analysis is ready to present following these settings. Theorem 2. Given a linear perceptron function h = w1x1 +w2x2 +w3 that classifies A and B, and a sampling budget k, with representation sampling over Ea and Eb, the minimum distance to the boundaries of Ea and Eb of that representation data are defined as da and db, respectively. Let err(h, k) be the classification error rate with respect to h and k, π\nϕ = arcsinRa−da Ra , we then have an\ninequality of error:\nmin ⎧⎪⎪ ⎨ ⎪⎪⎩ 4∏iRai − (2Ra1 + λk)(Ra1 − λk) 2 4∏iRai + 4∏iRbi , 4∏iRbi − (2Rb1 + λ ′ k)(Rb1 − λ ′ k) 2 4∏iRbi + 4∏iRai ⎫⎪⎪ ⎬ ⎪⎪⎭ < err(h, k) < 1 k ,\nwhere i = 1,2,3, λk = R2a1 3 + 3\n√\n− σk 2π +\n√ σ2 k\n4π2 − π3R3a1 27π3 + 3\n√\n− σk 2π −\n√ σ2 k\n4π2 − π3R3a1 27π3 , σk = ( 2k−43k −\nπRa1 2ϕ )π∏iRai − 4π∏iRbi 3k , λ′k = R2b1 3 + 3\n√\n− σk 2π +\n√ σ′2 k\n4π2 − π3R3 b1 27π3 + 3\n√\n− σk 2π −\n√ σ2 k\n4π2 − π3R3 b1 27π3 , and\nσ′k = ( 2k−4 3k − πRb1 2ϕ )π∏iRbi − 4π∏iRai 3k ." }, { "heading": "B.3 PROBABILITY BOUNDS OF ACHIEVING A ZERO GENERALIZATION ERROR", "text": "Let Pr[err(h, k) = 0]Sphere and Pr[err(h, k) = 0]Ellipsoid be the probabilities of achieving a zero generalization error of geodesic search with sphere and ellipsoid, respectively, we present their inequality relationship. Theorem 3. Based on γ-tube theory (Ben-David & Von Luxburg, 2008) of clustering stability, the probability of achieving a zero generalization error of geodesic search with ellipsoid can be defined as the volume ratio of Vol(Tube)\nVol(Sphere) . Then, we have:\nPr[err(h, k) = 0]Sphere = 1 − t3k R3a ,\nwhere tk keeps consistent with Theorem 1. Theorem 4. Based on γ-tube theory (Ben-David & Von Luxburg, 2008) of clustering stability, the probability of achieving a zero generalization error of geodesic search with ellipsoid can be defined as the volume ratio of Vol(Tube)\nVol(Ellipsoid) . Then, we have:\nPr[err(h, k) = 0]Ellipsoid = 1 − λk1λk2λk3 Ra1Ra2Ra3 ,\nwhere λki = R2ai 3 + 3\n√\n− σki 2π +\n√ σ2 ki\n4π2 − π3R3ai 27π3 + 3\n√\n− σki 2π −\n√ σ2 ki\n4π2 − π3R3ai 27π3 , and σki = ( 2k−4 3k −\nπRai 2ϕ )πRa1Ra2Ra3 − 4πRb1Rb2Rb3 3k , i = 1,2,3." }, { "heading": "B.4 MAIN THEORETICAL RESULTS OF GEODESIC SEARCH WITH ELLIPSOID", "text": "" }, { "heading": "TIGHTER LOWER ERROR BOUND", "text": "Let lower[err(h, k)]Sphere and lower[err(h, k)]Ellipsoid be the lower bounds of the generalization errors of geodesic search with sphere and ellipsoid, respectively. With Ra1 < Ra, compare Theorem 1 and Theorem 2, we have the following proposition. Proposition 1. 
Given a linear perceptron function h = w1x1 +w2x2 +w3 that classifies A and B, and a sampling budget k. With representation sampling over Sa and Sb, let err(h, k) be the classification error rate with respect to h and k, the lower bounds of geodesic search with sphere and ellipsoid satisfy: lower[err(h, k)]Ellipsoid < lower[err(h, k)]Sphere." }, { "heading": "HIGHER PROBABILITY OF ACHIEVING A ZERO ERROR", "text": "Let Pr[err(h, k) = 0]Sphere and Pr[err(h, k) = 0]Ellipsoid be the probabilities of achieving a zero generalization error of geodesic search with sphere and ellipsoid, respectively. Their relationship is presented in Proposition 2. Proposition 2. Given a linear perceptron function h = w1x1 +w2x2 +w3 that classifies A and B, and a sampling budget k. With representation sampling over Ea and Eb, let err(h, k) be the classification error rate with respect to h and k, the probabilities of geodesic search with sphere and ellipsoid satisfy: Pr[err(h, k) = 0]Ellipsoid > Pr[err(h, k) = 0]Sphere.\nWith these theoretical results, geometric interpretation of geodesic search over ellipsoid is more effective than sphere due to tighter lower error bound and higher probability to achieve zero error. Theorems 6 and 7 of Appendix D then present a high-dimensional generalization for the above theoretical results in terms of the volume functions of sphere and ellipsoid." }, { "heading": "C. EXPERIMENTS", "text": "" }, { "heading": "C.1 BASELINES", "text": "To evaluate the performance of GBALD, several typical baselines from the latest deep AL literatures are selected.\n• Bayesian active learning by disagreement (BALD) (Houlsby et al., 2011). It has been introduced in Section 3.\n• Maximize Variation Ratio (Var) (Gal et al., 2017). The algorithm chooses the unlabeled data that maximizes its variation ratio of the probability:\nx∗ = arg max x∈Du {1 −max y∈Y Pr(y∣, x,D0))}. (15)\n• Maximize Entropy (Entropy) (Gal et al., 2017). The algorithm chooses the unlabeled data that maximizes the predictive entropy:\nx∗ = arg max x∈Du { − ∑ y∈Y Pr(y∣x,D0))log(Pr(y∣x,D0))}. (16)\nThe parameter settings of Eq. (5) are R0 = 2.0e + 3 and η =0.9. Accuracy of each acquired dataset size of the experiments are averaged over 3 runs." }, { "heading": "C.2 ACTIVE ACQUISITIONS WITH REPEATED SAMPLES", "text": "Repeatedly collecting samples in the establishment of a database is very common. Those repeated samples may be continuously evaluated as the primary acquisitions of AL due to the lack of one or more kinds of class labels. Meanwhile, this situation may lead the evaluation of the model uncertainty to fall into repeated acquisitions. To respond this collecting situation, we compare the acquisition performance of BALD, Var, and GBALD using 5,000 and 10,000 repeated samples from the first 5,000 and 10,000 unlabeled data of SVHN, respectively. In addition, the unsupervised algorithms which do not interact with the network architecture, such as k-medoids and k-centers, have been shown that they cannot accelerate the training in terms of the experiment results of Section 5.3. Thus, we are no longer studying their performance. The network architecture still follows the settings of the MLP as Section 5.3.\nThe acquisition results over the repeated SVHN datasets are presented in Figure 7. The batch sizes of the compared baselines are 100, where GBALD ranks 300 acquisitions to select 100 data for the training, i.e. b = 300, b′ = 100. The mean±std values of these baselines of the breakpoints (i.e. 
{0,100,200, ...,10000}) are reported in Table 4. The results demonstrate that GBALD shows milder perturbations under repeated samples than Var and BALD because it draws the core-set from the input distribution as the initial acquisition, leading to a small probability of sampling from one or a few fixed classes. In GBALD, the informative acquisitions, constrained by geometric representations, further scatter the acquisitions across different classes. However, the Var and BALD algorithms have no particular scheme against repeated acquisitions; the maximizer of the model uncertainty may be repeatedly produced by those repeated samples. In addition, unsupervised algorithms such as k-medoids and k-centers do not have these limitations, but they cannot accelerate the training since they do not interact with the network architecture." }, { "heading": "Figure 8", "text": "[Plot panels: (a) Var, batch size 100; (b) BALD, batch size 100; (c) GBALD, batch size 300 — test accuracy vs. acquired dataset size under different numbers of noisy labels.]
Figure 8: Active noisy acquisitions on SVHN with 5,000 and 10,000 noisy labels." }, { "heading": "C.3 ACTIVE ACQUISITIONS WITH NOISY SAMPLES", "text": "Noisy labels (Golovin et al., 2010; Han et al., 2018) are inevitable due to human errors in data annotation. Training on noisy labels degrades the inherent properties of the neural network model. To assess how the above acquisition algorithms are perturbed by noisy labels, we organize the following experiment scenarios: we select the first 5,000 and 10,000 samples from the unlabeled data pool of the MNIST dataset and reset their labels by shifting {‘0’,‘1’,...,‘8’} to {‘1’,‘2’,...,‘9’}, respectively. The network architecture follows the MLP of Section 5.3. The selected baselines are Var and BALD.
Figure 7 presents the acquisition results of these baselines with noisy labels. The batch sizes of the compared baselines are 100, where GBALD ranks 300 acquisitions to select 100 data points for training, i.e. b = 300, b′ = 100. Table 5 presents the mean±std values of the breakpoints (i.e. {0,100,200, ...,10000}) over the learning curves of Figure 8. The results further show that GBALD suffers smaller noisy perturbations than the other baselines. For Var and BALD, the model uncertainty assigns high probability to sampling those noisy data because of the large model updates they induce." }, { "heading": "C.5 ACCELERATION OF ACCURACY", "text": "Acceleration of accuracy, i.e. the first-order differences of the breakpoints of the learning curve, describes the efficiency of the active acquisition loops. Different from the accuracy curves, the acceleration curve reflects how the active acquisitions help the convergence of the interacting DNN model.
We first present the acceleration curves of the different baselines on the MNIST, SVHN, and CIFAR10 datasets, following the experiments of Section 5.3. The acceleration curves of active acquisitions are drawn in Figure 9.
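As a concrete illustration, the acceleration values plotted in these curves can be obtained as simple first-order differences of the recorded breakpoint accuracies. The snippet below is a minimal sketch under that reading; the function name is illustrative, and any smoothing or normalization applied for Figure 9 is not specified here.

```python
import numpy as np

def acceleration_curve(accuracies, breakpoints=None):
    """First-order differences of a learning curve.

    accuracies:  test accuracy recorded at each breakpoint (e.g. every 100 acquisitions)
    breakpoints: optional acquired dataset sizes; if given, the differences are
                 normalized into an accuracy gain per additional labeled sample.
    """
    accs = np.asarray(accuracies, dtype=float)
    accel = np.diff(accs)
    if breakpoints is not None:
        accel = accel / np.diff(np.asarray(breakpoints, dtype=float))
    return accel
```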
Observing the acceleration curves of the different algorithms, it is clear that GBALD consistently keeps a higher acceleration of accuracy than the other baselines on the three benchmark datasets. This reveals why GBALD can derive more informative and representative data to maximally update the DNN model.
The acceleration curves of active acquisitions with repeated samples (Appendix C.2) are presented in Figure 10. As shown in this figure, GBALD is perturbed less by the number of repeated samples than Var and BALD, due to its effective ranking scheme when optimizing the model uncertainty of the DNN. The acceleration curves of active noisy acquisitions (Appendix C.3) are drawn in Figure 11. Compared to Figure 7, it gives a more intuitive description of the noisy perturbations of the different baselines. Comparing horizontally with the acceleration curves of Var and BALD, our proposed GBALD suffers smaller noisy perturbations due to 1) the powerful core-set which properly captures the input distribution, and 2) highly representative and informative acquisitions of model uncertainty." }, { "heading": "C.6 HYPERPARAMETER SETTINGS", "text": "What is the proper time to start active acquisitions using Eq. (14) in the GBALD framework? Does the ratio of core-set and model-uncertainty acquisitions affect the performance of GBALD?
We discuss the key hyperparameter of GBALD here: the core-set size NM. Table 6 presents the relationship between accuracies and core-set sizes, where the start accuracy denotes the test accuracy over the initial core-set, and the ultimate accuracy denotes the test accuracy over up to Q = 20,000 training data. Let b = 1000 and b′ = 500 in GBALD and let NM be the core-set size; the iteration budget A of GBALD can then be defined as A = (Q − NM)/b′. For example, if the number of initial core-set labels is set to NM = 1,000, we have A = (Q − NM)/b′ ≈ 38; if NM = 2,000, then A = (Q − NM)/b′ ≈ 36.
From Table 6, the GBALD algorithm keeps stable start, ultimate, and mean±std accuracies once more than 1,000 core-set labels are provided. Therefore, drawing sufficient core-set labels using Eq. (10) before starting the model-uncertainty acquisitions of Eq. (14) can maximize the performance of our GBALD framework.
Hyperparameter settings on batch returns b and batch outputs b′. The experiments of Sections 5.1 and 5.2 used different b and b′ to observe the parameter perturbations. No matter the settings of b′ and b, GBALD still outperforms BALD. For single acquisitions of GBALD, we suggest b = 3 and b′ = 1. For batch acquisitions, the settings of b′ and b are user-defined according to the time cost and hardware resources.
Hyperparameter setting on iteration budget A. Given the acquisition budget Q, let b′ be the number of outputs returned at each loop and NM be the core-set size; the iteration budget A of GBALD can then be defined as A = (Q − NM)/b′.
Other hyperparameter settings. Eq. (5) has one parameter R0 which describes the geometric prior on the probability. The default radius of the inner balls R0 is used to keep the prior valid and has no further influence on Eq. (10). It is set to R0 = 2.0e + 3 for the three image datasets. The ellipsoid geodesic is adjusted by η, which controls how far the core-set updates move toward the boundaries of the distributions.
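As a rough illustration of the role of η, the geodesic rescaling step of Line 9 in Algorithm 1 can be sketched as follows. This is only a schematic reading of that line, not the implementation used in the experiments; the function and variable names are illustrative, and the candidate pool and distance computations are simplified to dense Euclidean operations.

```python
import numpy as np

def geodesic_rescale(x_i, x_star, pool, eta=0.9):
    """Scale the acquisition x_star along the segment from x_i, then snap it to the pool.

    x_i:    current anchor point (1D feature vector)
    x_star: the max-min acquisition before rescaling
    pool:   array (n, d) of remaining unlabeled feature vectors
    eta:    how far to move from x_i toward x_star (eta = 1 keeps x_star itself)
    """
    target = x_i + eta * (x_star - x_i)              # point on the segment, pulled back from the boundary
    dists = np.linalg.norm(pool - target, axis=1)    # nearest real sample to the rescaled target
    return pool[np.argmin(dists)]
```

A smaller η keeps the update closer to x_i, while η close to 1 lets the acquisition approach the boundary of the distribution.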
It is set as η = 0.9 in this paper.\nC.7 TWO-SIDED t-TEST\nWe present two-sided (two-tailed) t-test for the learning curves of Figure 5. Different to the mean± std of Table 1, t-test can enlarge the significant difference of those baselines. In typical t-test, the two groups of observations usually require a degree of freedom smaller than 30. However, the numbers of breakpoints of MNIST, SVHN, and CIFAR10 are 61, 101, and 201, respectively, thereby holding a degree of freedom of 60, 100, 200, respectively. It is thus we introduce t-test score to directly compare the significant difference of pairwise baselines.\nt-test score between any pair group of breakpoints are defined as follows. Let B1 = {α1, α2, ..., αn} and B2 = {β1, β2, ..., βn}, there exists t-score of\nt − score = √ n µ\nσ ,\nwhere µ = 1 n ∑ n i=1(αi − βi), and σ = √ 1 n−1 ∑ n i=1(αi − βi − µ) 2.\nIn two-sided t-test, B1 beats B2 on breakpoints αi and βi satisfying a condition of t − score > ν; B2 beats B1 on breakpoints αi and βi satisfying a condition of t − score < −ν, where ν denotes the hypothesized criterion with a given confidence risk. Following (Ash et al., 2019), we add a penalty of 1\ne to each pair of breakpoints, which further enlarges their differences in the aggregated penalty matrix, where e denotes the number of B1 beats B2 on all breakpoints. All penalty values finally calculate their L1 expressions.\nFigure 12 presents the penalty matrix over learning curves of Figure 5. Column-wise values at the bottom of each matrix show the overall performance of the compared baselines. As the shown results, GBALD has significant performances than that of the other baselines over the three datasets. Especially for SVHN, it has superior performance." }, { "heading": "D. PROOFS", "text": "We firstly present the generalization errors of k = 3 of AL with sphere as a case study. The assumption of the generalization analysis is described in Figure 13.\nD.1 CASE STUDY OF GENERALIZATION ANALYSIS OF err(h,3) OF GEODESIC SEARCH WITH" }, { "heading": "SPHERE", "text": "Theorem 5. Given a linear perceptron function h = w1x1 +w2x2 +w3 that classifies A and B, and a sampling budget k. With representation sampling over Sa and Sb, the minimum distance to the boundaries of SA and SB of that representation data are defined as da and db, respectively. Let err(h, k) be the classification error rate with respect to h and k, π\nϕ = arcsinRa−da Ra , we have an\ninequality of error:\nmin ⎧⎪⎪ ⎨ ⎪⎪⎩ (2Ra + t)(Ra − t) 2 4R3a + 4R 3 b , (2Rb + t ′)(Rb − t ′)2 4R3b + 4R 3 a ⎫⎪⎪ ⎬ ⎪⎪⎭ < err(h,3) < 0.3334,\nwhere t = R 2 a\n3 +\n3\n√\n− µ 2π +\n√ µ2\n4π2 − π3R3a 27π3 + 3\n√\n− µ 2π −\n√ µ2\n4π2 − π3R3a 27π3 , µ = ( 2 9 − 1 ϕ cosπ ϕ )πR3a − 4πR3b 9 ,\nt′ = R2b 3 + 3\n√\n− µ′\n2π +\n√ µ′2\n4π2 − π3R3 b 27π3 + 3\n√\n− µ′\n2π −\n√ µ′2\n4π2 − π3R3 b 27π3 , and µ′ = ( 2 9 − 1 ϕ cosπ ϕ )πR3b − 4πR3a 9 .\nProof. Given the unseen acquisitions of {qa, qb, q∗}, where qa ∈ A, qb ∈ B, and q∗ ∈ A or q∗ ∈ B is uncertain. However, the position of q∗ largely decides h. Therefore, the proof studies the error bounds highly related to q∗ in terms of two cases: Ra ≥ Rb and Ra < Rb.\n1) If Ra ≥ Rb, q∗ ∈ A. Estimating the position of q∗ starts from the analysis on qa. Given the volume function Vol(⋅) over the 3-D geometry, we know: Vol(A) = 4π\n3 R3a and Vol(B) = 4π 3 R3b . Given k = 3\nover Sa and Sb, we define the minimum distance of qa to the boundary of A as da. 
Let Sa be cut off by a cross section h′, where C be the cut and D be the spherical cap1 of the half-sphere (see Figure 12) that satisfy\nVol(C) = 2\n3 πR3a −Vol(D) =\n4π(R3a +R 3 b)\n9 , (19)\nand the volume of D is\nVol(D) = π( √ R2a − (Ra − da) 2) 2 (Ra − da) + ∫\n2π √ R2a−(Ra−da)2\n0\narcsinRa−da Ra\n2π πR2a dx\n= π(R2a − (Ra − da) 2 )(Ra − da) +\narcsinRa−da Ra\n2π πR2a(2π\n√ R2a − (Ra − da) 2)\n= π(R2a − (Ra − da) 2 )(Ra − da) + πR 2 aarcsin Ra − da Ra √ R2a − (Ra − da) 2.\n(20)\n1https://en.wikipedia.org/wiki/Spherical_cap\nLet π ϕ = arcsinRa−da Ra , Eq. (20) can be written as\nVol(D) = π(R2a − (Ra − da) 2 )(Ra − da) + πR3a ϕ cos π ϕ . (21)\nIntroducing Eq. (21) to Eq. (19), we have\n( 2 3 − 1 ϕ cos π ϕ )πR3a − π(R 2 a − (Ra − da) 2 )(Ra − da) =\n4π(R3a +R 3 b)\n9 . (22)\nLet t = Ra − da, Eq. (22) can be rewritten as\nπt3 − πR2at + ( 2 9 − 1 ϕ cos π ϕ )πR3a − 4πR3b 9 = 0. (23)\nTo simplify Eq. (23), let µ = ( 2 9 − 1 ϕ cosπ ϕ )πR3a − 4πR3b 9\n, Eq. (23) then can be written as πt3 − πR2at + µ = 0. (24)\nThe positive solution of t can be\nt = R2a 3 + 3\n¿ Á ÁÀ\n− µ\n2π +\n√ µ2\n4π2 − π3R3a 27π3 + 3\n¿ Á ÁÀ\n− µ\n2π −\n√ µ2\n4π2 − π3R3a 27π3 . (25)\nBased on Eq. (19), we know\nVol(D) = 2\n3 πR3a −\n4π(R3a +R 3 b)\n9\n= 2\n9 πR3a −\n4 9 πR3b > 0.\n(26)\nThus, 3 √ 2Rb < Ra. We next prove q∗ ∈ A. Based on Eq. (26), we know\nπR3b < 1\n2 πR3a. (27)\nThen, the following inequalities hold: 1)2πR3b < πR 3 a, 2) 2 3 πR3b < 1 3 πR3a, and 3) 2 3 πR3b + 1 3 πR3b < 1 3 πR3a + 1 3 πR3b . Finally, we have\nπR3b < 1\n3 (πR3a + πR 3 b). (28)\nTherefore, Vol(B) < 1 3 (Vol(A) + Vol(B)). We thus know: 1) qa ∈ A and it is with a minimum distance da to the boundary of Sa, 2) qb ∈ B, and 3) q∗ ∈ A. Therefore, class B can be deemed as having a very high probability to achieve a nearly zero generalization error and the position of q∗ largely decides the upper bound of the generalization error of h.\nIn Sa that covers class A, the nearly optimal error region can be bounded as the spherical cap of Sa with a volume constraint of Vol(A) −Vol(C). We thence have an inequality of\nVol(A) −Vol(C) Vol(A) +Vol(B) < err(h,3) < 1 3 . (29)\nWe next calculate the volume of the spherical cap:\nVol(A) −Vol(C) = ∫ Ra−da\n−Ra πx2d y\n= π∫ Ra\nRa−da R2a − y 2d y\n= 4\n3 πR3a − πd 2 a(Ra − da 3 ).\n(30)\nEq. (29) then is rewritten as 4 3 πR3a − π 3 (3Ra − da)d 2 a\n4 3 πR3a + 4 3 πR3b\n< err(h,3) < 0.3334, (31)\nThen, we have the error bound of 4R3a − (3Ra − da)d 2 a\n4R3a + 4R 3 b\n< err(h,3) < 0.3334. (32)\nIntroducing da = Ra − t, Eq. (32) is written as 4R3a − (2Ra + t)(Ra − t) 2\n4R3a + 4R 3 b\n< err(h,3) < 0.3334. (33)\n2) With another assumption of Ra < Rb, we follow the same proof skills of Ra ≥ Rb and know 4R3b−(3Rb−db)d 2 b\n4R3 b +4R3a\n< err(h,3) < 0.3334, i.e. where db = Rb−t′ and t′ = R2b 3 + 3\n√\n− µ′\n2π +\n√ µ′2\n4π2 − π3R3 b 27π3 +\n3\n√\n− µ′\n2π −\n√ µ′2\n4π2 − π3R3 b 27π3 , and µ′ = ( 2 9 − 1 ϕ cosπ ϕ )πR3b − 4πR3a 9 .\nWe thus conclude that min ⎧⎪⎪ ⎨ ⎪⎪⎩ 4R3a−(2Ra+t)(Ra−t) 2 4R3a+4R3b , 4R3b−(2Rb+t ′)(Rb−t′)2 4R3 b +4R3a ⎫⎪⎪ ⎬ ⎪⎪⎭ < err(h,3) < 0.3334." }, { "heading": "D.2 PROOF OF THEOREM 1", "text": "We next present the generalization errors against an agnostic sampling budget k following the above proof technique.\nProof. The proof studies two cases: Ra ≥ Rb and Ra < Rb. 1) If Ra ≥ Rb, we estimate the optimal position of qa that satisfies qa ∈ A. Given the volume function Vol(⋅) over the 3-D geometry, we know: Vol(A) = 4π\n3 R3a and Vol(B) = 4π 3 R3b . 
Assume qa be the nearest representative data to the\nboundary of Sa, qb be the nearest representative data to the boundary of Sb, and q∗ be the nearest representative data to h either in Sa or Sb. Given the minimum distance of qa to the boundary of A as da. Let Sa be cut off by a cross section h′, where C be the cut and D be the spherical cap of the half-sphere that satisfy\nVol(C) = 2\n3 πR3a −Vol(D) =\n4π(R3a +R 3 b)\n3k , (34)\nand the volume of D is\nVol(D) = π( √ R2a − (Ra − da) 2) 2 (Ra − da) + ∫\n2π √ R2a−(Ra−da)2\n0\narcsinRa−da Ra\n2π πR2a dx\n= π(R2a − (Ra − da) 2 )(Ra − da) +\narcsinRa−da Ra\n2π πR2a(2π\n√ R2a − (Ra − da) 2)\n= π(R2a − (Ra − da) 2 )(Ra − da) + πR 2 aarcsin Ra − da Ra √ R2a − (Ra − da) 2.\n(35)\nLet π ϕ = arcsinRa−da Ra , Eq. (36) can be written as\nVol(D) = π(R2a − (Ra − da) 2 )(Ra − da) + πR3a ϕ cos π ϕ . (36)\nIntroducing Eq. (36) to Eq. (34), we have\n( 2 3 − 1 ϕ cos π ϕ )πR3a − π(R 2 a − (Ra − da) 2 )(Ra − da) =\n4π(R3a +R 3 b)\n3k . (37)\nLet tk = Ra − da, we know\nπt3 − πR2at + ( 2k − 4 3k − 1 ϕ cos π ϕ )πR3a − 4πR3b 3k = 0. (38)\nTo simplify Eq. (38), let µk = ( 2k−43k − 1 ϕ cosπ ϕ )πR3a − 4πR3b 3k\n, Eq. (38) then can be written as πt3 − πR2at + µk = 0. (39)\nThe positive solution of tk can be\ntk = R2a 3 + 3\n¿ Á ÁÀ\n− µk 2π + √ µ2k 4π2 − π3R3a 27π3 + 3\n¿ Á ÁÀ\n− µk 2π − √ µ2k 4π2 − π3R3a 27π3 . (40)\nBased on Eq. (35), we know\nVol(D) = 2\n3 πR3a −\n4π(R3a +R 3 b)\n3k\n= 2k − 4\n3k πR3a −\n4\n3k πR3b > 0.\n(41)\nThus, 3 √\n2 k−2Rb < Ra. We next prove q ∗ ∈ A. According to Eq. (41), we know\nπR3b < k − 2\n2 πR3a. (42)\nThen, the following inequalities hold: 1) 2 k−2πR 3 b < πR 3 a, 2) 2 (k−2)kπR 3 b < 1 k πR3a, and 3) 2 (k−2)kπR 3 b+ k2−2k−2 (k−2)k πR 3 b < 1 k πR3a + k2−2k−2 (k−2)k πR 3 b . Finally, we have:\nπR3b\n< 1\nk πR3a +\nk2 − 2k − 2\n(k − 2)k πR3b\n= 1\nk π(R3a +R 2 b) −\n2\n(k − 2)k πR3a\n< 1\nk π(R3a +R 3 b).\n(43)\nTherefore, Vol(B) < 1 k (Vol(A) + Vol(B)). We thus know: 1) qa ∈ A and it is with a minimum distance da to the boundary of Sa, 2) qb ∈ B, and 3) q∗ ∈ A. Therefore, class B can be deemed as having a very high probability to achieve a zero generalization error and the position of q∗ largely decides the upper bound of the generalization error of h.\nIn Sa that covers class A, the nearly optimal error region can be bounded as Vol(A) −Vol(C). We then have the inequality of\nVol(A) −Vol(C) Vol(A) +Vol(B) < err(h, k) < 1 k . (44)\nBased on the volume equation of the spherical cap in Eq. (30), we have 4 3 πR3a − π 3 (3Ra − da)d 2 a\n4 3 πR3a + 4 3 πR3b\n< err(h, k) < 1\nk . (45)\nThen, we have the error bound of 4R3a − (3Ra − da)d 2 a\n4R3a + 4R 3 b\n< err(h, k) < 1\nk . (46)\nIntroducing da = Ra − tk, Eq. (46) is written as 4R3a − (2Ra + t)(Ra − tk) 2\n4R3a + 4R 3 b\n< err(h, k) < 1\nk . (47)\n2) With another assumption of Ra < Rb, we follow the same proof skills of Ra ≥ Rb and know 4R3b−(3Rb−db)d 2 b\n4R3 b +4R3a\n< err(h, k) < 1 k , where db = Rb − t′k and t ′ k = R2b 3 + 3\n√\n− µ′ k\n2π +\n√ µ′2 k\n4π2 − π3R3 b 27π3 +\n3\n√\n− µ′ k\n2π −\n√ µ′2 k\n4π2 − π3R3 b 27π3 , and µ′k = ( 2k−4 3k − 1 ϕ cosπ ϕ )πR3b − 4πR3a 3k .\nWe thus conclude that min ⎧⎪⎪ ⎨ ⎪⎪⎩ 4R3a−(2Ra+tk)(Ra−tk) 2 4R3a+4R3b , 4R3b(2Rb+t ′ k)(Rb−t ′ k) 2 4R3 b +4R3a ⎫⎪⎪ ⎬ ⎪⎪⎭ < err(h, k) < 1 k ." }, { "heading": "D.3 PROOF OF THEOREM 2", "text": "Proof. Given class A and B are tightly covered by ellipsoid Ea and Eb in a three-dimensional geometry. 
Let Ra1 be the polar radius of Ea, {Ra2 ,Ra3} be the equatorial radii of Ea, Rb1 be polar radius of Eb, and {Rb2 ,Rb3} be the equatorial radii of Eb. Based on Eq. (10), we know Rai < Ra,Rbi < Rb,∀i, where Ra and Rb are the radii of the spheres over the class A and B, respectively. We follow the same proof technique of Theorem 1 to present the generalization errors of AL with ellipsoid.\nThe proof studies two cases: Ra1 ≥ Rb and Ra1 < Rb1 . 1) If Ra1 ≥ Rb1 , q ∗ ∈ A. Given the volume function Vol(⋅) over the 3-D geometry, we know: Vol(A) = 4π 3 Ra1Ra2Ra3 and Vol(B) = 4π 3 Rb1Rb2Rb3 . Given the minimum distance of qa to the boundary of A as da. Let Ea by cut off by a cross section h′, where C be the cut and D be the ellipsoid cap of the half-ellipsoid that satisfy\nVol(C) = 2\n3 πR1aR 2 aR 3 a −Vol(D) =\n4π(Ra1Ra2Ra3 +Rb1Rb2Rb3)\n3k , (48)\nand the volume of D is approximated as\nVol(D) ≈ π( √ R2a1 − (Ra1 − da) 2) 2 (Ra1 − da) + ∫ πRa2Ra3\n0\narcsin Ra1−da Ra1\n2π πR2a1 dx\n= π(R2a1 − (Ra1 − da) 2 )(Ra1 − da) +\narcsin Ra1−da Ra\n2π πR2a1(πRa2Ra3)\n= π(R2a1 − (Ra1 − da) 2 )(Ra1 − da) +\n1 2 πR2a1Ra2Ra3arcsin Ra1 − da Ra1 .\n(49)\nLet π ϕ = arcsin Ra1−da Ra1 , Eq. (49) can be written as\nVol(D) = π(R2a1 − (Ra1 − da) 2 )(Ra1 − da) +\nπ2 2ϕ R2a1Ra2Ra3 . (50)\nIntroducing Eq. (50) to Eq. (48), we have\n( 2 3 − πRa1 2ϕ )πRa1Ra2Ra3 − π(R 2 a1 − (Ra1 − da) 2 )(Ra1 − da) = 4π(Ra1Ra2Ra3 +Rb1Rb2Rb3) 3k .\n(51)\nLet λk = Ra1 − da, we know\nπλ3k − πR 2 aλk + (\n2k − 4 3k − πRa1 2ϕ )πRa1Ra2Ra3 − 4πRb1Rb2Rb3 3k = 0. (52)\nTo simplify Eq. (52), let σk = ( 2k−43k − πRa1 2ϕ )πRa1Ra2Ra3 − 4πRb1Rb2Rb3 3k , Eq. (52) then can be written as πλ3k − πR 2 a1λk + σk = 0. (53)\nThe positive solution of λk can be\nλk = R2a1 3 + 3\n¿ Á ÁÀ\n− σk 2π + √ σ2k 4π2 − π3R3a1 27π3 + 3\n¿ Á ÁÀ\n− σk 2π − √ σ2k 4π2 − π3R3a1 27π3 . (54)\nThe remaining proof process follows Eq. (40) to Eq. (46) of Theorem 1. We thus conclude that\nmin ⎧⎪⎪ ⎨ ⎪⎪⎩ 4∏iRai − (2Ra1 + λk)(Ra1 − λk) 2 4∏iRai + 4∏iRbi , 4∏iRbi − (2Rb1 + λ ′ k)(Rb1 − λ ′ k) 2 4∏iRbi + 4∏iRai ⎫⎪⎪ ⎬ ⎪⎪⎭ < err(h, k) < 1 k ,\n(55)\nwhere i = 1,2,3, λ′k = R2b1 3 + 3\n√\n− σk 2π +\n√ σ′2 k\n4π2 − π3R3 b1 27π3 + 3\n√\n− σk 2π −\n√ σ2 k\n4π2 − π3R3a1 27π3 , and σ′k = ( 2k−4 3k −\nπRb1 2ϕ )πRb1Rb2Rb3 − 4πRa1Ra2Ra3 3k . In a simple way, Ra1Ra2Ra3 and Rb1Rb2Rb3 can be written as ∏iRai and∏iRbi , i=1,2,3, respectively." }, { "heading": "D.4 PROOF OF THEOREM 3", "text": "Proof. In clustering stability, γ-tube structure that surrounds the cluster boundary largely decides the performance of a learning algorithm. Definition of γ-tube is as follows.\nDefinition 1. γ-tube Tubeγ(f) is a set of points distributed in the boundary of the cluster. Tubeγ(f) ∶= {x ∈X ∣`(x,B(f)) ≤ γ}, (56)\nwhere X is a noise-free cluster with n samples, B(f) ∶= {x ∈ X,f is discontinuous at x }, f is a clustering function, and `(⋅, ⋅) denotes the distance function.\nFollowing this conclusion, representation data can achieve the optimal generalization error if they are spread over the tube structure. Let γ = da, the probability of achieving a nearly zero generalization error can be expressed as the volume ration of γ-tube and Sa:\nPr[err(h, k) = 0]Sphere = Vol(Tubeγ)\nVol(Sa)\n=\n4 3 πR3a − 4 3 (Ra − da) 3\n4 3 πR3a\n= 1 − t3k R3a ,\n(57)\nwhere tk keeps consistent with Eq. (40). With the initial sampling from the tube structure of class A, the subsequent acquisitions of AL would be updated from the tube structure of class B. 
If the initial sampling comes from the tube structure of B, the next acquisition must be updated from the tube structure of A. With the updated acquisitions spread over the tube structures of both classes, h is easy to achieve a nearly zero error. Then Theorem 3 is as stated." }, { "heading": "D.5 PROOF OF THEOREM 4", "text": "Proof. Following the proof technique of Theorem 3, volume of the tube is redefined as Vol(Tubeγ) = 4 3 πRa1Ra2Ra3 . Then, we know\nPr[err(h, k) = 0]Ellipsoid = Vol(Tubeγ)\nVol(Ea)\n=\n4 3 πRa1Ra2Ra3 − 4 3 (Ra1 − da)(Ra2 − da)(Ra3 − da) 4 3 πRa1Ra2Ra3\n= 1 − λk1λk2λk3 Ra1Ra2Ra3 ,\n(58)\nwhere λki = R2ai 3 + 3\n√\n− σki 2π +\n√ σ2 ki\n4π2 − π3R3ai 27π3 + 3\n√\n− σki 2π −\n√ σ2 ki\n4π2 − π3R3ai 27π3 , and σki = ( 2k−4 3k −\nπRai 2ϕ )πRa1Ra2Ra3 − 4πRb1Rb2Rb3 3k , i = 1,2,3.\nTheorem 4 then is as stated." }, { "heading": "D.6 PROOF OF PROPOSITION 1", "text": "Proof. Let Cubea tightly covers Sa with a side length of 2Ra, and Cube ′ a tightly covers the cut C, following theorem 1, we know\nerr(h, k) > Vol(A) −Vol(C) Vol(A) > Cubea −Cube\n′ a\nCubea . (59)\nThen, we know\nerr(h, k) > πR3a − πR 2 ada\nπR3a\n= 1 − da Ra .\n(60)\nMeanwhile, let Cubee tightly covers Ea with a side length of 2Ra1 , Cubee ′ tightly covers C, following theorem 1, we know\nerr(h, k) > Vol(A) −Vol(C) Vol(A) > Cubee −Cube\n′ e\nCubee . (61)\nThen, we know\nerr(h, k) > πRa1Ra2Ra3 − πdaRa2Ra3\nπRa1Ra2Ra3\n= 1 − da Ra1 .\n(62)\nSince Ra1 < Ra, we know 1− da Ra > 1− da Ra1 . It is thus the lower bound of AL with ellipsoid is tighter than AL with sphere. Then, Proposition 1 holds." }, { "heading": "D.7 PROOF OF PROPOSITION 2", "text": "Proof. Following the proofs of Theorem 3:\nPr[err(h, k) = 0]Sphere = 1 − t3k R3a\n= 1 − (Ra − da)\n3\nR3a\n= 1 − ⎛\n⎝ 1 − da Ra ⎞ ⎠\n3\n.\n(63)\nFollowing the proofs Theorem 4:\nPr[err(h, k) = 0]Ellipsoid = 1 − λk1λk2λk3\nR3a1\n= 1 − (Ra1 − da)(Ra2 − da)(Ra3 − da)\nR3a1\n> 1 − ⎛\n⎝ 1 − da Ra1 ⎞ ⎠\n3\n.\n(64)\nBased on Proposition 1, 1 − da Ra > 1 − da Ra1 , therefore Pr[err(h, k) = 0]Sphere < Pr[err(h, k) = 0]Ellipsoid. Then, Proposition 2 is as stated.\nD.8 LOWER DIMENSIONAL GENERALIZATION OF THE d-DIMENSIONAL GEOMETRY\nWith the above theoretical results, we next present a connection between 3-D geometry and ddimensional geometry. The major technique is to prove that the volume of the 3-D geometry is a lower dimensional generalization of the d-dimensional geometry. It then can make all proof process from Theorems 1 to 5 hold in d-dimensional geometry.\nTheorem 6. Let Vold be the volume of a d-dimensional geometry, Vol3(Sa) is a low-dimensional generalization of Vold(Sa).\nProof. Given Sa over class A is defined with x21 + x 2 2 + x 2 3 = R 2 a. Let x 2 1 + x 2 2 = r 2 a be its 2-D generalization of Sa, assume that x2 be a variable parameter in this 2-D generalization formula, the “volume” (2-D volume is the area of the geometry object) of it can be expressed as\nVol2(Sa) = ∫ ra −ra 2 √ r2a − x 2 2dx2. (65)\nLet ϑ be an angle variable that satisfies x2 = rasin(ϑ), we know dx2 = racos(ϑ)dϑ. Then, Eq. (65) is rewritten as\nVol2(Sa) = ∫ π/2\n−π/2 2r2acos 2 (ϑ)dϑ\n= ∫\nπ/2\n0 4r2acos 2 (ϑ)dϑ.\n(66)\nFor a 3-D geometry, for the variable x3, it is over a cross-section which is a 2-dimensional ball (circle), where the radius of the ball can be expressed as racos(ϑ), s.t. ϑ ∈ [0, π]. Particularly, let Vol3(Sa) be the volume of Sa with 3 dimensions, the volume of this 3-dimensional sphere then can be written as\nVol3(Sa) = ∫ π/2\n0 2Vol2(racos(ϑ))ra(cos(ϑ))dϑ. (67)\nWith Eq. 
(67), volume of a d-dimensional geometry can be expressed as the integral over the (d-1)-dimensional cross-section of Sa\nVolm(Sa) = ∫ π/2\n0 2Volm-1(racos(ϑ))ra(cos(ϑ))dϑ, (68)\nwhere Volm-1 denotes the volume of (m-1)-dimensional generalization geometry of Sa.\nBased on Eq. (68), we know Vol3 can be written as\nVol3(Sa) = ∫ π/2\n0 2Vol2(Racos(ϑ))ra(cos(ϑ\n′ ))dϑ (69)\nIntroducing Eq. (66) into Eq. (69), we have\nVol3(Sa) = ∫ π/2\n0 2Vol2(Racos(ϑ\n′ ))ra(cos(ϑ ′ ))dϑ′\n= ∫\nπ/2\n0 ∫\nπ/2\n0 8(racos(ϑ\n′ ) 2cos2(ϑ′)ra(cos(ϑ))dϑ ′dϑ\n= 4\n3 πR2a s.t.Ra = ra.\n(70)\nTherefore, the generalization analysis results of the 3-D geometry still can hold in the high dimensional geometry. Then,\nTheorem 7. Let Vold be the volume of a d-dimensional geometry, Vol3(Ea) is a low-dimensional generalization of Vold(Ea).\nProof. The integral of Eq. (67) also can be adopted into the volume of Vol3(Ea) by transforming the area i.e. Vol2(Sa) into Vol2(Ea). Then, Eq. (68) follows this transform." } ]
2020
null
SP:09bbd1a342033a65e751a8878c23e3fa6facc636
[ "The authors propose a convolution as a message passing of node features over edges where messages are aggregated weighted by a \"direction\" edge field. Furthermore, the authors propose to use the gradients of Laplace eigenfunctions as direction fields. Presumably, the aggregation is done with different direction fields derived from the Laplace eigenfunctions with lowest eigenvalues, which are then linearly combined with learnable parameters. Doing so allows their graph network to behave more like a conventional CNN, in which the kernels have different parameters for signals from different directions. The authors achieve good results on several benchmarks. Furthermore, the authors prove that their method reduces to a conventional CNN on a rectangular grid and have theoretical results that suggest that their method suffers less from the \"over-smoothing\" and \"over-squashing\" problems." ]
In order to overcome the expressive limitations of graph neural networks (GNNs), we propose the first method that exploits vector flows over graphs to develop globally consistent directional and asymmetric aggregation functions. We show that our directional graph networks (DGNs) generalize convolutional neural networks (CNNs) when applied on a grid. Whereas recent theoretical works focus on understanding local neighbourhoods, local structures and local isomorphism with no global information flow, our novel theoretical framework allows directional convolutional kernels in any graph. First, by defining a vector field in the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then we propose the use of the Laplacian eigenvectors as such vector field, and we show that the method generalizes CNNs on an n-dimensional grid, and is provably more discriminative than standard GNNs regarding the Weisfeiler-Lehman 1-WL test. Finally, we bring the power of CNN data augmentation to graphs by providing a means of doing reflection, rotation and distortion on the underlying directional field. We evaluate our method on different standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and 11% to 32% on the molecular ZINC dataset. An important outcome of this work is that it enables to translate any physical or biological problems with intrinsic directional axes into a graph network formalism with an embedded directional field.
[]
[ { "authors": [ "Uri Alon", "Eran Yahav" ], "title": "On the bottleneck of graph neural networks and its practical implications", "venue": null, "year": 2020 }, { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convnets", "venue": "arXiv preprint arXiv:1711.07553,", "year": 2017 }, { "authors": [ "Michael M. Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "doi: 10.1109/MSP.2017.2693418. URL http://arxiv.org/abs/1611.08097", "year": 2017 }, { "authors": [ "Fan Chung", "S.T. Yau" ], "title": "Discrete green’s functions", "venue": "ISSN 00973165. doi: 10.1006/jcta.2000.3094. URL http://www.sciencedirect.com/science/ article/pii/S0097316500930942", "year": 2000 }, { "authors": [ "F.R.K. Chung", "F.C. Graham" ], "title": "CBMS Conference on Recent Advances in Spectral Graph Theory, National Science Foundation (U.S.), American Mathematical Society, and Conference Board of the Mathematical Sciences", "venue": "Spectral Graph Theory. CBMS Regional Conference Series. Conference Board of the mathematical sciences,", "year": 1997 }, { "authors": [ "Gabriele Corso", "Luca Cavalleri", "Dominique Beaini", "Pietro Liò", "Petar" ], "title": "Veličković. Principal neighbourhood aggregation for graph nets", "venue": "arXiv preprint arXiv:2004.05718,", "year": 2020 }, { "authors": [ "Vishwaraj Doshi", "Do Young Eun" ], "title": "Fiedler vector approximation via interacting random walks", "venue": null, "year": 2000 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks", "venue": "arXiv preprint arXiv:2003.00982,", "year": 2020 }, { "authors": [ "Miroslav Fiedler" ], "title": "Algebraic connectivity of graphs", "venue": "Czechoslovak Mathematical Journal,", "year": 1973 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "William L. Hamilton" ], "title": "Graph Representation Learning", "venue": null, "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", "Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs", "venue": null, "year": 2005 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Md Amirul Islam", "Sen Jia", "Neil D.B. Bruce" ], "title": "How much position information do convolutional neural networks encode? 
2020", "venue": "URL http://arxiv.org/abs/2001.08248", "year": 2001 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": null, "year": 2018 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Janek Groß", "Stephan Günnemann" ], "title": "Directional message passing for molecular graphs. 2019", "venue": "URL https://openreview.net/forum?id=B1eWbxStPH", "year": 2019 }, { "authors": [ "Boris Knyazev", "Graham W Taylor", "Mohamed Amer" ], "title": "Understanding attention and generalization in graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Risi Kondor", "Hy Truong Son", "Horace Pan", "Brandon Anderson", "Shubhendu Trivedi" ], "title": "Covariant compositional networks for learning graphs", "venue": "arXiv preprint arXiv:1801.02144,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Cornelius Lanczos" ], "title": "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. United States Governm", "venue": null, "year": 1950 }, { "authors": [ "Ron Levie", "Federico Monti", "Xavier Bresson", "Michael M. Bronstein" ], "title": "CayleyNets: Graph convolutional neural networks with complex rational spectral filters. 2018", "venue": "URL http: //arxiv.org/abs/1705.07664", "year": 2018 }, { "authors": [ "B. Levy" ], "title": "Laplace-beltrami eigenfunctions towards an algorithm that ”understands", "venue": "IEEE International Conference on Shape Modeling and Applications 2006 (SMI’06), pp", "year": 2006 }, { "authors": [ "Sitao Luan", "Mingde Zhao", "Xiao-Wen Chang", "Doina Precup" ], "title": "Break the ceiling: Stronger multiscale deep graph convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "arXiv preprint arXiv:1812.09902,", "year": 2018 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sarah O’Gara", "Kevin McGuinness" ], "title": "Comparing data augmentation strategies for deep image classification. 2019", "venue": "doi: http://doi.org10.21427/148b-ar75. URL https://arrow.tudublin", "year": 2019 }, { "authors": [ "Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter" ], "title": "An overview of early vision in InceptionV1", "venue": "ISSN 24760757. doi: 10.23915/distill.00024.002. URL https://distill.pub/2020/circuits/ early-vision", "year": 2020 }, { "authors": [ "Hao Peng", "Jianxin Li", "Qiran Gong", "Senzhang Wang", "Yuanxing Ning", "Philip S. 
Yu" ], "title": "Graph convolutional neural networks via motif-based attention", "venue": null, "year": 2019 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "DropEdge: Towards deep graph convolutional networks on node classification", "venue": null, "year": 2020 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua" ], "title": "Under review as a conference paper at ICLR", "venue": null, "year": 2021 }, { "authors": [ "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural", "venue": null, "year": 1904 }, { "authors": [ "Hu" ], "title": "BENCHMARKS AND DATASETS We use a variety of benchmarks proposed by Dwivedi et al", "venue": null, "year": 2020 }, { "authors": [ "Knyazev" ], "title": "2019), and results in a different number of super-pixels per graph", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of the most important distinctions between convolutional neural networks (CNNs) and graph neural networks (GNNs) is that CNNs allow for any convolutional kernel, while most GNN methods are limited to symmetric kernels (also called isotropic kernels in the literature) (Kipf & Welling, 2016; Xu et al., 2018a; Gilmer et al., 2017). There are some implementation of asymmetric kernels using gated mechanisms (Bresson & Laurent, 2017; Veličković et al., 2017), motif attention (Peng et al., 2019), edge features (Gilmer et al., 2017) or by using the 3D structure of molecules for message passing (Klicpera et al., 2019).\nHowever, to the best of our knowledge, there are currently no methods that allow asymmetric graph kernels that are dependent on the full graph structure or directional flows. They either depend on local structures or local features. This is in opposition to images which exhibit canonical directions: the horizontal and vertical axes. The absence of an analogous concept in graphs makes it difficult to define directional message passing and to produce an analogue of the directional frequency filters (or Gabor filters) widely present in image processing (Olah et al., 2020).\nWe propose a novel idea for GNNs: use vector fields in the graph to define directions for the propagation of information, with an overview of the paper presented in 1. Hence, the aggregation or message passing will be projected onto these directions so that the contribution of each neighbouring node nv will be weighted by its alignment with the vector fields at the receiving node nu. This enables our method to propagate information via directional derivatives or smoothing of the features.\nWe also explore using the gradients of the low-frequency eigenvectors of the Laplacian of the graph φk, since they exhibit interesting properties (Bronstein et al., 2017; Chung et al., 1997). In particular, they can be used to define optimal partitions of the nodes in a graph, to give a natural ordering (Levy, 2006), and to find the dominant directions of the graph diffusion process (Chung & Yau, 2000). Further, we show that they generalize the horizontal and vertical directional flows in a grid (see\nfigure 2), allowing them to guide the aggregation and mimic the asymmetric and directional kernels present in computer vision. In fact, we demonstrate mathematically that our work generalizes CNNs by reproducing all convolutional kernels of radius R in an n-dimensional grid, while also bringing the powerful data augmentation capabilities of reflection, rotation or distortion of the directions.\nWe further show that our directional graph network (DGN) model theoretically and empirically allows for efficient message passing across distant communities, which reduces the well-known problem of over-smoothing, and aligns well with the need of independent aggregation rules (Corso et al., 2020). Alternative methods reduce the impact of over-smoothing by using skip connections (Luan et al., 2019), global pooling (Alon & Yahav, 2020), or randomly dropping edges during training time (Rong et al., 2020), but without solving the underlying problem. 
In fact, we also prove that DGN is more discriminative than standard GNNs in regards to the Weisfeiler-Lehman 1-WL test, showing that the reduction of over-smoothing is accompanied by an increase of expressiveness.\nOur method distinguishes itself from other spectral GNNs since the literature usually uses the low frequencies to estimate local Fourier transforms in the graph (Levie et al., 2018; Xu et al., 2019). Instead, we do not try to approximate the Fourier transform, but only to define a directional flow at each node and guide the aggregation." }, { "heading": "2 THEORETICAL DEVELOPMENT", "text": "" }, { "heading": "2.1 INTUITIVE OVERVIEW", "text": "One of the biggest limitations of current GNN methods compared to CNNs is the inability to do message passing in a specific direction such as the horizontal one in a grid graph. In fact, it is difficult to define directions or coordinates based solely on the shape of the graph.\nThe lack of directions strongly limits the discriminative abilities of GNNs to understand local structures and simple feature transformations. Most GNNs are invariant to the permutation of the neighbours’ features, so the nodes’ received signal is not influenced by swapping the features of 2 neighbours. Therefore, several layers in a deep network will be employed to understand these simple changes instead of being used for higher level features, thus over-squashing the message sent between 2 distant nodes (Alon & Yahav, 2020).\nIn this work, one of the main contributions is the realisation that low-frequency eigenvectors of the Laplacian can overcome this limitation by providing a variety of intuitive directional flows. As a first example, taking a grid-shaped graph of sizeN×M with N2 < M < N , we find that the eigenvector\nassociated to the smallest non-zero eigenvalue increases in the direction of the width N and the second one increases in the direction of the height M . This property generalizes to n-dimensional grids and motivated the use of gradients of eigenvectors as preferred directions for general graphs.\nWe validated this intuition by looking at the flow of the gradient of the eigenvectors for a variety of graphs, as shown in figure 2. For example, in the Minnesota map, the first 3 non-constant eigenvectors produce logical directions, namely South/North, suburb/city, and West/East.\nAnother important contribution also noted in figure 2 is the ability to define any kind of direction based on prior knowledge of the problem. Hence, instead of relying on eigenvectors to find directions in a map, we can simply use the cardinal directions or the rush-hour traffic flow." }, { "heading": "2.2 VECTOR FIELDS IN A GRAPH", "text": "Based on a recent review from Bronstein et al. (2017), this section presents the ideas of differential geometry applied to graphs, with the goal of finding proper definitions of scalar products, gradients and directional derivatives.\nLet G = (V,E) be a graph with V the set of vertices and E ⊂ V × V the set of edges. The graph is undirected meaning that (i, j) ∈ E iff (j, i) ∈ E. Define the vector spaces L2(V ) and L2(E) as the set of maps V → R and E → R with x,y ∈ L2(V ) and F ,H ∈ L2(E) and scalar products\n〈x,y〉L2(V ) := ∑ i∈V xiyi , 〈F ,H〉L2(E) := ∑ (i,j)∈E F(i,j)H(i,j) (1)\nThink of E as the “tangent space” to V and of L2(E) as the set of “vector fields” on the space V with each row Fi,: representing a vector at the i-th node. 
Define the pointwise scalar product as the map L2(E) × L2(E) → L2(V) taking two vector fields and returning their inner product at each point of V; at node i it is defined by equation 2.
⟨F, H⟩_i := ∑_{j:(i,j)∈E} F_{i,j} H_{i,j}   (2)
In equation 3, we define the gradient ∇ as a mapping L2(V) → L2(E) and the divergence div as a mapping L2(E) → L2(V), thus leading to an analogue of the directional derivative in equation 4.
(∇x)_{(i,j)} := x(j) − x(i) ,   (div F)_i := ∑_{j:(i,j)∈E} F_{(i,j)}   (3)
Definition 1. The directional derivative of the function x on the graph G in the direction of the vector field F̂, where each vector is of unit norm, is
D_F̂ x(i) := ⟨∇x, F̂⟩_i = ∑_{j:(i,j)∈E} (x(j) − x(i)) F̂_{i,j}   (4)
|F| will denote the absolute value of F and ||F_{i,:}||_{Lp} the Lp-norm of the i-th row of F. We also define the forward/backward directions as the positive/negative parts of the field F±." }, { "heading": "2.3 DIRECTIONAL SMOOTHING AND DERIVATIVES", "text": "Next, we show how the vector field F is used to guide the graph aggregation by projecting the incoming messages. Specifically, we define the weighted aggregation matrices B_av and B_dx that allow computing the directional smoothing and directional derivative of the node features.
The directional average matrix B_av is the weighted aggregation matrix such that all weights are positive and all rows have an L1-norm equal to 1, as shown in equation 5 and theorem 2.1, with a proof in appendix C.1.
B_av(F)_{i,:} = |F_{i,:}| / (||F_{i,:}||_{L1} + ε)   (5)
The variable ε is an arbitrarily small positive number used to avoid floating-point errors. The L1-norm denominator is a local row-wise normalization. The aggregator works by assigning a large weight to the elements in the forward or backward direction of the field, while assigning a small weight to the other elements, with a total weight of 1. Theorem 2.1 (Directional smoothing). The operation y = B_av x is the directional average of x, in the sense that y_u is the mean of x_v, weighted by the direction and amplitude of F.
The directional derivative matrix B_dx is defined in (6) and theorem 2.2, with the proof in appendix C.2. Again, the denominator is a local row-wise normalization but can be replaced by a global normalization. diag(a) is a square, diagonal matrix with diagonal entries given by a. The aggregator works by subtracting the projected backward message from the forward message (similar to a centered derivative), with an additional diagonal term to balance both directions.
B_dx(F)_{i,:} = F̂_{i,:} − diag(∑_j F̂_{:,j})_{i,:} ,   F̂_{i,:} = F_{i,:} / (||F_{i,:}||_{L1} + ε)   (6)
Theorem 2.2 (Directional derivative). Suppose F̂ has rows of unit L1 norm. The operation y = B_dx(F̂)x is the centered directional derivative of x in the direction of F, in the sense of equation 4, i.e.
y = D_F̂ x = (F̂ − diag(∑_j F̂_{:,j})) x
These aggregators are directional, interpretable and complementary, making them ideal choices for GNNs. We discuss the choice of aggregators in more detail in appendix A, while also providing alternative aggregation matrices such as the center-balanced smoothing, the forward-copy, the phantom zero-padding, and the hardening of the aggregators using softmax/argmax on the field. We further provide a visual interpretation of the B_av and B_dx aggregators in figure 3. Interestingly, we also note in appendix A.1 that B_av and B_dx yield respectively the mean and Laplacian aggregations when F is a vector field such that all entries are constant, F_{ij} = ±C.
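To make equations 5 and 6 concrete, the snippet below is a minimal dense NumPy sketch of how B_av and B_dx can be built from a field matrix F (with F[i, j] = 0 whenever (i, j) is not an edge). It is an illustration of the definitions rather than an efficient or official implementation; the function name and the dense-matrix representation are assumptions.

```python
import numpy as np

def directional_aggregators(F, eps=1e-8):
    """Build B_av (eq. 5) and B_dx (eq. 6) from a vector field F.

    F: (n, n) matrix with F[i, j] the field component on edge (i, j),
       and F[i, j] = 0 when (i, j) is not an edge.
    """
    row_l1 = np.abs(F).sum(axis=1, keepdims=True)       # ||F_{i,:}||_{L1}
    B_av = np.abs(F) / (row_l1 + eps)                    # directional smoothing, rows sum to ~1
    F_hat = F / (row_l1 + eps)                           # L1-normalized signed field
    B_dx = F_hat - np.diag(F_hat.sum(axis=1))            # directional derivative, rows sum to ~0
    return B_av, B_dx

# A DGN-style layer can then mix several aggregations of the node features X of shape (n, d),
# e.g. B_av @ X and B_dx @ X alongside a plain mean aggregation, before the usual MLP update.
```

For the field of section 2.4, F[i, j] would be set to φ1(j) − φ1(i) on the edges of the graph, following the gradient of equation 3.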
[Figure panels: the directional smoothing aggregation B_av(F)x and the directional derivative aggregation B_dx(F)x at a node v with neighbours u1, u2, u3, where x_u is the feature at node u and F_{v,u} is the directional vector field between v and u; the derivative panel combines a weighted forward derivative with u1, weighted backward derivatives with u2 and u3, and the sum of the absolute weights.]
Figure 3: Illustration of how the directional aggregation works at a node n_v, with the arrows representing the direction and intensity of the field F." }, { "heading": "2.4 GRADIENT OF THE EIGENVECTORS AS INTERPRETABLE VECTOR FIELDS", "text": "In this section we give theoretical support for the choice of gradients of the eigenfunctions of the Laplacian as sensible vectors along which to do directional message passing, since they are interpretable and allow reducing the over-smoothing.
As usual, the combinatorial, degree-normalized and symmetric normalized Laplacians are defined as
L = D − A ,   L_norm = D^{-1} L ,   L_sym = D^{-1/2} L D^{-1/2}   (7)
The problems of over-smoothing and over-squashing are critical issues in GNNs (Alon & Yahav, 2020; Hamilton, 2020). In most GNN models, node representations become over-smoothed after several rounds of message passing (i.e., convolutions), as the representations tend to reach a mean-field equilibrium equivalent to the stationary distribution of a random walk (Hamilton, 2020). Over-smoothing is also related to the problem of over-squashing, which reflects the inability of GNNs to propagate informative signals between distant nodes (Alon & Yahav, 2020) and is a major bottleneck to training deep GNN models (Xu et al., 2019). Both problems are related to the fact that the influence of one node's input on the final representation of another node in a GNN is determined by the likelihood of the two nodes co-occurring on a truncated random walk (Xu et al., 2018b).
We show in theorem 2.3 (proved in appendix C.3) that by passing information in the direction of φ1, the eigenvector associated to the lowest non-trivial frequency of L_norm, DGNs can efficiently share information between the farthest nodes of the graph, when using the K-walk distance to measure the difficulty of passing information. Thus, DGNs provide a natural way to address both the over-smoothing and over-squashing problems: they can efficiently propagate messages between distant nodes and in a direction that counteracts over-smoothing. Definition 2 (K-walk distance). The K-walk distance d_K(v_i, v_j) on a graph is the average number of times v_i is hit in a K-step random walk starting from v_j. Theorem 2.3 (K-Gradient of the low-frequency eigenvectors). Let λ_i and φ_i be the eigenvalues and eigenvectors of the normalized Laplacian of a connected graph L_norm and let a, b = argmax_{1≤i,j≤n} {d_K(v_i, v_j)} be the nodes that have the highest K-walk distance. Let m = argmin_{1≤i≤n} (φ1)_i and M = argmax_{1≤i≤n} (φ1)_i; then d_K(v_m, v_M) − d_K(v_a, v_b) has order O(1 − λ_2).
As another point of view on the problem of over-smoothing, consider the hitting time Q(x, y) defined as the expected number of steps in a random walk starting from node x and ending in node y with the transition probability P(x, y) = 1/d_x. In appendix C.4 we give an informal argument supporting the following conjecture. Definition 3 (Gradient step). Suppose the two neighboring nodes x and z are such that φ(z) − φ(x) is maximal among the neighbors of x; then we will say z is obtained from x by taking a step in the direction of the gradient ∇φ.
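As a concrete sketch of how such a field can be obtained in practice, the snippet below computes φ1 of L_norm on a connected graph and sets the field on each edge to the gradient of equation 3, F_{(i,j)} = φ1(j) − φ1(i). It is a simplified illustration only (dense linear algebra, function name chosen here), and it does not handle the eigenvector sign and multiplicity issues discussed further below.

```python
import numpy as np

def low_frequency_field(A, k=1):
    """Return the k-th low-frequency eigenvector of L_norm and its edge gradient.

    A: (n, n) symmetric adjacency matrix of a connected graph.
    """
    deg = A.sum(axis=1)
    L_sym = np.eye(len(A)) - A / np.sqrt(np.outer(deg, deg))   # D^{-1/2} L D^{-1/2}
    vals, vecs = np.linalg.eigh(L_sym)                         # eigenvalues in ascending order
    phi = vecs[:, k] / np.sqrt(deg)                            # corresponding eigenvector of L_norm = D^{-1} L
    F = A * (phi[None, :] - phi[:, None])                      # F[i, j] = phi(j) - phi(i) on the edges
    return phi, F
```

The returned F can then be fed to the B_av and B_dx constructions of the previous section; flipping its sign or combining several eigenvectors relates to the augmentations of section 2.8.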
Conjecture 2.4 (Gradient steps reduce expected hitting time). Suppose that x, y are uniformly distributed random nodes such that φi(x) < φi(y). Let z be the node obtained from x by taking one step in the direction of∇φi, then the expected hitting time is decreased proportionally to λ−1i and\nEx,y[Q(z, y)] ≤ Ex,y[Q(x, y)]\nThe next two corollaries follow from theorem 2.3 (and also conjecture 2.4 if it is true). Corollary 2.5 (Reduces over-squashing). Following the direction of ∇φ1 is an efficient way of passing information between the farthest nodes of the graph (in terms of the K-walk distance). Corollary 2.6 (Reduces over-smoothing). Following the direction of∇φ1 allows the influence distribution between node representations to be decorrelated from random-walk hitting times (assuming the definition of influence introduced in Xu et al. (2018b)).\nOur method also aligns perfectly with a recent proof that multiple independent aggregators are needed to distinguish neighbourhoods of nodes with continuous features (Corso et al., 2020).\nWhen using eigenvectors of the Laplacian φi to define directions in a graph, we need to keep in mind that there is never a single eigenvector associated to an eigenvalue, but a whole eigenspace.\nFor instance, a pair of eigenvalues can have a multiplicity of 2 meaning that they can be generated by different pairs of orthogonal eigenvectors. For an eigenvalue of multiplicity 1, there are always two unit norm eigenvectors of opposite sign, which poses a problem during the directional aggregation. We can make a choice of sign and later take the absolute value (i.e. Bav in equation 5). An alternative is to take a sample of orthonormal basis of the eigenspace and use each choice to augment the training (see section 2.8). Although multiplicities higher than one do happen for low-frequencies (square grids have a multiplicity 2 for λ1) this is not common in “real-world graphs”; we found no λ1 multiplicity greater than 1 on the ZINC and PATTERN datasets (see appendix B.4). Further, although all φ are orthogonal, their gradients, used to define directions, are not always locally orthogonal (e.g. there are many horizontal flows in the grid). This last concern is left to be addressed in future work." }, { "heading": "2.5 GENERALIZATION OF THE CONVOLUTION ON A GRID", "text": "In this section we show that our method generalizes CNNs by allowing to define any radius-R convolutional kernels in grid-shaped graphs. The radius-R kernel at node u is a convolutional kernel that takes the weighted sum of all nodes v at a distance d(u, v) ≤ R. Consider the lattice graph Γ of size N1 × N2 × ... × Nn where each vertices are connected to their direct non-diagonal neighbour. We know from Lemma C.1 that, for each dimension, there is an eigenvector that is only a function of this specific dimension. For example, the lowest frequency eigenvectorφ1 always flows in the direction of the longest length. Hence, the Laplacian eigenvectors of the grid can play a role analogous to the axes in Euclidean space, as shown in figure 2.\nWith this knowledge, we show in theorem 2.7 (proven in C.7), that we can generalize all convolutional kernels in an n-dimensional grid. This is a strong result since it demonstrates that our DGN framework generalizes CNNs when applied on a grid, thus closing the gap between GNNs and the highly successful CNNs on image tasks. Theorem 2.7 (Generalization radius-R convolutional kernel in a lattice). 
For an n-dimensional lattice, any convolutional kernel of radius R can be realized by a linear combination of directional aggregation matrices and their compositions.\nAs an example, figure 4 shows how a linear combination of the first and m-th aggregators B(∇φ1,m) realize a kernel on an N ×M grid, where m = dN/Me and N > M ." }, { "heading": "2.6 EXTENDING THE RADIUS OF THE AGGREGATION KERNEL", "text": "Having aggregation kernels for neighbours of distance 2 or 3 is important to improve the expressiveness of GNNs, their ability to understand patterns, and to reduce the number of layers required. However, the lack of directions in GNNs strongly limits the radius of the kernels since, given a graph of regular degree d, a mean/sum aggregation at a radius-R will result in a heavy over-squashing of dR messages. Using the directional fields, we can enumerate different paths, thus assigning a different weight for different R-distant neighbours. This method, proposed in appendix A.7, avoids the over-squashing, but empirical results are left for future work." }, { "heading": "2.7 COMPARISON WITH WEISFEILER-LEHMAN (WL) TEST", "text": "We also compare the expressiveness of the Directional Graph Networks with the classical WL graph isomorphism test which is often used to classify the expressivity of graph neural networks (Xu et al., 2018a). In theorem 2.8 (proven in appendix C.8) we prove that DGNs are capable of distinguishing pairs of graphs that the 1-WL test (and so ordinary GNNs) cannot differentiate.\nTheorem 2.8 (Comparison with 1-WL test). DGNs using the mean aggregator, any directional aggregator of the first eigenvector and injective degree-scalers are strictly more powerful than the 1-WL test." }, { "heading": "2.8 DATA AUGMENTATION", "text": "Another important result is that the directions in the graph allow to replicate some of the most common data augmentation techniques used in computer vision, namely reflection, rotation and distortion. The main difference is that, instead of modifying the image (such as a 5◦ rotation), the proposed transformation is applied on the vector field defining the aggregation kernel (thus rotating the kernel by−5◦ without changing the image). This offers the advantage of avoiding to pre-process the data since the augmentation is done directly on the kernel at each iteration of the training.\nThe simplest augmentation is the vector field flipping, which is done changing the sign of the field F , as stated in definition 4. This changes the sign ofBdx, but leavesBav unchanged. Definition 4 (Reflection of the vector field). For a vector field F , the reflected field is −F .\nLet F1,F2 be vector fields in a graph, with F̂1 and F̂2 being the field normalized such that each row has a unitary L2-norm. Define the angle vector α by 〈(F̂1)i,:, (F̂2)i,:〉 = cos(αi). The vector field F̂⊥2 is the normalized component of F̂2 perpendicular to F̂1. The equation below defines F̂ ⊥ 2 . The next equation defines the angle\n(F̂⊥2 )i,: = (F̂2 − 〈F̂1, F̂2〉F̂1)i,: ||(F̂2 − 〈F̂1, F̂2〉F̂1)i,:||\nNotice that we then have the decomposition (F̂2)i,: = cos(αi)(F̂1)i,: + sin(αi)(F̂⊥2 )i,:.\nDefinition 5 (Rotation of the vector fields). For F̂1 and F̂2 non-colinear vector fields with each vector of unitary length, their rotation by the angle θ in the plane formed by {F̂1, F̂2} is\nF̂ θ1 = F̂1diag(cos θ)+ F̂ ⊥ 2 diag(sin θ) , F̂ θ 2 = F̂1diag(cos(θ+α))+ F̂ ⊥ 2 diag(sin(θ+α)) (8)\nFinally, the following augmentation has a similar effect to a wave distortion applied on images. 
Definition 6 (Random distortion of the vector field). For vector field F and anti-symmetric random noise matrixR, its randomly distorted field is F ′ = F +R ◦A." }, { "heading": "3 IMPLEMENTATION", "text": "We implemented the models using the DGL and PyTorch libraries and we provide the code at the address https://anonymous.4open.science/r/a752e2b1-22e3-40ce-851c-a564073e1fca/. We test our method on standard benchmarks from Dwivedi et al. (2020) and Hu et al. (2020), namely ZINC, CIFAR10, PATTERN and MolHIV with more details on the datasets and how we enforce a fair comparison in appendix B.1.\nFor the empirical experiments we inserted our proposed aggregation method in two different type of message passing architecture used in the literature: a simple one similar to the one present in GCN (equation 9a) (Kipf & Welling, 2016) and a more complex and general one typical of MPNN (9b) (Gilmer et al., 2017) with or without edge features eji. Hence, the time complexity O(Em) is identical to the PNA (Corso et al., 2020), where E is the number of edges and m the number of aggregators, with an additional O(Ek) to pre-compute the k-first eigenvectors, as explained in the appendix B.2.\nX (t+1) i = U ( ⊕ (j,i)∈E X (t) j ) (9a) X(t+1)i = U ( X (t) i , ⊕ (j,i)∈E M ( X (t) i , X (t) j , eji︸︷︷︸\noptional\n)) (9b)\nwhere ⊕\nis an operator which concatenates the results of multiple aggregators, X is the node features, M is a linear transformation and U a multiple layer perceptron.\nWe tested the directional aggregators across the datasets using the gradient of the first k eigenvectors ∇φ1,...,k as the underlying vector fields. Here, k is a hyperparameter, usually 1 or 2, but could be bigger for high-dimensional graphs. To deal with the arbitrary sign of the eigenvectors, we take the absolute value of the result of equation 6, making it invariant to a reflection of the field. In case of a disconnected graph, φi is the i-th eigenvector of each connected component. Despite the numerous aggregators proposed in appendix A, onlyBdx andBav are tested empirically." }, { "heading": "4 RESULTS AND DISCUSSION", "text": "Directional aggregation Using the benchmarks introduced in section 3, we present in figure 5 a fair comparison of various aggregation strategies using the same parameter budget and hyperparameters. We see a consistent boost in the performance for simple, complex and complex with edges models using directional aggregators compared to the mean-aggregator baseline.\nIn particular, we see a significant improvement in ZINC and MolHIV using the directional aggregators. We believe this is due to the capacity to move efficiently messages across opposite parts of the molecule and to better understand the role of atom pairs. Further, the thesis that DGNs can bridge the gap between CNNs and GNNs is supported by the clear improvements on CIFAR10 over the baselines. This contrasts with the positional encoding which showed no clear improvement.\nWith our theoretical analysis in mind, we expected to perform well on PATTERN since the flow of the first eigenvectors are meaningful directions in a stochastic block model and passing messages using those directions allows the network to efficiently detect the two communities. The results match our expectations, outperforming all the previous models.\nComparison to the literature In order to compare our model with the literature, we fine-tuned it on the various datasets and we report its performance in figure 6. 
We observe that DGN provides significant improvement across all benchmarks, highlighting the importance of anisotropic kernels. In the work by Dwivedi et al. (2020), they proposed the use of positional encoding of the eigenvectors in node features, but these bring significant improvement when many eigenvectors and high network depths are used. Our results outperform them with fewer parameters, less depth, and only 1-2 eigenvectors, further motivating their use as directional flows instead of positional encoding.\nData augmentation To evaluate the effectiveness of the proposed augmentation, we trained the models on a reduced version of the CIFAR10 dataset. The results in figure 7 show clearly a higher expressive power of the dx aggregator, enabling it to fit well the training data. For a small dataset, this comes at the cost of overfitting and a reduced test-set performance, but we observe that randomly rotating or distorting the kernels counteracts the overfitting and improves the generalization.\nAs expected, the performance decreases when the rotation or distortion is too high since the augmented graph changes too much. In computer vision images similar to CIFAR10 are usually rotated by less than 30◦ (Shorten & Khoshgoftaar; O’Gara & McGuinness, 2019). Further, due to the constant number of parameters across models, less parameters are attributed to the mean aggregation in\nthe directional models, thus it cannot fit well the data when the rotation/distortion is too strong since the directions are less informative. We expect large models to perform better at high angles." }, { "heading": "5 CONCLUSION", "text": "The proposed DGN method allows to solve many problems of GNNs, including the lack of anisotropy, the low expressiveness, the over-smoothing and over-squashing. For the first time in graph networks, we generalize the directional properties of CNNs and their data augmentation capabilities. Based on an intuitive idea and backed by a set of strong theoretical and empirical results, we believe this work will give rise to a new family of directional GNNs. Future work can focus on the implementation of radius-R kernels and improving the choice of multiple orthogonal directions.\nBroader Impact This work will extend the usability of graph networks to all problems with physically defined directions, thus making GNN a new laboratory for physics, material science and biology. In fact, the anisotropy present in a wide variety of systems could be expressed as vector fields (spinor, tensor) compatible with the DGN framework, without the need of eigenvectors. One example is magnetic anisotropicity in metals, alloys and also in molecules such as benzene ring, alkene, carbonyl, alkyne that are easier or harder to magnetise depending on the directions or which way the object is rotated. Other examples are the response of materials to high electromagnetic fields (e.g. to study material responses at terahertz frequency); all kind of field propagation in crystals lattices (vibrations, heat, shear and frictional force, young modulus, light refraction, birefringence); multi-body or liquid motion; traffic modelling; and design of novel materials and constrained structures. This also enables GNNs to be used for virtual prototyping systems since the added directional constraints could improve the analysis of a product’s functionality, manufacturing and behavior." 
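As a concrete illustration of the kernel-space augmentations evaluated above (Definitions 4-6 and equation 8 in Section 2.8), here is a minimal numpy sketch. The scalar angle theta, the noise scale and the helper row_normalise are assumptions made for the example; the paper applies these transformations to the fields defining the aggregation kernels at each training iteration, not to the input graph itself.

```python
import numpy as np

def reflect_field(F):
    """Definition 4: reflection of the vector field."""
    return -F

def rotate_field(F1, F2, theta, eps=1e-8):
    """Definition 5 (sketch): rotate the row-normalised field F1 by a scalar angle theta in the
    plane spanned, row by row, by F1 and F2.  (Equation 8 also rotates F2; omitted here.)"""
    def row_normalise(F):
        return F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)
    F1n, F2n = row_normalise(F1), row_normalise(F2)
    cos_alpha = (F1n * F2n).sum(axis=1, keepdims=True)   # per-row angle between the two fields
    F2_perp = row_normalise(F2n - cos_alpha * F1n)       # component of F2 orthogonal to F1
    return np.cos(theta) * F1n + np.sin(theta) * F2_perp

def distort_field(F, A, scale=0.1, rng=None):
    """Definition 6: add an antisymmetric random perturbation, masked by the adjacency A."""
    rng = np.random.default_rng() if rng is None else rng
    R = rng.normal(scale=scale, size=F.shape)
    R = (R - R.T) / 2.0                                  # antisymmetric noise matrix
    return F + R * A
```

In practice F1 and F2 could be the edge-masked gradients of two Laplacian eigenvectors, as in the earlier sketches, so that the rotated kernel stays supported on the edges of the graph.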
}, { "heading": "AUTHOR CONTRIBUTIONS", "text": "Anonymous" }, { "heading": "ACKNOWLEDGMENTS", "text": "Anonymous" }, { "heading": "A APPENDIX - CHOICES OF DIRECTIONAL AGGREGATORS", "text": "This appendix helps understand the choice of Bav and Bdx in section 2.3 and presents different directional aggregators that can be used as an alternative to the ones proposed.\nA simple alternative to the directional smoothing and directional derivative operator is to simply take the forward/backward values according to the underlying positive/negative parts of the field F , since it can effectively replicate them. However, there are many advantage of using Bav,dx. First, one can decide to use either of them and still have an interpretable aggregation with half the parameters. Then, we also notice that Bav,dx regularize the parameter by forcing the network to take both forward and backward neighbours into account at each time, and avoids one of the neighbours becoming too important. Lastly, they are robust to a change of sign of the eigenvectors since Bav is sign invariant and Bdx will only change the sign of the results, which is not the case for forward/backward aggregations." }, { "heading": "A.1 RETRIEVING THE MEAN AND LAPLACIAN AGGREGATIONS", "text": "It is interesting to note that we can recover simple aggregators from the aggregation matrices Bav(F ) and Bdx(F ). Let F be a vector field such that all edges are equally weighted Fij = ±C for all edges (i, j). Then, the aggregatorBav is equivalent to a mean aggregation:\nBav(F )x = D −1Ax\nUnder the condition Fij = C, the differential aggregator is equivalent to a Laplacian operator L normalized using the degreeD\nBdx(CA)x = D −1(A−D)x = −D−1Lx" }, { "heading": "A.2 GLOBAL FIELD NORMALIZATION", "text": "The proposed aggregators are defined with a row-wise normalized field\nF̂i,: = Fi,:\n||Fi,:||LP\nmeaning that all the vectors are of unit-norm and the aggregation/message passing is done only according to the direction of the vectors, not their amplitude. However, it is also possible to do a global normalization of the field F by taking a matrix-norm instead of a vector-norm. Doing so will modulate the aggregation by the amplitude of the field at each node. One needs to be careful since a global normalization might be very sensitive to the number of nodes in the graph." }, { "heading": "A.3 CENTER-BALANCED AGGREGATORS", "text": "A problem arises in the aggregators Bdx and Bav proposed in equations 5 and 6 when there is an imbalance between the positive and negative terms of F±. In that case, one of the directions overtakes the other in terms of associated weights.\nAn alternative is also to normalize the forward and backward directions separately, to avoid having either the backward or forward direction dominating the message.\nBav−center(F )i,: = F ′+i,: + F ′− i,:\n||F ′+i,j + F ′− i,j ||L1\n, F ′±i,: = |F±i,: |\n||F±i,: ||L1 + (10)\nThe same idea can be applied to the derivative aggregator equation 11 where the positive and negative parts of the field F± are normalized separately to allow to project both the forward and backward messages into a vector field of unit-norm. F+ is the out-going field at each node and is used for the forward direction, while F− is the in-going field used for the backward direction. 
By averaging the forward and backward derivatives, the proposed matrix Bdx-center represents the centered derivative matrix.\nBdx-center(F )i,: = F ′ i,: − diag ∑ j F ′:,j i,: , F ′i,: = 1 2 F + i,:\n||F+i,:||L1 + ︸ ︷︷ ︸ forward field\n+ F−i,:\n||F−i,: ||L1 + ︸ ︷︷ ︸ backward field\n (11)" }, { "heading": "A.4 HARDENING THE AGGREGATORS", "text": "The aggregation matrices that we proposed, mainly Bdx and Bav depend on a smooth vector field F . At any given node, the aggregation will take a weighted sum of the neighbours in relation to the direction of F . Hence, if the field Fv at a node v is diagonal in the sense that it gives a non-zero weight to many neighbours, then the aggregator will compute a weighted average of the neighbours.\nAlthough there are clearly good reasons to have this weighted-average behaviour, it is not necessarily desired in every problem. For example, if we want to move a single node across the graph, this behaviour will smooth the node at every step. Instead, we propose below to soften and harden the aggregations by forcing the field into making a decision on the direction it takes.\nSoft hardening the aggregation is possible by using a softmax with a temperature T on each row to obtain the field Fsofthard.\n(Fsofthard)i,: = sign(Fi,:)softmax(T |Fi,:|) (12)\nHardening the aggregation is possible by using an infinite temperature, which changes the softmax functions into argmax. In this specific case, the node with the highest component of the field will be copied, while all other nodes will be ignored.\n(Fhard)i,: = sign(Fi,:)argmax(|Fi,:|) (13)\nAn alternative to the aggregators above is to take the softmin/argmin of the negative part and the softmax/argmax of the positive part." }, { "heading": "A.5 FORWARD AND BACKWARD COPY", "text": "The aggregation matrices Bav and Bdx have the nice property that if the field is flipped (change of sign), the aggregation gives the same result, except for the sign of Bdx. However, there are cases where we want to propagate information in the forward direction of the field, without smoothing it with the backward direction. In this case, we can define the strictly forward and strictly backward fields below, and use them directly with the aggregation matrices.\nFforward = F + , Fbackward = F − (14)\nFurther, we can use the hardened fields in order to define a forward copy and backward copy, which will simply copy the node in the direction of the highest field component.\nFforward copy = F + hard , Fbackward copy = F − hard (15)" }, { "heading": "A.6 PHANTOM ZERO-PADDING", "text": "Some recent work in computer vision has shown the importance of zero-padding to improve CNNs by allowing the network to understand it’s position relative to the border (Islam et al., 2020). In contrast, using boundary conditions or reflection padding makes the network completely blind to positional information. In this section, we show that we can mimic the zero-padding in the direction of the field F for both aggregation matricesBav andBdx.\nStarting with theBav matrix, in the case of a missing neighbour in the forward/backward direction, the matrix will compensate by adding more weights to the other direction, due to the denominator which performs a normalization. Instead, we would need the matrix to consider both directions separately so that a missing direction would result in zero padding. 
Hence, we define Bav,0pad below, where either the F+ or F− will be 0 on a boundary with strictly in-going/out-going field.\n(Bav,0pad)i,: = 1\n2\n( |F+i,:|\n||F+i,:||L1 + + |F−i,: | ||F−i,: ||L1 +\n) (16)\nFollowing the same argument, we define Bdx,0pad below, where either the forward or backward term is ignored. The diagonal term is also removed at the boundary so that the result is a center derivative equal to the subtraction of the forward term with the 0-term on the back (or vice-versa), instead of a forward derivative.\nBdx−0pad(F )i,: = F ′+i,: if ∑ j F ′− i,j = 0 F ′−i,: if ∑ j F ′+ i,j = 0\n1 2 ( F ′+i,: + F ′− i,: − diag (∑ j F ′+ :,j + F ′− :,j ) i,: ) , otherwise\nF ′+i,: = F+i,:\n||F+i,:||L1 + F ′−i,: =\nF−i,:\n||F−i,: ||L1 +\n(17)" }, { "heading": "A.7 EXTENDING THE RADIUS OF THE AGGREGATION KERNEL", "text": "We aim at providing a general radius-R kernelBR that assigns different weights to different subsets of nodes nu at a distance R from the center node nv .\nFirst, we decompose the matrix B(F ) into positive and negative parts B±(F ) representing the forward and backward steps aggregation in the field F .\nB(F ) = B+(F )−B−(F ) (18)\nThus, defining B±fb(F )i,: = F±i,:\n||Fi,:||Lp , we can find different aggregation matrices by using different combinations of walks of radius R. First demonstrated for a grid in theorem 2.7, we generalize it in equation 19 for any graph G. Definition 7 (General radius R n-directional kernel). Let Sn be the group of permutations over n elements with a set of directional fields Fi.\nBR := ∑\nV={v1,v2,...,vn}∈Nn ||V ||L1≤R, −R≤vi≤R︸ ︷︷ ︸\nAny choice of walk V with at mostR steps using all combinations of v1, v2, ..., vn\n∑ σ∈Sn︸︷︷︸ optional\npermutations\naV N∏ j=1 (B sgn(vσ(j)) fb (Fσ(j))) |vσ(j)|\n︸ ︷︷ ︸ Aggregator following the steps V , permuted by Sn\n(19)\nIn this equation, n is the number of directional fields and R is the desired radius. V represents all the choices of walk {v1, v2, ..., vn} in the direction of the fields {F1,F2, ...,Fn}. For example, V = {3, 1, 0,−2} has a radius R = 6, with 3 steps forward of F1, 1 step forward of F2, and 2 steps backward of F4. The sign of eachB±fb is dependant to the sign of vσ(j), and the power |vσ(j)| is the\nnumber of aggregation steps in the directional field Fσ(j). The full equation is thus the combination of all possible choices of paths across the set of fields Fi, with all possible permutations. Note that we are restricting the sum to vi having only a possible sign; although matrices don’t commute, we avoid choosing different signs since it will likely self-intersect a lower radius walk. The permutations σ are required since, for example, the path up→ left is different (in a general graph) than the path left→ up.\nThis matrix BR has a total of ∑R r=0(2n) r = (2n) R+1−1\n2n−1 parameters, with a high redundancy since some permutations might be very similar, e.g. for a grid graph we have that up → left is identical to left → up. Hence, we can replace the permutation Sn by a reverse ordering, meaning that ∏N j Bj = BN ...B2B1. Doing so does not perfectly generalize the radius-R kernel for all graphs, but it generalizes it on a grid and significantly reduces the number of parameters to∑R r=0 ∑min(n,r) l=1 2 r ( n l )( r−1 l−1 ) ." }, { "heading": "B APPENDIX - IMPLEMENTATION DETAILS", "text": "" }, { "heading": "B.1 BENCHMARKS AND DATASETS", "text": "We use a variety of benchmarks proposed by Dwivedi et al. (2020) and Hu et al. (2020) to test the empirical performance of our proposed methods. 
In particular, to have a wide variety of graphs and tasks we chose:\n1. ZINC, a graph regression dataset from molecular chemistry. The task is to predict a score that is a subtraction of computed properties logP − SA, with logP being the computed octanol-water partition coefficient, and SA being the synthetic accessibility score (Jin et al., 2018).\n2. CIFAR10, a graph classification dataset from computer vision (Krizhevsky, 2009). The task is to classify the images into 10 different classes, with a total of 5000 training image per class and 1000 test image per class. Each image has 32× 32 pixels, but the pixels have been clustered into a graph of ∼ 100 super-pixels. Each super-pixel becomes a node in an almost grid-shaped graph, with 8 edges per node. The clustering uses the code from Knyazev et al. (2019), and results in a different number of super-pixels per graph.\n3. PATTERN, a node classification synthetic benchmark generated with Stochastic Block Models, which are widely used to model communities in social networks. The task is to classify the nodes into 2 communities and it tests the fundamental ability of recognizing specific predetermined subgraphs.\n4. MolHIV, a graph classification benchmark from molecular chemistry. The task is to predict whether a molecule inhibits HIV virus replication or not. The molecules in the training, validation and test sets are divided using a scaffold splitting procedure that splits the molecules based on their two-dimensional structural frameworks.\nOur goal is to provide a fair comparison to demonstrate the capacity of our proposed aggregators. Therefore, we compare the various methods on both types of architectures using the same hyperparameters tuned in previous works (Corso et al., 2020) for similar networks. The models vary exclusively in the aggregation method and the width of the architectures to keep a set parameter budget.\nIn CIFAR10 it is impossible to numerically compute a deterministic vector field with eigenvectors due to the multiplicity of λ1 being greater than 1. This is caused by the symmetry of the square image, and is extremely rare in real-world graphs. Therefore, we used as underlying vector field the gradient of the coordinates of the image. Note that these directions are provided in the nodes’ features in the dataset and available to all models, that they are co-linear to the eigenvectors of the grid as per lemma C.1, and that they mimic the inductive bias in CNNs.\nB.2 IMPLEMENTATION AND COMPUTATIONAL COMPLEXITY\nUnlike several more expressive graph networks (Kondor et al., 2018; Maron et al., 2018), our method does not require a computational complexity superlinear with the size of the graph. The calculation\nof the first k eigenvectors during pretraining, done using Lanczos method (Lanczos, 1950) and the sparse module of Scipy, has a time complexity of O(Ek) where E is the number of edges. During training the complexity is equivalent to a m-aggregator GNN O(Em) (Corso et al., 2020) for the aggregation and O(Nm) for the MLP.\nTo all the architectures we added residual connections (He et al., 2016), batch normalization (Ioffe & Szegedy, 2015) and graph size normalization (Dwivedi et al., 2020).\nFor all the datasets with non-regular graphs, we combine the various aggregators with logarithmic degree-scalers as in Corso et al. (2020).\nAn important thing to note is that, for dynamic graphs, the eigenvectors need to be re-computed dynamically with the changing edges. 
Fortunately, there are random walk based algorithms that can estimate φ1 quickly, especially for small changes to the graph (Doshi & Eun, 2000). In the current empirical results, we do not work with dynamic graphs." }, { "heading": "B.3 RUNNING TIME", "text": "The precomputation of the first four eigenvectors for all the graphs in the datasets takes 38s for ZINC, 96s for PATTERN and 120s for MolHIV on CPU. Table 1 shows the average running time on GPU for all the various model from figure 5. On average, the epoch running time is 16% slower for the DGN compared to the mean aggregation, but a faster convergence for DGN means that the total training time is on average 8% faster for DGN." }, { "heading": "B.4 EIGENVECTOR MULTIPLICITY", "text": "The possibility to define equivariant directions using the low-frequency Laplacian eigenvectors is subject to the uniqueness of those vectors. When the dimension of the eigenspaces associated with the lowest eigenvalues is 1, the eigenvectors are defined up to a constant factor. In section 2.4, we propose the use of unit vector normalization and an absolute value to eliminate the scale and sign ambiguity. When the dimension of those eigenspaces is greater than 1, it is not possible to define equivariant directions using the eigenvectors.\nFortunately, it is very rare for the Laplacian matrix to have repeated eigenvalues in real-world datasets. We validate this claim by looking at ZINC and PATTERN datasets where we found no graphs with repeated Fiedler vector and only one graph out of 26k with multiplicity of the second eigenvector greater than 1.\nWhen facing a graph that presents repeated Laplacian eigenvalues, we propose to randomly shuffle, during training time, different eigenvectors randomly sampled in the eigenspace. This technique will act as a data augmentation of the graph during training time allowing the network to train with multiple directions at the same time." }, { "heading": "C APPENDIX - MATHEMATICAL PROOFS", "text": "" }, { "heading": "C.1 PROOF FOR THEOREM 2.1 (DIRECTIONAL SMOOTHING)", "text": "The operation y = Bavx is the directional average of x, in the sense that yu is the mean of xv , weighted by the direction and amplitude of F .\nProof. This should be a simple proof, that if we want a weighted average of our neighbours, we simply need to multiply the weights by each neighbour, and divide by the sum of the weights. Of course, the weights should be positive." }, { "heading": "C.2 PROOF FOR THEOREM 2.2 (DIRECTIONAL DERIVATIVE)", "text": "Suppose F̂ have rows of unit L1 norm. The operation y = Bdx(F̂ )x is the centered directional derivative of x in the direction of F , in the sense of equation 4, i.e.\ny = DF̂x = ( F̂ − diag (∑ j F̂:,j )) x\nProof. Since F rows have unit L1 norm, F̂ = F . The i-th coordinate of the vector( F − diag (∑ j F:,j )) x is\nFx− diag ∑\nj\nF x i = ∑ j Fi,jx(j)− ∑ j Fi,j x(i) =\n∑ j:(i,j)∈E (x(j)− x(i))Fi,j\n= DF x(i)" }, { "heading": "C.3 PROOF FOR THEOREM 2.3 (K-GRADIENT OF THE LOW-FREQUENCY EIGENVECTORS)", "text": "Let λi and φi be the eigenvalues and eigenvectors of the normalized Laplacian of a connected graph Lnorm and let a, b = arg max1≤i,j≤n{dK(vi, vj)} be the nodes that have highest K-walk distance. Let m = arg min1≤i≤n(φ1)i and M = arg max1≤i≤n(φ1)i, then dK(vm, vM ) − dK(va, vb) has order O(1− λ2).\nProof. For this theorem, we use the indices i = 0, ..., (N − 1), sorted such that λi ≤ λi+1. 
Hence, λ0 = 0 and λ1 is the first non-trivial eigenvalue.\nFirst we need the following proposition:\nProposition 1 (K-walk distance matrix). The K-walk distance matrix P associated with a graph is the matrix such that (P )i,j = dK(vi, vj) can be written as ∑K p=1W\np, where W = D−1A is the random walk matrix.\nLet’s defineW = D−1A the random walk matrix of the graph.\nFirst, we are going to show that W is jointly diagonalizable with Lnorm and we are going to relate its eigenvectors φ′i and its eigenvalues λ ′ i with the ones ofW .\nIndeed,Lsym is a symmetric real matrix which is semi-positive definite diagonalizable by the spectral theorem. Since the matrix Lnorm is similar toD 1 2LnormD − 12 =D− 1 2LD− 1 2 = Lsym and the matrix of similarity isD 1 2 , a positive definite matrix, Lnorm is diagonalizable and semi-positive definite.\nBy\nLnorm = D −1L = D−1(L+D −D) = I +D−1(L−D) = I −D−1A = I −W\nthe random walk matrix is jointly diagonalizable with the random walk Laplacian. Also their eigenvalues and eigenvectors are related to each other by φi = φ′n−1−i and λ ′ i = 1− λn−1−i\nMoreover, the constant eigenvector associated with eigenvalue 0 of the Random walk Laplacian, is the eigenvector associated with the highest eigenvalue of the Random walk matrix and by the formula obtained, λ′n−1 = 1− λ0 = 1 Now, we are going to approximate the K-walk distance matrix P using the 2 eigenvectors of the Random walk matrix associated with the highest eigenvalues.\nBy Proposition 1 we have that P = ∑K p=1W p, which can be written as\nK∑ p=1 ( n−1∑ i=0 φ′iφ ′T i (λ ′ i)) p = K∑ p=1 n−1∑ i=0 φ′iφ ′T i (λ ′ i) p\nby eigen-decomposition.\nSince λn−1−i = 1− λ′i and λ2 λ1, we have that λ′n−2 λ′n−3, hence we can approximate\nP = K∑ p=1 ( n−1∑ i=0 φ′iφ ′ i(λ ′ i) p) ≈ K∑ p=1 ( n−1∑ i=n−2 φ′iφ ′T i (λ ′ i) p) +O(λ′n−3) =\n= K∑ p=1 ( 1∑ i=0 φiφ T i (1− λi)p) +O(1− λ2) = K∑ p=1 (φ0φ T 0 + φ1φ T 1 (1− λ1)p) +O(1− λ2)\n= Kφ0φ T 0 + κφ1φ T 1 +O(1− λ2) where κ = ∑K p=1(1− λ1)p is a positive constant.\nNow we are going to show that the farthest nodes with respect to the K-walk distance are the ones associated with the highest and lowest value of φ1.\nIndeed if we want to choose i, j to be at the farthest distance we need to minimise\n(P )i,j = (Kφ0φ T 0 + κφ1φ T 1 )i,j =\nK n + κφ1(i)φ1(j)\nwhich is minimum when φ1(i)φ1(j) is minimum.\nWe are going to show that exist p, q such that φ1(p) < 0,φ1(q) > 0. Since the eigenvector is nonzero, without loss of generality assume φ1(0) 6= 0. Since φ0 and φ1 are eigenvectors associated with different eigenvalues of a real symmetric matrix, they are orthogonal:\nn−1∑ i=0 φ0(i) · φ1(i) = 0\nand since φ0 is constant the previous equation leads to\nn−1∑ i=0 φ1(i) = 0⇐⇒ φ1(0) = − n−1∑ i=1 φ1(i)\nIf such p, q didn’t exist then we would get that ∀i, j φ1(i) ·φ1(j) ≥ 0, hence multiplying both sides of the previous equation by φ1(0) we get\nφ1(0) 2 = − n−1∑ i=1 φ1(i) · φ1(0)⇒ φ1(0)2 ≤ 0\nWhich is a contradiction since by assumption φ1(0) > 0; hence exist p, q such that φ1(p) < 0,φ1(q) > 0.\nSince φ1 attains both positive and negative values, the quantity φ1(i)φ1(j) is minimised when it has negative sign and highest absolute value, hence when i, j are associated with the negative and positive values with the highest absolute value: the lowest and the highest value of φ1. Hence, dK(vM , vm)− dK(va, vb) = O(1− λ2)\nC.4 INFORMAL ARGUMENT IN SUPPORT OF CONJECTURE 2.4 (GRADIENT STEPS REDUCE EXPECTED HITTING TIME)\nSuppose that x, y are uniformly distributed random nodes such that φi(x) < φi(y). 
Let z be the node obtained from x by taking one step in the direction of ∇φi, then the expected hitting time is decreased proportionally to λ−1i and\nEx,y[Q(z, y)] ≤ Ex,y[Q(x, y)]\nAs a reminder, the definition of a gradient step is given in the definition 3, copied below.\nSuppose the two neighboring nodes x and z are such that φ(z) − φ(x) is maximal among the neighbors of x, then we will say z is obtained from x by taking a step in the direction of the gradient ∇φ. In (Chung & S.T.Yau, 2000), it is shown the hitting time Q(x, y) is given by the equation\nQ(x, y) = vol\n( G(y, y)\ndy − G(x, y) dx ) With λk and φk being the k-th eigenvalues and eigenvectors of the symmetric normalized Laplacian Lsym, vol the sum of the degrees of all nodes, dx the degree of node x and G Green’s function for the graph\nG(x, y) = d 1 2 x d −1 2 y ∑ k>0 1 λk φk(x)φk(y)\nSince the sign of the eigenvector is not deterministic, the choice φi(x) < φi(y) is used to simplify the argument without having to consider the change in sign.\nSupposing λ1 λ2, the first term of the sum ofG has much more weight than the following terms. With z obtained from x by taking a step in the direction of the gradient of φ1 we have\nφ1(z)− φ1(x) > 0\nWe want to show that the following inequality holds\nEx,y(Q(z, y)) < Ex,y(Q(x, y))\nthis is equivalent to the following inequality\nEx,y[G(z, y)] > Ex,y[G(x, y)]\nBy the hypothesis λ1 λ2, we can approximate G(x, y) ∼ d 1 2 x d −1 2 y\n1 λ1 φ1(x)φ1(y) so the last\ninequality is equivalent to Ex,y [ d 1 2 z d −1 2 y 1\nλ1 φ1(z)φ1(y)\n] > Ex,y [ d 1 2 x d −1 2 y 1\nλ1 φ1(x)φ1(y) ] Removing all equal terms from both sides, the inequality is equivalent to\nEx,y [ d 1 2 z φ1(z) ] > Ex,y [ d 1 2 xφ1(x) ] But showing this last inequality is not easy. We know that φ1(z) > φ1(x) and from the choice of z being a step in the direction of∇φ1, we know it is less likely to be on the border of the graph so we believe E(dz) ≥ E(dx). Thus we also believe that the conjecture should hold in general. We believe this should be true even without the assumption on λ1 and λ2 and for more eigenvectors than φ1." }, { "heading": "C.5 PROOF FOR LEMMA C.1 (COSINE EIGENVECTORS)", "text": "Consider the lattice graph Γ of size N1 ×N2 × ...×Nn, that has vertices ∏ i=1,...,n{1, ..., Ni} and the vertices (xi)i=1,...,n and (yi)i=1,...,n are connected by an edge iff |xi − yi| = 1 for one index i and 0 for all other indices. Note that there are no diagonal edges in the lattice. The eigenvector of the Laplacian of the grid L(Γ) are given by φj .\nLemma C.1 (Cosine eigenvectors). The Laplacian of Γ has an eigenvalue 2− 2 cos ( π Ni ) with the associated eigenvector φj that depends only the variable in the i-th dimension and is constant in all others, with φj = 1N1 ⊗ 1N2 ⊗ ...⊗ x1,Ni ⊗ ...⊗ 1Nn , and x1,Ni(j) = cos ( πj n − π 2n\n) Proof. First, recall the well known result that the path graph on N vertices PN has eigenvalues\nλk = 2− 2 cos ( πk\nn ) with associated eigenvector xk with i-th coordinate\nxk(i) = cos\n( πki\nn + πk 2n\n)\nThe Cartesian product of two graphs G = (VG, EG) and H = (VH , EH) is defined as G × H = (VG×H , EG×H) with VG×H = VG × VH and ((u1, u2), ((v1, v2)) ∈ EG×H iff either u1 = v1 and (u2, v2) ∈ EH or (u1, v1) ∈ VG and u2 = v2. It is shown in (Fiedler, 1973) that if (µi)i=1,...,m and (λj)j=1,...,n are the eigenvalues of G and H respectively, then the eigenvalues of the Cartesian product graphG×H are µi+λj for all possible eigenvalues µi and λj . 
Also, the eigenvectors associated to the eigenvalue µi + λj are ui ⊗ vj with ui an eigenvector of the Laplacian of G associated to the eigenvalue µi and vj an eigenvector of the Laplacian of H associated to the eigenvalue λj .\nFinally, noticing that a lattice of shape N1 × N2 × ... × Nn is really the Cartesian product of path graphs of length N1 up to Nn, we conclude that there are eigenvalues 2 − 2 cos ( π Ni ) . Denoting by 1Nj the vector in R Nj with only ones as coordinates, then the eigenvector associated to the\neigenvalue 2− 2 cos ( π Ni ) is\n1N1 ⊗ 1N2 ⊗ ...⊗ x1,Ni ⊗ ...⊗ 1Nn\nwhere x1,Ni is the eigenvector of the Laplacian of PNi associated to its first non-zero eigenvalue. 2− 2 cos ( π Ni ) ." }, { "heading": "C.6 RADIUS 1 CONVOLUTION KERNELS IN A GRID", "text": "In this section we show any radius 1 convolution kernel can be obtained as a linear combination of the Bdx(∇φi) and Bav(∇φi) matrices for the right choice of Laplacian eigenvectors φi. First we show this can be done for 1-d convolution kernels. Theorem C.2. On a path graph, any 1D convolution kernel of size 3 k is a linear combination of the aggregatorsBav,Bdx and the identity I .\nProof. Recall from the previous proof that the first non zero eigenvalue of the path graph PN has associated eigenvector φ1(i) = cos(πiN − π 2N ). Since this is a monotone decreasing function in i, the i-th row of ∇φ1 will be (0, ..., 0, si−1, 0,−si+1, 0, ..., 0)\nwith si−1 and si+1 > 0. We are trying to solve\n(aBav + bBdx + cId)i,: = (0, ..., 0, x, y, z, 0, ..., 0)\nwith x, y, z, in positions i− 1, i and i+ 1. This simplifies to solving\na 1 ‖s‖L1 |s|+ b 1 ‖s‖L2 s+ c(0, 1, 0) = (x, y, z)\nwith s = (si−1, 0,−si+1), which always has a solution because si−1, si+1 > 0.\nTheorem C.3 (Generalization radius-1 convolutional kernel in a grid). Let Γ be the n-dimensional lattice as above and let φj be the eigenvectors of the Laplacian of the lattice as in theorem C.1. Then any radius 1 kernel k on Γ is a linear combination of the aggregators Bav(φi),Bdx(φi) and I .\nProof. This is a direct consequence of C.2 obtained by adding n 1-dimensional kernels, with each kernel being in a different axis of the grid as per Lemma C.1. See figure 4 for a visual example in 2D.\nC.7 PROOF FOR THEOREM 2.7 (GENERALIZATION RADIUS-R CONVOLUTIONAL KERNEL IN A LATTICE)\nFor an n-dimensional lattice, any convolutional kernel of radius R can be realized by a linear combination of directional aggregation matrices and their compositions.\nProof. For clarity, we first do the 2 dimensional case for a radius 2, then extended to the general case. Let k be the radius 2 kernel on a grid represented by the matrix\na5×5 = 0 0 a−2,0 0 0 0 a−1,−1 a−1,0 a−1,1 0\na0,−2 a0,−1 a0,0 a0,1 a0,2 0 a1,−1 a1,0 a1,1 0 0 0 a2,0 0 0 since we supposed the N1 × N2 grid was such that N1 > N2, by theorem C.1, we have that φ1 is depending only in the first variable x1 and is monotone in x1. Recall from C.1 that\nφ1(i) = cos\n( πi\nN1 +\nπ\n2N1 ) The vector N1π ∇ arccos(φ1) will be denoted by F1 in the rest. Notice all entries of F1 are 0 or ±1. Denote by F2 the gradient vector N2π ∇ arccos(φk) where φk is the eigenvector given by theorem C.1 that is depending only in the second variable x2 and is monotone in x1 and recall\nφk(i) = cos\n( πi\nN2 +\nπ\n2N2\n)\nFor a matrix B, let B± the positive/negative parts of B, ie matrices with positive entries such that B = B+ −B−. 
LetBr1 be a matrix representing the radius 1 kernel with weights\na3×3 =\n( 0 a−1,0 0\na0,−1 a0,0 a0,1 0 a1,0 0\n)\nThe matrix Br1 can be obtained by theorem C.3. Then the radius 2 kernel k is defined by all the possible combinations of 2 positive/negative steps, plus the initial radius-1 kernel.\nBr2 = ∑\n−2≤i,j≤2 |i|+|j|=2\n( ai,j(F sgn(i) 1 ) |i|(F sgn(j) 2 ) |j| )\n︸ ︷︷ ︸ Any combination of 2 steps + Br1︸︷︷︸ all possible single-steps\nwith sgn the sign function sgn(i) = + if i ≥ 0 and − if i < 0. The matrix Br2 then realises the kernel a5×5.\nWe can further extend the above construction to N dimension grids and radius R kernels k\n∑ V={v1,v2,...,vN}∈Nn\n||V ||L1≤R −R≤vi≤R︸ ︷︷ ︸\nAny choice of walk V with at mostR-steps\naV\nN∏ j=1 (F sgn(vj) j ) |vj |\n︸ ︷︷ ︸ Aggregator following the steps defined in V\nwith Fj = Nj π ∇ arccosφj ,φj the eigenvector with lowest eigenvalue only dependent on the j-th variable and given in theorem C.1 and ∏\nis the matrix multiplication. V represents all the choices of walk {v1, v2, ..., vn} in the direction of the fields {F1,F2, ...,Fn}. For example, V = {3, 1, 0,−2} has a radius R = 6, with 3 steps forward of F1, 1 step forward of F2, and 2 steps backward of F4." }, { "heading": "C.8 PROOF FOR THEOREM 2.8 (COMPARISON WITH 1-WL TEST)", "text": "DGNs using the mean aggregator, any directional aggregator of the first eigenvector and injective degree-scalers are strictly more powerful than the 1-WL test.\nProof. We will show that (1) DGNs are at least as powerful as the 1-WL test and (2) there is a pair of graphs which are not distinguishable by the 1-WL test which DGNs can discriminate.\nSince the DGNs include the mean aggregator combined with at least an injective degree-scaler, Corso et al. (2020) show that the resulting architecture is at least as powerful as the 1-WL test.\nThen, to show that the DGNs are strictly more powerful than the 1-WL test it suffices to provide an example of a pair of graphs that DGNs can differentiate and 1-WL cannot. Such a pair of graphs is illustrated in figure 8.\nThe 1-WL test (as any MPNN with, for example, sum aggregator) will always have the same features for all the nodes labelled with a and for all the nodes labelled with b and, therefore, will classify the graphs as isomorphic. DGNs, via the directional smoothing or directional derivative aggregators based on the first eigenvector of the Laplacian matrix, will update the features of the a nodes differently in the two graphs (figure 8 presents also the aggregation functions) and will, therefore, be capable of distinguishing them." } ]
2020
null
SP:540d8c615b5193239aa43717de8cacc749ccc4c6
[ "The authors describe a method for representing a continuous signal by a pulse code, in a manner inspired by auditory processing in the brain. The resulting framework is somewhat like matching pursuit except that filters are run a single time in a causal manner to find the spike times (which would be faster than MP), and then a N*N least squares problem is solved (which makes it slower). The authors claim that their method will perfectly reconstruct signals of finite innovation rate, however there appear to be mathematical errors in the proof." ]
In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The framework considers encoding of a signal through spike trains generated by an ensemble of neurons via a standard convolve-thenthreshold mechanism, albeit with a wide variety of convolution kernels. Neurons are distinguished by their convolution kernels and threshold values. Reconstruction is posited as a convex optimization minimizing energy. Formal conditions under which perfect and approximate reconstruction of the signal from the spike trains is possible are then identified. Coding experiments on a large audio dataset are presented to demonstrate the strength of the framework.
[]
[ { "authors": [ "Horace B Barlow" ], "title": "Possible principles underlying the transformations of sensory messages", "venue": "Sensory Communication,", "year": 1961 }, { "authors": [ "Stephen Boyd", "Leon Chua" ], "title": "Fading memory and the problem of approximating nonlinear operators with volterra series", "venue": "IEEE Transactions on circuits and systems,", "year": 2021 }, { "authors": [ "E.J. Candes", "J. Romberg", "T. Tao" ], "title": "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information", "venue": "IEEE Transactions on Information Theory,", "year": 2006 }, { "authors": [ "Dmitri B. Chklovskii", "Daniel Soudry" ], "title": "Neuronal spike generation mechanism as an oversampling, noise-shaping a-to-d converter", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Peter Dayan", "L.F. Abbott" ], "title": "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems", "venue": null, "year": 2005 }, { "authors": [ "P. Földiák" ], "title": "Forming sparse representations by local anti-hebbian learning", "venue": "Biological Cybernetics,", "year": 1990 }, { "authors": [ "Eduardo Fonseca", "Manoj Plakal", "Frederic Font", "Daniel P.W. Ellis", "Xavier Favory", "Jordi Pons", "Xavier Serra" ], "title": "General-purpose tagging of freesound audio with audioset labels: Task description", "venue": null, "year": 2018 }, { "authors": [ "G. Gallego", "T. Delbruck", "G. Orchard", "C. Bartolozzi", "B. Taba", "A. Censi", "S. Leutenegger", "A. Davison", "J. Conradt", "K. Daniilidis", "D. Scaramuzza" ], "title": "Event-based vision: A survey", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence,", "year": 1939 }, { "authors": [ "J.F. Gemmeke", "D.P.W. Ellis", "D. Freedman", "A. Jansen", "W. Lawrence", "R.C. Moore", "M. Plakal", "M. Ritter" ], "title": "Audio set: An ontology and human-labeled dataset for audio events", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Daniel Graham", "David Field" ], "title": "Sparse coding in the neocortex", "venue": "Evolution of Nervous Systems,", "year": 2007 }, { "authors": [ "A.A. Lazar", "L.T. Toth" ], "title": "Time encoding and perfect recovery of bandlimited signals", "venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing,", "year": 2003 }, { "authors": [ "S. Mallat", "Z. Zhang" ], "title": "Matching pursuits with time-frequency dictionaries", "venue": "IEEE Trans. Signal Process.,", "year": 1993 }, { "authors": [ "David J. Olshausen", "Bruno A", "Field" ], "title": "Emergence of simple-cell receptive field properties by learning a sparse code for natural images", "venue": "Nature,", "year": 1996 }, { "authors": [ "R. Patterson", "Ian Nimmo-Smith", "J. Holdsworth", "P. Rice" ], "title": "An efficient auditory filterbank based on the gammatone function", "venue": null, "year": 1988 }, { "authors": [ "Bernhard Schölkopf", "Ralf Herbrich", "Alex J Smola" ], "title": "A generalized representer theorem", "venue": "In International Conference on Computational Learning Theory,", "year": 2001 }, { "authors": [ "Malcolm Slaney" ], "title": "An efficient implementation of the patterson-holdsworth auditory filter bank", "venue": null, "year": 2000 }, { "authors": [ "L.R. Squire" ], "title": "Fundamental Neuroscience", "venue": "Academic Press/Elsevier,", "year": 2008 }, { "authors": [ "M. Vetterli", "P. Marziliano", "T. 
Blu" ], "title": "Sampling signals with finite rate of innovation", "venue": "IEEE Transactions on Signal Processing,", "year": 2002 } ]
[ { "heading": null, "text": "In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The framework considers encoding of a signal through spike trains generated by an ensemble of neurons via a standard convolve-thenthreshold mechanism, albeit with a wide variety of convolution kernels. Neurons are distinguished by their convolution kernels and threshold values. Reconstruction is posited as a convex optimization minimizing energy. Formal conditions under which perfect and approximate reconstruction of the signal from the spike trains is possible are then identified. Coding experiments on a large audio dataset are presented to demonstrate the strength of the framework." }, { "heading": "1 INTRODUCTION", "text": "In biological systems, sensory stimuli is communicated to the brain primarily via ensembles of discrete events that are spatiotemporally compact electrical disturbances generated by neurons, otherwise known as spikes. Spike train representation of signals, when sparse, are not only intrinsically energy efficient, but can also facilitate downstream computation(6; 10). In their seminal work, Olshausen and Field (13) showed how efficient codes can arise from learning sparse representations of natural stimulus statistics, resulting in striking similarities with observed biological receptive fields. (19) developed a biophysically motivated spiking neural network which for the first time predicted the full diversity of V1 simple cell receptive field shapes when trained on natural images. Although these results signify substantial progress, an effective end to end signal processing framework that deterministically represents signals via spike train ensembles is yet to be laid out. Here we present a new framework for coding and reconstruction leveraging a biologically plausible coding mechanism which is a superset of the standard leaky integrate-and-fire neuron model (5).\nOur proposed framework identifies reconstruction guarantees for a very general class of signals—those with finite rate of innovation (18)—as shown in our perfect and approximate reconstruction theorems. Most other classes, e.g. bandlimited signals, are subsets of this class. The proposed technique first formulates reconstruction as an optimization that minimizes the energy of the reconstructed signal subject to consistency with the spike train, and then solves it in closed form. We then identify a general class of signals for which reconstruction is provably perfect under certain ideal conditions. Subsequently, we present a mathematical bound on the error of an approximate reconstruction when the model deviates from those ideal conditions. Finally, we present simulation experiments coding for a large dataset of audio signals that demonstrate the efficacy of the framework. In a separate set of experiments on a smaller subset of audio signals we compare our framework with existing sparse coding algorithms viz matching pursuit and orthogonal matching pursuit, establishing the strength of our technique.\nThe remainder of the paper is structured as follows. In Sections 2 and 3 we introduce the coding and decoding frameworks. Section 4 identifies the class of signals for which perfect reconstruction is achievable if certain ideal conditions are met. 
In Section 5 we discuss how in practice those ideal conditions can be approached and provide a mathematical bound for approximate reconstruction. Simulation results are presented in Section 6. We conclude in Section 8." }, { "heading": "2 CODING", "text": "The general class of deterministic mappings (i.e., the set of all nonlinear operators) from continuous time signals to spike trains is difficult to characterize because the space of all spike trains does not lend itself to a natural topology that is universally embraced. The result is that simple characterizations, such as the set of all continuous operators, can not be posited in a manner that has general consensus. To resolve this issue, we take a cue from biological systems. In most animal sensory pathways, external stimulus passes through a series of transformations before being turned into spike trains(17). For example, visual signal in the retina is processed by multiple layers of non-spiking horizontal, amacrine and bipolar cells, before being converted into spike trains by the retinal ganglion cells. Accordingly, we can consider the set of transformations that pass via an intermediate continuous time signal which is then transformed into a spike train through a stereotyped mapping where spikes mark threshold crossings. The complexity of the operator now lies in the mapping from the continuous time input signal to the continuous time intermediate signal. Since any time invariant, continuous, nonlinear operator with fading memory can be approximated by a finite Volterra series operator(2), this general class of nonlinear operators from continuous time signals to spike trains can be modeled as the composition of a finite Volterra series operator and a neuronal thresholding operation to generate a spike train. Here, the simplest subclass of these transformations is considered: the case where the Volterra series operator has a single causal, bounded-time, linear term, the output of which is composed with a thresholding operation of a potentially time varying threshold. The overall operator from the input signal to the spike train remains nonlinear due to the thresholding operation. The code generated by an ensemble of such transformations, corresponding to an ensemble of spike trains, is explored.\nFormally, we assume the input signal X(t) to be a bounded square integrable function over the compact interval [0, T ] for some T ∈ R+, i.e., we are interested in the class of input signals F = {X(t)|X(t) ∈ L2[0, T ]}. Since the framework involves signal snippets of arbitrary length, this choice of T is without loss of generalization. We assume an ensemble of convolution kernels K = {Kj |j ∈ Z+, j ≤ n}, consisting of n kernels Kj , j = 1, . . . , n. We assume that Kj(t) is a continuous function on a bounded time interval [0, T ], i.e. ∀j ∈ {1, . . . , n},Kj(t) ∈ C[0, T ], T ∈ R+. Finally, we assume that Kj has a time varying threshold denoted by T j(t). 
The ensemble of convolution kernels K encodes a given input signal X(t) into a sequence of spikes {(ti,Kji)}, where the ith spike is produced by the jthi kernel Kji at time ti if and only if:∫ X(τ)Kji(ti − τ)dτ = T ji(ti) In our experiments a specific threshold function is assumed in which the time varying threshold T j(t) of the jth kernel remains constant at Cj until that kernel produces a spike, at which time an after-hyperpolarization potential (ahp) is introduced to raise the threshold to a high value M j Cj , which then drops back linearly to its original value within a refractory period δj . Stated formally,\nT j(t) = C j , t− δj > tjl (t)\nM j − (t−t j l (t))(M j−Cj) δj , t− δj ≤ tjl (t) (1)\nWhere tjl (t) denotes the time of the last spike generated by K j prior to time t." }, { "heading": "3 DECODING", "text": "How rich is the coding mechanism just described? We can investigate this question formally by positing a decoding module. The objective of the decoding module is to reconstruct the original signal from the encoded ensemble of spike trains. It is worthwhile to mention that to be able to communicate signals properly by our proposed framework, the decoding module needs to be designed in a manner so that it can operate solely on the spike train data handed over by the encoding module, without explicit access to the input signal itself. Considering the prospect of the invertibility of the coding scheme, we seek a signal that satisfies the same set of constraints as the original signal when generating all spikes apropos the set of kernels in ensemble K. Recognizing that such a signal might not be unique, we choose the reconstructed signal as the one with minimum L2-norm.\nFormally, the reconstruction (denoted by X∗(t)) of the input signal X(t) is formulated to be the solution to the optimization problem:\nX∗(t) = argmin X̃ ||X̃(t)||22\ns.t. ∫ X̃(τ)Kji(ti − τ)dτ = T ji(ti); 1 ≤ i ≤ N (2)\nwhere {(ti,Kji)|i ∈ {1, ..., N}} is the set of all spikes generated by the encoder. The choice of L2 minimization as the objective of the reconstruction problem—which is the linchpin of our framework, as demonstrated in the theorems—can only be weakly justified at the current juncture. The perfect reconstruction theorem that follows provides the strong justification. As it stands, the L2 minimization objective is in congruence with the dictum of energy efficiency in biological systems. The assumption is that, of all signals, the one with the minimum energy that is consistent with the spike trains is desirable. Additionally, an L2 minimization in the objective of (2) reduces the convex optimization problem to a solvable linear system of equations as shown in Lemmas 1 and 3. Later we shall show that L2-minimization has the surprising benefit of recovering the original signal perfectly under certain conditions." }, { "heading": "4 SIGNAL CLASS FOR PERFECT RECONSTRUCTION", "text": "To establish the effectiveness of the described coding-decoding model, we have to evaluate the accuracy of reconstruction over a class of input signals. We observe that in general the encoding of square integrable signals into spike trains is not a one-to-one map; the same set of spikes can be generated by different signals so as to result in the same convolved values at the spike times. Naturally, with a finite and fixed ensemble of kernelsK, one cannot achieve perfect reconstruction for the general class of signals F as defined in Section 2. 
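Before restricting the signal class, the following is a minimal discrete-time sketch of the pipeline described so far: the convolve-then-threshold encoder with the refractory threshold of equation 1, and the minimum-energy decoder of problem 2, solved by writing the reconstruction in the span of the time-reversed kernels shifted to the spike times and solving the resulting Gram system by least squares (the closed form derived in Lemma 1 below). The discretisation, the function names and the returning of threshold values alongside the spikes are illustrative assumptions, not the authors' code; in the framework itself the decoder can recompute the threshold values from the spike trains alone, since the threshold dynamics are known.

```python
import numpy as np

def encode(X, kernels, C, M, delta, dt):
    """Convolve-then-threshold encoder (Section 2).  A spike (t_i, j) is recorded when the
    causal convolution of X with kernel j reaches the time-varying threshold of equation (1);
    equality in continuous time becomes a crossing test on the discrete grid."""
    spikes, thresholds = [], []
    for j, (K, Cj, Mj, dj) in enumerate(zip(kernels, C, M, delta)):
        conv = np.convolve(X, K)[:len(X)] * dt        # approximates  int X(tau) K^j(t - tau) dtau
        t_last = -np.inf
        for i, v in enumerate(conv):
            t = i * dt
            if t - t_last <= dj:                      # after-hyperpolarization ramp of eq. (1)
                thr = Mj - (t - t_last) * (Mj - Cj) / dj
            else:
                thr = Cj
            if v >= thr:
                spikes.append((t, j))
                thresholds.append(thr)
                t_last = t
    return spikes, thresholds

def decode(spikes, thresholds, kernels, t_grid, dt):
    """Minimum-energy reconstruction of problem (2): write X* as a combination of the shifted,
    time-reversed kernels K^{j_i}(t_i - t) and solve the Gram system P a = T by least squares
    (i.e. via the Moore-Penrose inverse when P is singular)."""
    basis = []
    for (t_i, j) in spikes:
        K = kernels[j]
        tau = t_i - t_grid                            # argument of K^{j_i}(t_i - t)
        basis.append(np.interp(tau, np.arange(len(K)) * dt, K, left=0.0, right=0.0))
    G = np.array(basis)                               # N x len(t_grid) sampled basis functions
    P = G @ G.T * dt                                  # Gram matrix of the shifted kernels
    alpha, *_ = np.linalg.lstsq(P, np.asarray(thresholds), rcond=None)
    return alpha @ G                                  # X*(t) = sum_i alpha_i K^{j_i}(t_i - t)
```

In this sketch, kernels would be a list of sampled kernel arrays (for example gammatone-like auditory filters) and t_grid = np.arange(len(X)) * dt, so that encode followed by decode yields a reconstruction on the same grid as the input.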
We now restrict ourselves to a subset G of the original class F defined as G = {X(t)|X(t) ∈ F , X(t) = ∑N p=1 αpK\njp(tp − t), jp ∈ {1, ..., n}, αp ∈ R, tp ∈ R+, N ∈ Z+} and address the question of reconstruction accuracy. Essentially G consists of all linear combinations of arbitrarily shifted kernel functions. N is bounded above by the total number of spikes that the ensemble K can generate over [0, T ]. In the parlance of signal processing, G constitutes Finite rate of Innovation signals (18). For the class G the perfect reconstruction theorem is presented below. The theorem is proved with the help of three lemmas. Perfect Reconstruction Theorem: Let X(t) ∈ G be an input signal. Then for appropriately chosen time-varying thresholds of the kernels, the reconstruction, X∗(t), resulting from the proposed codingdecoding framework is accurate with respect to the L2 metric, i.e., ||X∗(t)−X(t)||2 = 0. Lemma 1: The solution X∗(t) to the reconstruction problem given by (2) can be written as:\nX∗(t) = N∑ i=1 αiK ji(ti − t) (3)\nwhere the coefficients αi ∈ R can be solved from a system of linear equations. Proof: An approach analogous to the Representer Theorem (15), splitting a putative solution to (2) into its within the span of the kernels component and a remnant orthogonal component, results in equation (3). In essence, the reconstructed signal X∗(t) becomes a summation of the kernels, shifted to their respective times of generation of spikes, scaled by appropriate coefficients. Plugging (3) into the constraints (2) gives:\n∀1≤i≤N ; ∫ N∑\nk=1\nαkK jk(tk − t)Kji(ti − t)dτ = T ji(ti)\nSetting bi = T ji(ti) and Pik = ∫ Kjk(tk − τ)Kji(ti − τ)dτ results in:\n∀1≤i≤N ; N∑ k=1 Pikαk = bi (4)\nEquation (4) defines a system of N equations in N unknowns of the form:\nPα = T (5) where α = 〈α1, ..., αN 〉T , T = 〈T j1(t1), ..., T jN (tN )〉T and P is an N ×N matrix with elements Pik = ∫ Kjk(tk − τ)Kji(ti − τ)dτ . Clearly P is the Gramian Matrix of the shifted kernels\n{Kji(ti − t)|i ∈ 1, 2, ..., N} in the Hilbert space with the standard inner product. It is well known that P is invertible if and only if {Kji(ti − t)|i ∈ 1, 2, ..., N} is a linearly independent set. If P is invertible α has a unique solution. If, on the other hand, P is not invertible, α has multiple solutions. However, as the next lemma shows, every such solution leads to the same reconstruction X∗(t), and hence any value of α that satisfies 5 can be chosen. We note in passing that in our experiments we have used the least square solution. 2 Import: The goal of the optimization problem is to find the best object in the feasible set. However, the application of the Representer Theorem converts the constraints into a determined system of unknowns and equations, turning the focus onto the feasible set, effectively changing the optimization problem into a solvable system that results in a closed form solution for the αi’s. This implies that instead of solving (2), we can solve for the reconstruction from X∗(t) = ∑N i=1 αiK\nji(ti− t), where αi is the i-th element of α = P−1T . Here, P−1 represents either the inverse or the Moore-Penrose inverse, as the case may be. Lemma 2: Let equation 5 resulting from the optimization problem 2 have multiple solutions. Consider any two different solutions for α, namely α1 and α2, and hence the corresponding reconstructions are given by X1(t) = ∑N i=1 α1iK ji(ti − t) and X2(t) = ∑N i=1 α2iK\nji(ti − t), respectively. Then X1 = X2. 
Proof: The proof of this lemma follows from the existence of a unique function in the Hilbert Space spanned by {Kji(ti − t)|i ∈ 1, 2, ..., N} that satisfies the constraint of equation 2. The details of the proof is furnished in the appendix A. Import: Lemma 2 essentially establishes the uniqueness of solution to the optimization problem formulated in 2 as any solution to equation 5. The proof follows from the fact that the reconstruction is in the span of the shifted kernels {Kji(ti − t)|i ∈ 1, 2, ..., N} and the inner products of the reconstruction with each of Kji(ti − t) is given (by the spike constraints of 2). Such a reconstruction must be unique in the subspace S. Lemma 3: Let X∗(t) be the reconstruction of an input signal X(t) and {(ti,Kji)}Ni=1 be the set of spikes generated. Then, for any arbitrary signal X̃(t) within the span of {Kji(ti − t)|i ∈ {1, 2, ..., N}}, i.e., the set of shifted kernels at respective spike times, given by X̃(t) =∑N i=1 aiK\nji(ti − t) the following inequality holds: ||X(t)−X∗(t)|| ≤ ||X(t)− X̃(t)|| Proof:\n||X(t)− X̃(t)|| = ||X(t)−X∗(t)︸ ︷︷ ︸ A +X∗(t)− X̃(t)︸ ︷︷ ︸ B || First, 〈A,Kji(ti − t)〉 = 〈X(t),Kji(ti − t)〉 − 〈X∗(t),Kji(ti − t)〉,∀i ∈ {1, 2, .., N} = T ji(ti)− T ji(ti) = 0 (Using the constraints in (2) & (2))\nSecond, 〈A,B〉 = 〈A, N∑ i=1 (αi − ai)Kji(ti − t)〉 (By Lemma 1 X∗(t) = N∑ i=1 αiK ji(ti − t))\n= N∑ i=1 (αi − ai)〈A,Kji(ti − t)〉 = 0\n=⇒ ||X(t)− X̃(t)||2 = ||A+B||2 = ||A||2 + ||B||2 ≥ ||A||2 = ||X(t)−X∗(t)||2\n=⇒ ||X(t)− X̃(t)|| ≥ ||X(t)−X∗(t)|| 2\nImport: The implication of the above lemma is quite remarkable. The objective defined in (2) chooses a signal with minimum energy satisfying the constraints, deemed the reconstructed signal. However as the lemma demonstrates, this signal also has the minimum error with respect to the input signal in the span of the shifted kernels. This signifies that our choice of the objective in the decoding module not only draws from biologically motivated energy optimization principles, but also performs optimally in terms of reconstructing the original input signal within the span of the appropriately shifted spike generating kernels.\nCorollary: An important consequence of Lemma 3 is that additional spikes in the system do not worsen the reconstruction. For a given input signal X(t) if S1 and S2 are two sets of spike trains where S1 ⊂ S2, the second a superset of the first, then Lemma 3 implies that the reconstruction due to S2 is at least as good as the reconstruction due to S1 because the reconstruction due to S1 is in the span of the shifted kernel functions of S2 as S1 ⊂ S2. This immediately leads to the conclusion that\nfor a given input signal the more kernels we add to the ensemble the better the reconstruction. Proof of the Theorem: The proof of the theorem follows directly from Lemma 3. Since the input signalX(t) ∈ G, letX(t) be given by: X(t) = ∑N p=1 αpK\njp(tp−t) (αp ∈ R, tp ∈ R+, N ∈ Z+) Assume that the time varying thresholds of the kernels in our kernel ensemble K are set in such a manner that the following conditions are satisfied: 〈X(t),Kjp(tp− t)〉 = T jp(tp) ∀p ∈ {1, ..., N} i.e., each of the kernels Kjp at the very least produces a spike at time tp against X(t) (regardless of other spikes at other times). Clearly then X(t) lies in the span of the appropriately shifted response functions of the spike generating kernels. 
Applying Lemma 3 it follows that: ||X(t)−X∗(t)||2 ≤ ||X(t)−X(t)||2 = 0 2 Import: In addition to demonstrating the potency of the coding-decoding scheme, this theorem frames Barlow’s efficient coding hypothesis (1)—that the coding strategy of sensory neurons be adapted to the statistics of the stimuli—in mathematically concrete terms. Going by the theorem, the spike based encoding necessitates the signals to be in the span of the encoding kernels for perfect reconstruction. Inverting the argument, kernels must learn to adapt to the basis elements that generate the signal corpora for superior reconstruction." }, { "heading": "5 APPROXIMATE RECONSTRUCTION AND THE EFFECT OF AHP", "text": "The perfect reconstruction theorem stipulates the conditions under which exact recovery of a signal is feasible in the proposed framework. At first glance, it may seem challenging to meet these conditions for an arbitrary class of natural signals. The concern stems from two difficulties: firstly, given a fixed set of kernels, the input signal may not lie in the span of their arbitrary shifts, and secondly, we may not be able to generate spikes at the desired locations as postulated in the proof of the theorem.\nTo address these issues, we observe that our decoding model is a continuous transformation from the space of spike trains to L2-functions, in the sense that small changes in spike times or a slight mismatch of the spiking kernels from the components of the signal, bring about only small changes in the reconstruction. In what follows, we furnish an Approximate Reconstruction Theorem (C) that provides a bound on the reconstruction error under such deviations. To address the first problem, it is important to choose kernel functions appropriately so that they can represent the input signals reasonably well. One can leverage biological knowledge; for example, it is well-known that auditory filters are effectively modeled using gammatones (14). Hence our experiments in Section 6 on auditory signals were coded using gammatone kernels. Not surprisingly, the reconstructions were excellent. To alleviate the second problem, we observe that spikes can be produced reasonably close to the desired locations by setting a low baseline threshold and a small refractory period of the after-hyperpolarization potential (ahp) for each kernel, a technique that is guaranteed to give good results as is confirmed by our experiments in Section 6. The following lemma formalizes the notion of how lowering the threshold and the refractory period of a kernel helps in generating spikes at the desired locations. The lemma is followed by the Approximate Reconstruction Theorem. Lemma 4: Let X(t) be an input signal. Let Kp be a kernel for which we want to generate a spike at time tp. Let the inner product 〈X(t),Kp(tp − t)〉 = Ip. Then, if the baseline threshold of the kernel Kp is Cp ≤ Ip and the absolute refractory period is δ as modeled in Equation 1, the kernel Kp must produce a spike in the interval [tp − δ, tp] according to the threshold model defined in Equation 1. Proof: The proof of this lemma follows directly from the intermediate value theorem and is detailed in appendix B. 
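As a concrete check of the closed-form reconstruction of Lemma 1 (solve Pα = T and form X*(t) = Σi αi K^{ji}(ti − t)) and of the perfect-reconstruction setting, where each generating kernel spikes at its own shift with threshold equal to the corresponding inner product, the rough sketch below builds a signal in the class G from two hand-picked kernels and verifies that the relative reconstruction error is near zero. Kernel shapes, spike times, and the use of a least-squares solver are illustrative assumptions.

```python
import numpy as np

def shifted_kernel(kernel, t_spike, dt, n):
    """Discretization of K(t_spike - t) on a length-n time grid with step dt."""
    t = np.arange(n) * dt
    idx = np.round((t_spike - t) / dt).astype(int)
    out = np.zeros(n)
    valid = (idx >= 0) & (idx < len(kernel))
    out[valid] = kernel[idx[valid]]
    return out

def reconstruct(spikes, kernels, thresholds, dt, n):
    """Lemma 1: X*(t) = sum_i alpha_i K^{j_i}(t_i - t) with P alpha = T."""
    Phi = np.stack([shifted_kernel(kernels[j], t, dt, n) for t, j in spikes])   # N x n
    P = Phi @ Phi.T * dt                     # Gram matrix of the shifted kernels
    alpha, *_ = np.linalg.lstsq(P, np.asarray(thresholds), rcond=None)          # least-squares solution
    return alpha @ Phi

# Perfect-reconstruction setting: X lies in G and each generating kernel spikes
# at its own shift with threshold equal to <X, K(t_p - .)>.
dt, n = 1e-3, 1000
rng = np.random.default_rng(0)
kernels = [np.hanning(120) * np.sin(2 * np.pi * f * np.arange(120) * dt) for f in (40, 90)]
spikes = [(0.3, 0), (0.55, 1), (0.7, 0)]     # (spike time, kernel index)
coeffs = rng.normal(size=len(spikes))
X = sum(a * shifted_kernel(kernels[j], t, dt, n) for a, (t, j) in zip(coeffs, spikes))
T = [np.dot(X, shifted_kernel(kernels[j], t, dt, n)) * dt for t, j in spikes]
print(np.linalg.norm(X - reconstruct(spikes, kernels, T, dt, n)) / np.linalg.norm(X))  # ~0
```

The relative error printed at the end is zero up to discretization and floating-point effects, which is the statement of the theorem in this idealized setting.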
Approximate Reconstruction Theorem: Let the following assumptions be true: • X(t), the input signal to the proposed framework, can be written as a linear combination of some component functions as- X(t) = ∑N i=1 αifpi(ti − t) where αi are bounded real coefficients, the component functions fpi(t) are chosen from a possibly infinite setH = {fi(t)|i ∈ Z+, ||fi(t)|| = 1} of functions of unit L2 norm with compact support, and the corresponding ti ∈ R+ are chosen to be bounded arbitrary time shifts of the component functions so that the overall signal has a compact support in [0, T ] for some T ∈ R+ and thus the input signal still belongs to the same class of signals F , as defined in section 2. • There is at least one kernel Kji from the bag of encoding kernels K, such that the L2-distance of fpi(t) from K\nji(t) is bounded. Formally, ∃ δ ∈ R+ s.t. ||fpi(t)−Kji(t)||2 < δ ∀ i ∈ {1, ..., N}. •When X(t) is encoded by the proposed framework, each one of these kernels Kji produce a spike at time t′i at threshold Ti such that |ti − t′i| < ∆ ∀i, for some ∆ ∈ R+. • Each kernel Kj ∈ K satisfies a Lipschitz type condition as follows:\n∃ C∈R s.t. ||Kj(t)−Kj(t−∆t)||2 ≤ C|∆t|, ∀∆t ∈ R, ∀j. • And lastly the shifted component functions satisfy a frame bound type of condition as follows:∑ k 6=i〈fpi(t− ti), fpk(t− tk)〉 ≤ η ∀ i ∈ {1, ..., N} Then, reconstruction X∗(t), resulting from the proposed framework, has a bounded noise to signal ratio. Specifically, the following inequality is satisfied:\n||X(t)−X∗(t)||22/||X(t)||22 ≤ (δ + C∆)(1+xmax)/(1−η)\nwhere xmax is a positive number ∈ [0, N − 1] that depends on the maximum overlap of the support of component functions fpi(t− ti). Proof: A detailed proof of this theorem is provided in the appendix C." }, { "heading": "6 EXPERIMENTS ON REAL SIGNALS", "text": "The proposed framework is general enough to apply to any class of signals. However, since the computational resources necessary to code and reconstruct video signals (function of three variablesx, y, t) would be sufficiently larger than audio signals (function of only one variable t), to demonstrate that the proposed framework can indeed be adopted in real engineering applications as a novel encoding scheme, we ran experiments on a repository of audio signals." }, { "heading": "6.1 DATASET", "text": "We chose the Freesound Dataset Kaggle 2018 (or FSDKaggle2018 for short), an audio dataset of natural sounds posted on Kaggle referred in (7), containing 18,873 audio files annotated with labels from Google’s AudioSet Ontology (9). For the purpose of the experiments, we ignored the labels and only focused on the sound data, since we were only interested in encoding and decoding the input signals. All audio samples in this dataset are provided as uncompressed PCM 16bit, 44.1kHz, mono audio files, with each file consisting of sound snippets of duration ranging between 300ms to 30s. In the experiment, we ran our proposed methodology over at least 1000 randomly chosen sound snippets from the samples in the dataset. For ease of computation, we kept the length of the input audio snippets to be relatively small (ideally of size less than 50ms), splitting longer signals. This choice of considering small snippets as input made the computation feasible on limited resource machines within reasonable time bounds by reducing the size of the P -matrix referred to in Equation 4. 
This choice is without loss of generalisation since for encoding signals of greater length, reconstruction using this framework can be done piece-wise: splitting a longer signal into smaller pieces, reconstructing piece-wise and finally stitching the reconstructed pieces together." }, { "heading": "6.2 SET OF KERNELS", "text": "The proposed encoding technique is operational on a set of kernels, as stated in Equation 2. The first order of business was therefore the choice of a suitable set of kernels for our experiments. Since gammatone filters are widely used as a reasonable model of cochlear filters in auditory systems (14), and mathematically, are fairly simple to represent—atn−1e−2πbt cos(2πft+φ)—in our experiments we chose a set of gammatone filters as our kernels (Figure 1). The implementation of the filterbank is similar to (16), and we used up to 2000 gammatone kernels whose center frequencies were uniformly spaced on the ERB scale between 20 Hz to 20 kHz. In all experiments, the kernels were normalized, and the baseline thresholds and the ahp parameters were kept the same across all kernels." }, { "heading": "6.3 RESULTS", "text": "Following the assertion of Lemma 4, in all experiments, the baseline threshold and the absolute refractory period were kept low enough so that for each sound snippet near perfect reconstructions could be obtained at a high spike rate. A typical value of the refractory period was ≈ 5ms and the baseline threshold value was kept as low as 10−3. As a consequence of the corollary to Lemma 3, additional spikes did not hurt reconstruction. Experiments were conducted with varying number of kernels. Once a reconstruction at a high spike rate was attained, a greedy technique that removed spikes in order of their impact on the reconstruction was instituted to get a compressed code for each snippet. Reconstructions were then recomputed with the fewer spikes as constraints. We\n0.00 0.05 0.10 0.15 0.20 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.00 0.05 0.10 0.000 0.025 0.050 0.075 0.00 0.05\nTime in ms\n(a) (b) (c) (d) (e)\nFigure 1: Five sample gammatone filters used as kernels with center frequencies located at approximately (a) 82 Hz, (b) 118 Hz, (c) 158 Hz, (d) 203 and (e) 253 Hz, respectively.\nAmplitude 0\n5\n10\n15\n20\n25\nTi m e in m\ns\n0 500 1000 1500 Kernel Indexes\n0 200 400 600 800 Kernel Indexes\n0 100 200 300 400 500 Kernel Indexes\n(a) (b) (c) (d) (e) (f) (g)\nFigure 2: Reconstruction of a sample snippet in an experiment with 2000 kernels. (a) the input snippet, extended with zero padding to accommodate future spikes; (b) Spike trains of all kernels obtained at a low threshold and refractory period displayed as a raster plot with time (y-axis) and index of the gammatone kernels in increasing order of center frequencies (x-axis), and (c) the resulting reconstruction. This is an almost perfect reconstruction with a 32.7DB SNR at 1146 kHz spike rate of the ensemble. Subsequently spikes were deleted greedily (see text) to obtain reconstructions at lower spike rates. (d) Resulting spike pattern of the ensemble at 88.4kHz and (e) resulting reconstruction with SNR 19.7DB. Likewise, (f) spike pattern of the ensemble at 17.64kHz and (g) resulting reconstruction with SNR of 9DB. Time scale on left apply to all plots. 
It is noteworthy that in (d)&(f) spikes of the higher frequency kernels ended up deleted in the culling process.\nshould emphasize here that soon as spikes are removed to get a compressed representation of the signal, signals are no longer encoded via a simple spike train representation which ideally should communicate only spike times and not their corresponding threshold values. In other words, a compressed signal representation in this scheme needs to communicate both the spike times and the threshold values because once spikes are culled the decoder cannot infer the threshold values from the spike times and the given threshold function of the neurons in equation 1. In that sense a compressed representation of a signal in this approach can be realized through marked spike trains that carry both time as well as threshold information rather than a true spike train based representation. Figure 2 demonstrates this process applied to a sample sound snippet through several stages of removal of spikes. This process was repeated over 1000 randomly chosen sound snippets from the dataset. Figure 3 displays the complete results of the experiment with ≈ 2000 gammatone kernels. As the\nfigure demonstrates, at high spike rates nearly perfect reconstructions were obtained consistently, and even though lowering the spike rate gradually increased noise, reasonable reconstructions could be obtained at low spike rates (≈ 15DB at 25kHz on average). Since each reconstruction is calculated by solving a system of linear equations involving a P -matrix whose dimension is O(N2) where N is the number of spikes under consideration, computation is fairly time consuming, and the choice of parameters, such as the length of the input snippets, the number of kernels or the threshold parameters were made to ensure feasibility of computation with available resources while maintaining efficacy of the overall reconstruction process.\nSince the proposed framework approximates a signal with a sparse linear combination of shifted kernels (as shown by signal class G in 4) and therefore has similarities to compressed sensing, another set of experiments were designed to compare the proposed framework with existing sparse coding techniques viz, Convolutional Matching Pursuit and Convolutional Orthogonal Matching Pursuit. Since the sparse coding techniques are computationally intensive over continuous signals, this set of experiments were restricted to only 10 gammatone kernels (for CMP and COMP all possible shifts of these 10 kernels were considered) and the experiments were run over 30 sound snippets. Our technique was applied as before, starting at a high spike rate with spikes culled gradually to achieve better compression. The results of comparison of average SNR values obtained by the techniques are shown in figure 4. As is evident, our technique in its simplest form does slightly better than COMP up until ≈ 50kHz beyond which COMP performs better. Our technique was ≈ 1.2 times faster than COMP on an i7 hexa-core processor for these experiments and should naturally scale much better than COMP since the proposed technique does not involve repeated computation of inner products. The implementation details of the experiment can be found in our simulation code available at: http://bitbucket.org/crystalonix/oldsensorycoding.git." 
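The greedy compression step described above is only loosely specified ("removing spikes in order of their impact on the reconstruction"), so the following sketch should be read as one plausible instantiation: it repeatedly drops the spike whose removal increases the reconstruction error the least, recomputing the Lemma 1 reconstruction after every removal (the reconstruct helper is assumed to implement that solve). As noted above, the compressed representation keeps both spike times and their threshold values.

```python
import numpy as np

def cull_spikes(x, spikes, thresholds, kernels, dt, n, target_count, reconstruct):
    """Greedy compression: repeatedly drop the spike whose removal hurts the
    reconstruction least, re-solving the Lemma 1 system after every removal.
    (O(N) reconstructions per removal; fine for a sketch, costly at scale.)"""
    spikes, thresholds = list(spikes), list(thresholds)
    while len(spikes) > target_count:
        errs = []
        for i in range(len(spikes)):
            s = spikes[:i] + spikes[i + 1:]
            t = thresholds[:i] + thresholds[i + 1:]
            errs.append(np.linalg.norm(x - reconstruct(s, kernels, t, dt, n)))
        i_drop = int(np.argmin(errs))          # least-impact spike
        spikes.pop(i_drop)
        thresholds.pop(i_drop)
    return spikes, thresholds                  # compressed code: (time, threshold) pairs
```

Reconstructions at progressively lower spike rates, as in Figure 2(d)-(g), are then obtained by re-running the Lemma 1 solve on the surviving (time, threshold) pairs.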
}, { "heading": "7 RELATION TO PRIOR WORK", "text": "The problem of representing continuous time signals using ensembles of spike trains has a rich history both in the neuromorphic computing community as well as in computational neuroscience. Most such work rely on classical Nyquist–Shannon sampling theory wherein signals are assumed to be band-limited and reconstruction is realized through sinc filters, albeit via the spike trains. Among existing spike based coding techniques, (4) has explored the spike generating mechanism of the neuron as an oversampling, noise shaping analog-to-digital converter, and (11) represents signals via time encoding machines. Likewise, an image encoding technique has been discussed in (8) using an\nintegrate-threshold-reset framework that results in spike trains, which leverages differential pulse-code modulation (DPCM) at its core. In our case the input signals considered are elements of L2(R) with finite rate of innovation, the reconstruction error of which tends to zero as the signal approaches the span of appropriately chosen kernel functions which are again a generic class of continuous functions. Our work differs from existing approaches in that using our scheme signal reconstruction is realized via a sparse set of idealized spikes, whereas in the former case signals need to be sampled at a rate higher than the Nyquist rate and reconstruction implicitly relies on sinc interpolation. Since the class of signals G considered in our analysis takes the form X(t) = ∑N p=1 αpK\njp(tp − t), our problem formulation is comparable to that of convolution sparse coding or compressive sensing deconvolution, which, in general, is a hard problem and hence is solved under certain relaxed criteria (3) or by using certain greedy heuristics (12). Our framework provides a corresponding approximate solution to the general problem leveraging a biological thresholding scheme to produce spikes simultaneously at a high rate and then gradually removing unimportant spikes. The proposed technique, therefore, is a novel alternative to existing solutions." }, { "heading": "8 CONCLUSION", "text": "We have proposed a framework that codes for continuous time signals using an ensemble of spike trains in a manner that is very different from the pulse-density paradigm. The framework applies to all finite rate of innovation signals, which is a very large class that includes bandlimited signals. Although approximate reconstruction is computationally more expensive than interpolation with a sinc kernel (as in Nyquist Shannon), it is feasible, unlike in the case of compressed sensing where the generic case is NP-hard. Fortuitously, the system of linear equations is best solved using the conjugate gradient method since P is a symmetric positive semidefinite matrix. The excellent reconstruction results we have obtained with 2000 kernels—with no parameter tuning—is a testament to the potential of the technique. The human cochlear nerve, in comparison, contains axons of ≈ 50, 000 spiral ganglion cells (corresponding, therefore, to 50,000 kernels). As our theorems show, reconstruction with such a large set of kernels is guaranteed to be even better, albeit at a higher computational cost." }, { "heading": "A PROOF LEMMA 2:", "text": "Lemma 2: Let equation 5 resulting from the optimization problem 2 have multiple solutions. 
Consider any two different solutions for α, namely α1 and α2, and hence the corresponding reconstructions are given by X1(t) = ∑N i=1 α1iK ji(ti − t) and X2(t) = ∑N i=1 α2iK\nji(ti − t), respectively. Then X1 = X2. Proof: Let S be the subspace of L2-functions spanned by {Kji(ti − t)|i ∈ 1, 2, ..., N} with the standard inner product (by assumption each of {Kji(ti − t)|i ∈ 1, 2, ..., N} are L2-functions and hence S is a subspace of the larger space of all L2-functions). Clearly S is a Hilbert space with dim(S) ≤ N . Hence there exists {e1, ..., eM}, an orthonormal basis of S (where M ≤ N ). Assume that the hypothesis is false, i.e. X1 6= X2. This implies that ∃ ais and bis such that X1(t) = ∑M i=1 aiei and X2(t) = ∑M i=1 biei where not all ais are same as the corresponding bis. =⇒ ∃ k such that ak 6= bk =⇒ 〈X1(t), ek〉 = ak 6= bk = 〈X2(t), ek〉 But ek ∈ span({Kji(ti−t)|i ∈ 1, 2, ..., N}) =⇒ ∃{c1, ..., cN} such that ek = ∑N i=1 ciK\nji(ti−t) =⇒ 〈X1(t), ek〉 = ∑N i=1 ci〈X1(t),Kji(ti − t)〉 = ∑N i=1 ciT\nji(ti) Now, since X1(t), X2(t) are both solutions to the optimization problem 2 =⇒ 〈X1(t), ek〉 = ∑N i=1 ci〈X2(t),Kji(ti − t) = 〈X2(t), ek〉 =⇒ 〈X1(t), ek〉 = 〈X2(t), ek〉 -which contradicts the hypothesis. 2" }, { "heading": "B PROOF OF LEMMA 4", "text": "Lemma 4: Let X(t) be an input signal. Let Kp be a kernel for which we want to generate a spike at time tp. Let the inner product 〈X(t),Kp(tp − t)〉 = Ip. Then, if the baseline threshold of the kernel Kp is Cp ≤ Ip and the absolute refractory period is δ as modeled in Equation 1, the kernel Kp must produce a spike in the interval [tp−δ, tp] according to the threshold model defined in Equation 1.\nProof: The lemma is easily proved by contradiction. Assume that prior to and including time tp, the last spike produced by kernel Kp was at time tl. Also assume that tl < tp − δ so that there is no spike in the interval [tp − δ, tp]. Then by Equation 1 the threshold of kernel Kp at time tp is T p(tp) = Cp. But, 〈X(t),Kp(tp − t)〉 = Ip ≥ Cp. Furthermore, since Kp(t) is a continuous function, the convolution C(t) = ∫ X(τ)Kp(t− τ)dτ varies continuously. By Equation 1, as the ahp kicks up the threshold to an arbitrarily high value Mp at time tl and falls linearly to the value Cp before time tp, by the intermediate value theorem, the threshold must be crossed by the convolution C(t) after time tl and before time tp. Since tl was the last spike of kernel Kp before time tp, this is a contradiction. Hence, tl ≮ tp − δ, implying that there must a spike in the interval [tp − δ, tp]. 2" }, { "heading": "C PROOF OF APPROXIMATE RECONSTRUCTION THEOREM", "text": "Approximate Reconstruction Theorem: Let the following assumptions be true:\n• X(t), the input signal to the proposed framework, can be written as a linear combination of some component functions as below:\nX(t) = N∑ i=1 αifpi(ti − t)\nwhere αi are bounded real coefficients, the component functions fpi(t) are chosen from a possibly infinite set H = {fi(t)|i ∈ Z+, ||fi(t)|| = 1} of functions of unit L2 norm with compact support, and the corresponding ti ∈ R+ are chosen to be bounded arbitrary time shifts of the component functions so that overall signal has a compact support in [0, T ] for some T ∈ R+ and thus the input signal still belongs to the same class of signals F , as defined in section 2.\n• There is at least one kernel Kji from the bag of encoding kernels K, such that the L2distance of fpi(t) from K\nji(t) is bounded. Formally, ∃ δ ∈ R+ s.t. 
||fpi(t)−Kji(t)||2 < δ ∀ i ∈ {1, ..., N}.\n• When X(t) is encoded by the proposed framework, each one of these kernels Kji produce a spike at time t′i at threshold Ti such that |ti − t′i| < ∆ ∀i, for some ∆ ∈ R+.\n• Each kernel Kj ∈ K satisfies a Lipschitz type of condition as follows: ∃ C∈R s.t. ||Kj(t)−Kj(t−∆t)||2 ≤ C|∆t|, ∀∆t ∈ R, ∀j.\n• And lastly the shifted component functions satisfy a frame bound type of condition as follows:∑ k 6=i〈fpi(t− ti), fpk(t− tk)〉 ≤ η ∀ i ∈ {1, ..., N}\nThen, reconstruction X∗(t), resulting from the proposed framework, has a bounded noise to signal ratio. Specifically, the following inequality is satisfied:\n||X(t)−X∗(t)||22/||X(t)||22 ≤ (δ + C∆)(1+xmax)/(1−η)\nwhere xmax is a positive number ∈ [0, N − 1] that depends on the maximum overlap of the support of component functions fpi(t− ti).\nProof of the Theorem: By hypothesis each kernel Kji produces a spike at time t′i ∀i ∈ {1, ..., N} . Let us call these spikes as fitting spikes. But the coding model might generate some other spikes against X(t) too. Other than the set of fitting spikes {(t′i,Kji)|i ∈ {1, ..., N}}, let {(t̃k,K j̃k)|k ∈ {1, ...,M}} denote those extra set of spikes that the coding model produces for input X(t) against the kernel bag K and call these extra spikes as spurious spikes. Here, M is the number of spurious spikes. By Lemma1 X∗(t) can be represented as below: X∗(t) = ∑N i=1 αiK ji(t′i − t) + ∑M k=1 α̃kK j̃k(t̃k − t)\nwhere αi and α̃k are real coefficients whose values can be formulated again from Lemma1. Let Ti be the thresholds at which kernel Kji produced the spike at time t′i as given in the hypothesis. Hence for generation of the fitting spikes the following condition must be satisfied:\n〈X(t),Kji(t′i − t)〉 = Ti∀i ∈ {1, 2, ..., N} (6)\nConsider a hypothetical signal Xhyp(t) defined by the equations below:\nXhyp(t) = N∑ i=1 aiK ji(t′i − t), ai ∈ R\ns.t.〈Xhyp(t),Kji(t′i − t)〉 = Ti,∀i (7)\nClearly this hypothetical signal Xhyp(t) can be deemed as if it is the reconstructed signal where we are only considering the fitting spikes and ignoring all spurious spikes. 
Since, Xhyp(t) lies in the span of the shifted kernels used in reconstruction of X(t) using Lemma 3 we may now write:\n||X(t)−Xhyp(t)|| ≥ ||X(t)−X∗(t)|| (8) ||X(t)−Xhyp(t)||22 = 〈X(t)−Xhyp(t), X(t)−Xhyp(t)〉 = 〈X(t)−Xhyp(t), X(t)〉 − 〈X(t)−Xhyp(t), Xhyp(t)〉 = 〈X(t)−Xhyp(t), X(t)〉 − ΣNi=1ai〈X(t)−Xhyp(t),Kji(t− t′i)〉 = ||X(t)||22 − 〈X(t), Xhyp(t)〉 (Since by construction〈Xhyp(t),Kji(t− t′i)〉 = Ti∀i ∈ {1...N}) = ΣNi=1Σ\nN k=1αiαk〈fi(t− ti), fk(t− tk)〉 − ΣNi=1ΣNk=1αiak〈fi(t− ti),Kjk(t− t′k))〉\n= αTFα− αTFKa (9) (denote a = [a1, a2, ..., aN ]T , α = [α1, α2, ..., αN ]T , F = [Fik]N×N , an N ×N matrix, where Fik = 〈fi(t− ti), fk(t− tk)〉 and FK = [(FK)ik]NXN where (FK)ik = 〈fi(t− ti),Kjk(t− t′k)〉)\nBut using the results of Lemma1 a can be written as:\na = P−1T where P = [Pik]NXN , Pik = 〈Kji(t− t′i),Kjk(t− t′k)〉 And, T = [Ti]N×1 where Ti = 〈X(t),Kji(t− t′i)〉 = ΣNk=1αk〈fk(t− tk),Kji(t− t′i)〉 = FTKα =⇒ a = P−1FTKα\nPlugging this expression of a in equations 9 we get,\n||X(t)−Xhyp(t)||22 = αTFα− αTFKP−1FTKα (10) But,(FK)ik = 〈fi(t− ti),Kjk(t− t′k)〉 = 〈Kji(t− t′i),Kjk(t− t′k)〉\n− 〈Kji(t− t′i)− fi(t− ti),Kjk(t− t′k)〉 = (P )ik − (EK)ik (11)\n(denoting EK = [(EK)ik]N×N , where (EK)ik = 〈Kji(t− t′i)− fi(t− ti),Kjk(t− t′k)〉) Also, (F )ik = 〈fi(t− ti), fk(t− tk)〉 = 〈fi(t− ti)−Kji(t− t′i) +Kji(t− t′i),\nfk(t− tk)−Kjk(t− t′k) +Kjk(t− t′k)〉 = (E)ik − (EK)ik − (EK)ki + (P )ik (12)\nCombining 10, 11 and 12 we get,\n||X(t)−Xhyp(t)||22 = αTFα− αTFKP−1FTKα = αTEα− αTEKα− αTETKα+ αTPα − αTPα+ αTEKα+ αTETKα− αTEKP−1ETKα = αTEα− αTEKP−1ETKα ≤ αTEα\n(Since, P is an SPD matrix, αTEKP−1ETKα > 0) (13)\nWe seek for a bound for the above expression. For that we observe the following:\n|(E)ik| = |〈fi(t− ti)−Kji(t− t ′ i), fk(t− tk)−Kjk(t− t ′ k)〉|\n= ||fi(t− ti)−Kji(t− t ′ i)||2||fk(t− tk)−Kjk(t− t ′\nk)||2.xik (where xik ∈ [0, 1]. We also note that xik is close to 0 when there is not much overlap in the support of the two components and their corresponding fitting kernels.)\n≤ (||(fi(t− ti)−Kji(t− ti)||+\n||Kji(t− ti)−Kji(t− t ′\ni))||). (||fk(t− tk)−Kjk(t− tk)||\n+ ||Kjk(t− tk)−Kjk(t− t ′\nk)||).xik =⇒ (E)ik ≤ xik.(δ + C∆)2 (14)\nUsing Gershgorin circle theorem, the maximum eigen value of E : Λmax(E) ≤ maxi((E)ii + Σk 6=i|(E)ik|)\n≤ (δ + C∆)2(xmax + 1) (Using 14) (15) (where xmax ∈ [0, N − 1] is a positive number that depends on the maximum overlap of the supports of the component signals and their fitting kernels.) Similarly, the minimum eigen value of F is: Λmin(F ) = mini((F )ii − Σi 6=k|〈fpi(t− ti), fpk(t− tk)〉|) ≥ 1− η (16) (By assumption Σi6=k| < fpi(t− ti), fpk(t− tk) > | ≤ η ) Combining the results from 13, 15 and 16 we get: ||X(t)−Xhyp(t)||2/||X(t)||2 ≤ αTEα/αTFα ≤ Λmax(E)/Λmin(F ) ≤ (δ + C∆)2(xmax + 1)/(1− η) (17)\nFinally using 8 we conclude, ||X(t)−X∗(t)||2/||X(t)||2 ≤ ||X(t)−Xhyp(t)||2/||X(t)||2\n≤ (δ + C∆)2(xmax + 1)/(1− η)" } ]
2020
null
SP:725d036c0863e59f6bb0b0bb22cc0ad3a0988126
[ "Review: This paper studies how to improve contrastive divergence (CD) training of energy-based models (EBMs) by revisiting the gradient term neglected in the traditional CD learning. This paper also introduces some useful techniques, such as data augmentation, multi-scale energy design, and reservoir sampling to improve the training of energy-based model. Empirical studies are performed to validate the proposed learning strategy on the task of image generation, OOD detection, and compositional generation." ]
We propose several different techniques to improve contrastive divergence training of energy-based models (EBMs). We first show that a gradient term neglected in the popular contrastive divergence formulation is both tractable to estimate and is important to avoid training instabilities in previous models. We further highlight how data augmentation, multi-scale processing, and reservoir sampling can be used to improve model robustness and generation quality. Thirdly, we empirically evaluate stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation.
[]
[ { "authors": [ "Sergey Bartunov", "Jack W Rae", "Simon Osindero", "Timothy P Lillicrap" ], "title": "Meta-learning deep energy-based memory models", "venue": "arXiv preprint arXiv:1910.02720,", "year": 2019 }, { "authors": [ "Jan Beirlant", "E. Dudewicz", "L. Gyor", "E.C. Meulen" ], "title": "Nonparametric entropy estimation: An overview", "venue": "International Journal of Mathematical and Statistical Sciences,", "year": 1997 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "R Devon Hjelm" ], "title": "Mine: mutual information neural estimation", "venue": "arXiv preprint arXiv:1801.04062,", "year": 2018 }, { "authors": [ "Ting Chen", "Xiaohua Zhai", "Marvin Ritter", "Mario Lucic", "Neil Houlsby" ], "title": "Self-supervised gans via auxiliary rotation loss", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Bo Dai", "Zhen Liu", "Hanjun Dai", "Niao He", "Arthur Gretton", "Le Song", "Dale Schuurmans" ], "title": "Exponential family estimation via adversarial dynamics embedding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yuntian Deng", "Anton Bakhtin", "Myle Ott", "Arthur Szlam", "Marc’Aurelio Ranzato" ], "title": "Residual energy-based models for text generation", "venue": "arXiv preprint arXiv:2004.11714,", "year": 2020 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv preprint arXiv:1903.08689,", "year": 2019 }, { "authors": [ "Yilun Du", "Toru Lin", "Igor Mordatch" ], "title": "Model based planning with energy based models", "venue": "arXiv preprint arXiv:1909.06878,", "year": 2019 }, { "authors": [ "Yilun Du", "Shuang Li", "Igor Mordatch" ], "title": "Compositional visual generation with energy based models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Yilun Du", "Joshua Meier", "Jerry Ma", "Rob Fergus", "Alexander Rives" ], "title": "Energy-based models for atomic-resolution protein conformations", "venue": "arXiv preprint arXiv:2004.13167,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm", "venue": null, "year": 2017 }, { "authors": [ "Ruiqi Gao", "Yang Lu", "Junpei Zhou", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning generative convnets via multi-grid modeling and sampling", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ruiqi Gao", "Erik Nijkamp", "Diederik P Kingma", "Zhen Xu", "Andrew M Dai", "Ying Nian Wu" ], "title": "Flow contrastive estimation of energy-based models", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": null, "year": 1912 }, { "authors": [ "Will Grathwohl", 
"Kuan-Chieh Wang", "Jorn-Henrik Jacobsen", "David Duvenaud", "Richard Zemel" ], "title": "Cutting out the middle-man: Training and evaluating energy-based models without sampling", "venue": "arXiv preprint arXiv:2002.05616,", "year": 2020 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron Courville" ], "title": "Improved training of wasserstein gans", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Haoran Tang", "Pieter Abbeel", "Sergey Levine" ], "title": "Reinforcement learning with deep energy-based policies", "venue": "arXiv preprint arXiv:1702.08165,", "year": 2017 }, { "authors": [ "Tian Han", "Erik Nijkamp", "Xiaolin Fang", "Mitch Hill", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Divergence triangle for joint training of generator model, energy-based model, and inferential model", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "arXiv preprint arXiv:1610.02136,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural Comput.,", "year": 2002 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "arXiv preprint arXiv:2006.11239,", "year": 2020 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "David Isele", "Akansel Cosgun" ], "title": "Selective experience replay for lifelong learning", "venue": "arXiv preprint arXiv:1802.10269,", "year": 2018 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Taesup Kim", "Yoshua Bengio" ], "title": "Deep directed generative models with energy-based probability estimation", "venue": "arXiv preprint arXiv:1606.03439,", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "LF Kozachenko", "Nikolai N Leonenko" ], "title": "Sample estimate of the entropy of a random vector", "venue": "Problemy Peredachi Informatsii,", "year": 1987 }, { "authors": [ "Rithesh Kumar", "Anirudh Goyal", "Aaron Courville", "Yoshua Bengio" ], "title": "Maximum entropy generators for energy-based models", "venue": "arXiv preprint arXiv:1901.08508,", "year": 2019 }, { "authors": [ "Kwonjoon Lee", "Weijian Xu", "Fan Fan", "Zhuowen Tu" ], "title": "Wasserstein introspective neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Learning deep energy models: Contrastive divergence vs. 
amortized mle", "venue": "arXiv preprint arXiv:1707.00797,", "year": 2017 }, { "authors": [ "Yang Liu", "Prajit Ramachandran", "Qiang Liu", "Jian Peng" ], "title": "Stein variational policy gradient", "venue": "arXiv preprint arXiv:1704.02399,", "year": 2017 }, { "authors": [ "Siwei Lyu" ], "title": "Unifying non-maximum likelihood learning objectives with minimum kl contraction", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2011 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Radford M Neal" ], "title": "Mcmc using hamiltonian dynamics", "venue": "Handbook of Markov Chain Monte Carlo,", "year": 2011 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Tian Han", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "On the anatomy of mcmcbased maximum likelihood learning of energy-based models", "venue": "arXiv preprint arXiv:1903.12370,", "year": 2019 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning non-convergent nonpersistent short-run mcmc toward energy-based model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy Lillicrap", "Gregory Wayne" ], "title": "Experience replay for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Francisco JR Ruiz", "Michalis K Titsias" ], "title": "A contrastive divergence for combining variational inference and mcmc", "venue": "arXiv preprint arXiv:1905.04062,", "year": 2019 }, { "authors": [ "Ruslan Salakhutdinov", "Geoffrey E. 
Hinton" ], "title": "Deep boltzmann machines", "venue": "AISTATS, volume 5 of JMLR Proceedings,", "year": 2009 }, { "authors": [ "Saeed Saremi", "Arash Mehrjou", "Bernhard Schölkopf", "Aapo Hyvärinen" ], "title": "Deep energy estimator networks", "venue": "arXiv preprint arXiv:1805.08306,", "year": 2018 }, { "authors": [ "Benjamin Scellier", "Yoshua Bengio" ], "title": "Equilibrium propagation: Bridging the gap between energybased models and backpropagation", "venue": "Frontiers in computational neuroscience,", "year": 2017 }, { "authors": [ "Jascha Sohl-Dickstein", "Eric A Weiss", "Niru Maheswaranathan", "Surya Ganguli" ], "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "venue": "arXiv preprint arXiv:1503.03585,", "year": 2015 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yunfu Song", "Zhijian Ou" ], "title": "Learning neural random fields with inclusive auxiliary generators", "venue": "arXiv preprint arXiv:1806.00271,", "year": 2018 }, { "authors": [ "Tijmen Tieleman" ], "title": "Training restricted boltzmann machines using approximations to the likelihood gradient", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Alexandre B Tsybakov", "EC Van der Meulen" ], "title": "Root-n consistent estimators of entropy for densities with unbounded support", "venue": "Scandinavian Journal of Statistics,", "year": 1996 }, { "authors": [ "Aaron Van Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Ramakrishna Vedantam", "Ian Fischer", "Jonathan Huang", "Kevin Murphy" ], "title": "Generative models of visually grounded imagination", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Yingnian Wu" ], "title": "A theory of generative convnet", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jianwen Xie", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Synthesizing dynamic patterns by spatial-temporal generative convnet", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Ruiqi Gao", "Ying Nian Wu" ], "title": "Cooperative learning of energy-based model and latent variable model via mcmc teaching", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jianwen Xie", "Zilong Zheng", "Ruiqi Gao", "Wenguan Wang", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning descriptor networks for 3d shape synthesis and analysis", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Kenny J Young", "Richard S Sutton", "Shuo Yang" ], "title": "Integrating episodic memory into a reinforcement learning agent using reservoir sampling", "venue": "arXiv preprint arXiv:1806.00540,", "year": 2018 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Energy-Based models (EBMs) have received an influx of interest recently and have been applied to realistic image generation (Han et al., 2019; Du & Mordatch, 2019), 3D shapes synthesis (Xie et al., 2018b) , out of distribution and adversarial robustness (Lee et al., 2018; Du & Mordatch, 2019; Grathwohl et al., 2019), compositional generation (Hinton, 1999; Du et al., 2020a), memory modeling (Bartunov et al., 2019), text generation (Deng et al., 2020), video generation (Xie et al., 2017), reinforcement learning (Haarnoja et al., 2017; Du et al., 2019), protein design and folding (Ingraham et al.; Du et al., 2020b) and biologically-plausible training (Scellier & Bengio, 2017). Contrastive divergence is a popular and elegant procedure for training EBMs proposed by (Hinton, 2002) which lowers the energy of the training data and raises the energy of the sampled confabulations generated by the model. The model confabulations are generated via an MCMC process (commonly Gibbs sampling or Langevin dynamics), leveraging the extensive body of research on sampling and stochastic optimization. The appeal of contrastive divergence is its simplicity and extensibility. It does not require training additional auxiliary networks (Kim & Bengio, 2016; Dai et al., 2019) (which introduce additional tuning and balancing demands), and can be used to compose models zero-shot.\nDespite these advantages, training EBMs with contrastive divergence has been challenging due to training instabilities. Ensuring training stability required either combinations of spectral normalization and Langevin dynamics gradient clipping (Du & Mordatch, 2019), parameter tuning (Grathwohl et al., 2019), early stopping of MCMC chains (Nijkamp et al., 2019b), or avoiding the use of modern deep learning components, such as self-attention or layer normalization (Du & Mordatch, 2019). These requirements limit modeling power, prevent the compatibility with modern deep learning architectures, and prevent long-running training procedures required for scaling to larger datasets. With this work, we aim to maintain the simplicity and advantages of contrastive divergence training, while resolving stability issues and incorporating complementary deep learning advances.\nAn often overlooked detail of contrastive divergence formulation is that changes to the energy function change the MCMC samples, which introduces an additional gradient term in the objective function (see Section 2.1 for details). This term was claimed to be empirically negligible in the original formulation and is typically ignored (Hinton, 2002; Liu & Wang, 2017) or estimated via highvariance likelihood ratio approaches (Ruiz & Titsias, 2019). We show that this term can be efficiently estimated for continuous data via a combination of auto-differentiation and nearest-neighbor entropy estimators. We also empirically show that this term contributes significantly to the overall training gradient and has the effect of stabilizing training. It enables inclusion of self-attention blocks into network architectures, removes the need for capacity-limiting spectral normalization, and allows us to train the networks for longer periods. We do not introduce any new objectives or complexity - our procedure is simply a more complete form of the original formulation.\nWe further present techniques to improve mixing and mode exploration of MCMC transitions in contrastive divergence. 
We propose data augmentation as a useful tool to encourage mixing in MCMC by directly perturbing input images to related images. By incorporating data augmentation as semantically meaningful perturbations, we are able to greatly improve mixing and diversity of MCMC chains. We further propose to maintain a reservoir sample of past samples, improving the diversity of MCMC chain initialization in contrastive divergence. We also leverage compositionality of EBMs to evaluate an image sample at multiple image resolutions when computing energies. Such evaluation and coarse and fine scales leads to samples with greater spatial coherence, but leaves MCMC generation process unchanged. We note that such hierarchy does not require specialized mechanisms such as progressive refinement (Karras et al., 2017)\nOur contributions are as follows: firstly, we show that a gradient term neglected in the popular contrastive divergence formulation is both tractable to estimate and is important in avoiding training instabilities that previously limited applicability and scalability of energy-based models. Secondly, we highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Thirdly, we empirically evaluate stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation." }, { "heading": "2 AN IMPROVED CONTRASTIVE DIVERGENCE FRAMEWORK FOR ENERGY BASED MODELS", "text": "Energy based models (EBMs) represent the likelihood of a probability distribution for x ∈ RD as pθ(x) = exp(−Eθ(x)) Z(θ) where the function Eθ(x) : R\nD → R, is known as the energy function, and Z(θ) = ∫ x\nexp−Eθ(x) is known as the partition function. Thus an EBM can be represented by an neural network that takes x as input and outputs a scalar.\nTraining an EBM through maximum likelihood (ML) is not straightforward, as Z(θ) cannot be reliably computed, since this involves integration over the entire input domain of x. However, the gradient of log-likelihood with respect to a data sample x can be represented as\n∂ log pθ(x)\n∂θ = −\n( ∂Eθ(x)\n∂θ − Epθ(x′)\n[ ∂Eθ(x ′)\n∂θ\n]) . (1)\nNote that Equation 1 is still not tractable, as it requires using Markov Chain Monte Carlo (MCMC) to draw samples from the model distribution pθ(x), which often takes exponentially long to mix. As a practical approximation to the above objective, (Hinton, 2002) proposes the contrastive divergence objective KL(p(x) || pθ(x))− KL(Πtθ(p(x)) || pθ(x)), (2) where Πθ represents a MCMC transition kernel for pθ, and Πtθ(p(x)) represents t sequential MCMC transitions starting from p(x). The above objective can be seen as an improvement operator, where KL(p(x) || pθ(x)) ≥ KL(Πtθ(p(x)) || pθ(x)), because Πθ is converging to equilibrium distribution pθ(x) (Lyu, 2011). Furthermore, the above objective is only zero (at its fixed point), when Πθ does not change the distribution of p(x), which corresponds to pθ(x) = p(x)." }, { "heading": "2.1 A MISSING TERM IN CONTRASTIVE DIVERGENCE", "text": "When taking the negative gradient of the contrastive divergence objective (Equation 2), we obtain the expression\n− ( Ep(x) [ ∂Eθ(x)\n∂θ\n] − Eqθ(x′)[ ∂Eθ(x ′)\n∂θ ] +\n∂q(x′)\n∂θ ∂KL(qθ(x) || pθ(x)) ∂qθ(x)\n) , (3)\nwhere for brevity, we summarize Πtθ(p(x)) = qθ(x). 
The first two terms are identical to those of Equation 1 and the third gradient term (which we refer to as the KL divergence term) corresponds to minimizing the divergence between qθ(x) and pθ(x). In practice, past contrastive divergence approaches have ignored the third gradient term, which was difficult to estimate and claimed to be empirically negligible (Hinton, 1999). These gradients correspond to a joint loss expression LFull, consisting of traditional contrastive loss LCD and a new loss expression LKL. Specifically, we have LFull = LCD + LKL where LCD is\nLCD = Ep(x)[Eθ(x)]− Estop gradient(qθ(x′))[Eθ(x ′)], (4)\nand the ignored KL divergence term corresponding to the loss LKL = Eqθ(x)[Estop gradient(θ)(x)] + Eqθ(x)[log(qθ(x))]. (5)\nDespite being difficult to estimate, we show that LKL is a useful tool for both speeding up and stabilizing training of EBMs. Figure 2 illustrates the overall effects of both losses. Equation 4 encourage the energy function to assign low energy to real samples and high energy for generated samples. However, only optimizing Equation 4 often leads to an adversarial mode where the energy function learns to simply generate an energy landscape that makes sampling difficult. The KL divergence term counteracts this effect, and encourages sampling to closely approximate the underlying distribution pθ(x), by encouraging samples to be both low energy under the energy function as well as diverse. Next, we discuss our approach towards estimating this KL divergence, and show that it significantly improves the stability when training EBMs." }, { "heading": "2.2 ESTIMATING THE MISSING GRADIENT TERM", "text": "Estimating LKL can further be decomposed into two separate objectives, minimizing the energy of samples from qθ(x), which we refer to as Lopt (Equation 6) and maximizing the entropy of samples from qθ(x) which we refer to as Lent (Equation 7).\nMinimizing Sampler Energy. To minimize the energy of samples from qθ(x) we can directly differentiate through both the energy function and MCMC sampling. We follow recent work in EBMs and utilize Langevin dynamics (Du & Mordatch, 2019; Nijkamp et al., 2019b; Grathwohl et al., 2019) for our MCMC transition kernel, and note that each step of Langevin sampling is fully differentiable with respect to underlying energy function parameters. Precisely, gradient of Lopt becomes\n∂Lopt ∂θ = Eqθ(x′0,x′1,...,x′t)\n[ ∂Estop gradient(θ)(x ′ t−1 −∇x′t−1Eθ(x ′ t−1) + ω)\n∂θ\n] , ω ∼ N (0, λ) (6)\nwhere x′i represents the i th step of Langevin sampling. To reduce to memory overhead of this differentiation procedure, we only differentiate through the last step of Langevin sampling (though we show it the appendix that leads to the same effect as differentiation through Langevin sampling).\nEntropy Estimation. To maximize the entropy of samples from qθ(x), we use a non-parametric nearest neighbor entropy estimator (Beirlant et al., 1997), which is shown to be mean square consistent (Kozachenko & Leonenko, 1987) with root-n convergence rate (Tsybakov & Van der Meulen, 1996). The entropy H of a distribution p(x) can be estimated through a set X = x1, x2, . . . , xn of n different points sampled from p(x) as H(pθ(x)) = 1n ∑n i=1 ln(n · NN(xi, X)) + O(1) where the function NN(xi, X) denotes the nearest neighbor distance of xi to any other data point in X . 
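A minimal PyTorch sketch of this nearest-neighbor entropy estimator, up to the additive O(1) constant, is given below; the optional reference argument allows the nearest-neighbor search to be taken over a separate set of past samples, and all names are our own.

```python
import torch

def nn_entropy(samples, reference=None):
    """Nearest-neighbor entropy estimate: H(p) ~= mean_i log(n * NN(x_i)) + O(1).
    samples: (n, d) flattened samples; reference: optional (m, d) set used for the
    nearest-neighbor search (defaults to the samples themselves)."""
    n = samples.shape[0]
    ref = samples if reference is None else reference
    dist = torch.cdist(samples, ref)                                 # pairwise L2 distances
    if reference is None:
        dist = dist + 1e10 * torch.eye(n, device=samples.device)     # exclude self-distance
    nn_dist = dist.min(dim=1).values
    return torch.log(n * nn_dist + 1e-12).mean()
```

With reference set to a buffer of past samples, this matches the form used in Equation 7.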
Based off the above entropy estimator, we write Lent as Lent = Eq(x)[log(NN(x, B))] (7) where we measure the nearest neighbor with respect to a set B of 1000 past samples from MCMC chains (see Section 2.5 for more details). We utilize L2 distance as the metric for computing nearest neighbors. Alternatively, Stein’s identity may also be used to estimate entropy, but this requires considering all samples, as opposed to the nearest, becoming computationally intractable. Our entropy estimator serves a simple, quick to compute estimator of entropy, that prevents sampling from collapsing. Empirically, we find that the combination of the above terms in LKL significantly improves both the stability and generation quality of EBMs, improving robustness across different model architectures." }, { "heading": "2.3 DATA AUGMENTATION TRANSITIONS", "text": "Langevin sampling, our MCMC transition kernel, is prone to falling into local probability modes (Neal, 2011). In the image domain, this manifests with sampling chains always converging to a fixed image (Du & Mordatch, 2019). A core difficulty is that distances between two qualitatively similar images can be significantly far away from each in input domain, on which sampling is applied. While LKL encourages different sampling chains to cover the model distribution, Langevin dynamics alone is not enough to encourage large jumps in finite number of steps. It is further beneficial to have an individual sampling chain have to ability mix between probability modes.\nTo encourage greater exploration between similar inputs in our model, we propose to augment chains of MCMC sampling with periodic data augmentation transitions that encourages movement between “similar” inputs. In particular, we utilize a combination of color, horizontal flip, rescaling, and Gaussian blur augmentations. Such combinations of augmentation has recently seen success applied in unsupervised learning (Chen et al., 2020). Specifically, during training time, we initialize MCMC sampling from a data augmentation applied to an input sampled from the buffer of past samples. At test time, during generation, we apply a random augmentation to the input after every 20 steps of Langevin sampling. We illustrate this process in the bottom of Figure 2. Data augmentation transitions are always taken." }, { "heading": "2.4 COMPOSITIONAL MULTI-SCALE GENERATION", "text": "To encourage energy functions to focus on features in both low and high resolutions, we define our energy function as the composition (sum) of a set of energy functions operating on different scales of an image, illustrated in Figure 3. Since the downsampling operation is fully differentiable, Langevin based sampling can be directly applied to the energy function. In our experiments, we utilize full, half, and quarter resolution image as input and show in the appendix that this improves the generation performance." }, { "heading": "2.5 RESERVOIR SAMPLING", "text": "To encourage qθ(x) to match pθ(x), MCMC steps in qθ(x) are often initialized from past samples from qθ(x) to enable more diverse mode exploration, a training objective known as persistent contrastive divergence (Tieleman, 2008). Du & Mordatch (2019) propose to implement sampling\nTable 1: Table of Inception and FID scores for generations of CIFAR-10, CelebA-HQ and LSUN bedroom scenes. * denotes our reimplementation of a SNGAN 128x128 model using the torch mimicry GAN library. 
All others numbers are taken directly from corresponding papers.\nModel Inception* FID\nCIFAR-10 Unconditional PixelCNN (Van Oord et al., 2016) 4.60 65.93 IGEBM (Du & Mordatch, 2019) 6.02 40.58 DCGAN (Radford et al., 2016) 6.40 37.11 WGAN + GP (Gulrajani et al., 2017) 6.50 36.4 Ours 7.58 35.4 SNGAN (Miyato et al., 2018) 8.22 21.7\nCelebA-HQ 128x128 Unconditional SNGAN* - 55.25 Ours - 35.06 SSGAN (Chen et al., 2019) - 24.36\nLSUN Bedroom 128x128 Unconditional SNGAN* - 125.53 Ours - 49.30\nFigure 5: Visualization of Langevin dynamics sampling chains on an EBM trained on CelebA-HQ 128x128. Samples travel between different modes of images. Each consecutive images represents 30 steps of sampling, with data augmentation transitions every 60 steps .\nfrom past samples by utilizing a replay buffer of samples from qθ(x) interspersed with samples initialized from random noise. By storing a large batch of past samples, the replay buffer is able to enforce diversity across chains. However, as samples are initialized from the replay buffer and added to the buffer again, the replay buffer becomes filled with a set of correlated samples from qθ(x) over time. To encourage a buffer distribution representative of all past samples, we instead use reservoir sampling technique over all past samples from qθ(x). This technique has previously been found helpful in balancing replay in reinforcement learning (Young et al., 2018; Isele & Cosgun, 2018; Rolnick et al., 2019). Under a reservoir sampling implementation, any sample from qθ(x) has an equal probability of being the reservoir buffer." }, { "heading": "3 EXPERIMENTS", "text": "We perform empirical experiments to validate the following set of questions: (1) What are the effects of each proposed component towards training EBMs? (2) Are our trained EBMs able to perform well on downstream applications of EBMs (generation, compositionality, out-of-distribution detection)? We provide ablations of each of our proposed components in the appendix." }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "We investigate the efficacy of our proposed approach. Models are trained using the Adam Optimizer (Kingma & Ba, 2015), on a single 32GB Volta GPU for CIFAR-10 for 1 day, and for 3 days on 8 32GB Volta GPUs for CelebaHQ, and LSUN datasets. We provide detailed training configuration details in the appendix.\nOur improvements are largely built on top of the EBMs training framework proposed in (Du & Mordatch, 2019). We use a buffer size of 10000, with a resampling rate of 5% with L2 regularization on output energies. Our approach is significantly more stable than IGEBM, allowing us to remove aspects of regularization in (Du & Mordatch, 2019). We remove the clipping of gradients in Langevin sampling as well as spectral normalization on the weights of the network. In addition, we add self-attention blocks and layer normalization blocks in residual networks of our trained models. In multi-scale architectures, we utilize 3 different resolutions of an image, the original image resolution, half the image resolution and a quarter the image resolution. We report detailed architectures in the appendix. When evaluating models, we utilize the EMA model with EMA weight of 0.999." }, { "heading": "3.2 IMAGE GENERATION", "text": "We evaluate our approach on CIFAR-10, LSUN bedroom (Yu et al., 2015), and CelebA-HQ (Karras et al., 2017) datasets and analyze our characteristics of our proposed framework. 
Additional analysis and ablations can be found in the appendix of the paper.\nImage Quality. We evaluate our approach on unconditional generation in Table 1. On CIFAR-10 we find that approach, while not being state-of-the-art, significantly outperforms past EBM approaches that based off implicit sampling from an energy landscape (with approximately the same number of parameters), and has performance in the range of recent GAN models on high resolution images. We further present example qualitative images from CelebA-HQ in Figure 11b and present qualitative images on other datasets in the appendix of the paper. We note that our reported SNGAN performance on CelebA-HQ and LSUN Bedroom use default hyperparameters from ImageNet models. Gaps in performance with our model are likely smaller with better dataset specific hyper-parameters.\nEffect of Data Augmentation. We evaluate the effect of data augmentation on sampling in EBMs. In Figure 5 we show that by combining Langevin sampling with data augmentation transitions, we are able to enable chains to mix across different images, whereas prior works have shown Langevin converging to fixed images. In Figure 6 we further show that given a fix random noise initialization, data augmentation transitions enable to reach a diverse number of different samples, while sampling without data augmentation transitions leads all chains to converge to the same face.\nMode Convergence. We further investigate high likelihood modes of our model. In Figure 8, we compare very low energy samples (obtained after running gradient descent 1000 steps on an energy function) for both our model with data augmentation and KL loss and a model without either term. Due to improved mode exploration, we find that low temperature samples under our model with data augmentation/KL loss reflect typical high likelihood ”modes” in the training dataset, while our baseline models converges to odd shapes, also noted in (Nijkamp et al., 2019a). In Figure 7, we quantitatively measure Inception scores as we run steps of gradient descent with or without data augmentation and KL loss. Our\nInception score decreases much more slowly, with some degree of degradation expected since low temperature samples have less diversity.\nStability/KL Loss EBMs are known to difficult to train, and to be sensitive to both the exact architecture and to various hyper-parameters. We found that the addition of a KL term into our training objective significantly improved the stability of EBM training, by encouraging the sampling distribution to match the model distribution. In Figure 9, we measure the energy difference between real and generated images over the course training when adding normalization and self-attention layers to models. We find that with Lkl, the energy difference between both is kept at 0, which indicates stable training. in contrast, without Lkl all models, with the exception of a model with spectral normalization, diverge to a negative energy different of −1, indicating training collapse. Furthermore, the use of spectral normalization by itself, albeit stable, precludes the addition of other modern network components such as layer normalization. The addition of the KL term itself is not too expensive, simply requiring an additional nearest neighbor computation during training, which can be relatively insignificant cost compared to the number of negative sampling steps used during training. 
With a intermediate number of negative sampling steps (60 steps) during training, adding the KL term roughly twice as slow as normal training. This difference is decreased with larger number of sampling steps. Please see the appendix for additional analysis of gradient magnitudes of Lkl and Lcd.\nAblations. We ablate each portion of our proposed approach in Table 2. We find that the KL loss is crucial to the stability of training an EBM, and find that additions such as a multi-scale architecture are not stable without the presence of a KL loss." }, { "heading": "3.3 COMPOSITIONALITY", "text": "Energy Based Models (EBMs) have the ability to compose with other models at generation time (Hinton, 1999; Haarnoja et al., 2017; Du et al., 2020a). We investigate to what extent EBMs trained under our new proposed framework can also exhibit compositionality. See (Du et al., 2020a) for a discussion of various compositional operators and applications in EBMs. In particular, we train independent EBMs E(x|c1), E(x|c2), E(x|c3), that learn conditional generative distribution of concept factors c such as facial expression or object position. We test to see if we can compose independent energy functions together to generate images with each concept factor simultaneously. We consider compositions the CelebA-HQ dataset, where we train independent energy functions of face attributes of age, gender, smiling, and wavy hair and a high resolution rendered of different objects rendered at different locations, where we train an energy function on size, position, rotation, and identity of the object.\nQualitative Results. We present qualitative results of compositions of energy functions in Figure 10. In both composition settings, our approach is able to successfully generate images with each of conditioned factors, while also being globally coherent. The left image shows that as we condition on factors of young, female, smiling, and wavy hair, images generation begins exhibiting each required feature. The right image similarly shows that as we condition on factors of size, type, position, and rotation, image generations begin to exhibit each conditioned attribute. We note that figures are visually consistent in terms of lighting, shadows and reflections. We note that generations of thee combination of different factors are only specified at generation time, with models being trained independently. Our results indicate that our framework for training EBMs is a promising direction for high resolution compositional visual generation. We further provide visualization of best comparative compositional model from (Vedantam et al., 2018) in the appendix and find that our approach significantly outperforms it." }, { "heading": "3.4 OUT OF DISTRIBUTION ROBUSTNESS", "text": "Energy Based Models (EBMs) have also been shown to exhibit robustness to both out-of-distribution and adversarial samples (Du & Mordatch, 2019; Grathwohl et al., 2019). We evaluate out-ofdistribution detection of our trained energy through log-likelihood using the evaluation metrics proposed in Hendrycks & Gimpel (2016). We similarly evaluate out-of-distribution detection of an unconditional CIFAR-10 model.\nResults. We present out-of-distribution results in Table 3, comparing with both other likelihood models and EBMs and using log-likelihood to detect outliers. We find that on the datasets we evaluate, our approach significantly outperforms other baselines, with the exception of CIFAR-10 interpolations. 
We note the JEM (Grathwohl et al., 2019) further requires supervised labels to train the energy function, which has to shown to improve out-of-distribution performance. We posit that by more efficiently exploring modes of the energy distribution at training time, we are able to reduce the spurious modes of the energy function and thus improve out-of-distribution performance." }, { "heading": "4 RELATED WORK", "text": "Our work is related to a large, growing body of work on different approaches for training EBMs. Our approach is based on contrastive divergence (Hinton, 2002), where an energy function is trained to contrast negative samples drawn from a model distribution and from real data. In recent years, such approaches have been applied to the image domain (Xie et al., 2016; Gao et al., 2018; Du & Mordatch, 2019; Nijkamp et al., 2019b; Grathwohl et al., 2019). (Gao et al., 2018) also proposes a multi-scale approach towards generating images from EBMs, but different from our work, uses each sub-scale EBM to initialize the generation of the next EBM. Our work builds on existing works towards contrastive divergence based training of EBMs, and presents improvements in generation and stability.\nA difficulty with contrastive divergence training is the difficulty of negative sample generation. To sidestep this issue, a separate line of work utilizes an auxiliary network to amortize the negative portions of the sampling procedure (Kim & Bengio, 2016; Kumar et al., 2019; Han et al., 2019; Xie et al., 2018a; Song & Ou, 2018). One line of work (Kim & Bengio, 2016; Kumar et al., 2019; Song & Ou, 2018), utilizes a separate generator network for negative image sample generations. In contrast, (Xie et al., 2018a), utilizes a generator to warm start generations for negative samples and (Han et al., 2019) minimizes a divergence triangle between three models. While such approaches enable better qualitative generation, they also lose some of the flexibility of the EBM formulation. For example, separate energy models can no longer be composed together for generation.\nIn addition, other approaches towards training EBMs seek instead to investigate separate objectives to train the EBM. One such approach is score matching, where the gradients of an energy function are trained to match the gradients of real data (Hyvärinen, 2005; Song & Ermon, 2019), with a related denoising (Sohl-Dickstein et al., 2015; Saremi et al., 2018; Ho et al., 2020) approach. Additional objectives include noise contrastive estimation (Gao et al., 2020) and learned Steins discrepancy (Grathwohl et al., 2020).\nMost prior work in contrastive divergence has ignored the KL term (Hinton, 1999; Salakhutdinov & Hinton, 2009). A notable exception is (Ruiz & Titsias, 2019), which obtains a similar KL divergence term to ours. Ruiz & Titsias (2019) use a high variance REINFORCE estimator to estimate the gradient of the KL term, while our approach relies on auto-differentiation and nearest neighbor entropy estimators. Differentiation through model generation procedures has previously been explored in other models (Finn & Levine, 2017; Metz et al., 2016). Other related entropy estimators include those based on Stein’s identity (Liu et al., 2017) and MINE (Belghazi et al., 2018). In contrast to these approaches, our entropy estimator relies only on nearest neighbor calculation, and does not require the training of an independent neural network." 
}, { "heading": "5 CONCLUSION", "text": "We propose a simple and general framework for improving generation and ease of training with energy based models. We show that the framework enables high resolution compositional image generation and out-of-distribution robustness. In the future, we are interested in further computational scaling of our framework, and applications to domains such as text and reasoning." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 MODEL ARCHITECTURES", "text": "We list model architectures used in our experiments in Figure 11. When training multi-scale energy functions, our final output energy function is the sum of energy functions applied to the full resolution image, half resolution image, and quarter resolution image. We use the architecture reported in Figure 11 for the full resolution image. The half-resolution models shares the architecture listed in Figure 11, but with all layers before and including the first down-sampled residual block removed. Similarily, the quarter resolution models share the architectures listed, but with all layers before two down-sampled residual blocks removed.\n3x3 conv2d, 64\nResBlock 64\nResBlock Down 64\nResBlock 64\nResBlock Down 64\nSelf Attention 64\nResBlock 128\nResBlock Down 128\nResBlock 256\nResBlock Down 256\nGlobal Mean Pooling\nDense→ 1\n(a) The model architecture used for CIFAR-10 experiments.\n3x3 conv2d, 64\nResBlock Down 64\nResBlock Down 128\nResBlock Down 128\nResBlock 256\nResBlock Down 256\nSelf Attention 512\nResBlock 512\nResBlock Down 512\nGlobal mean Pooling\nDense→ 1\n(b) The model architecture used for CelebA/LSUN room experiments.\nFigure 11: Architecture of models on different datasets." }, { "heading": "A.2 EXPERIMENT CONFIGURATIONS FOR DIFFERENT DATASETS", "text": "CIFAR-10 For CIFAR-10, we use 40 steps of Langevin sampling to generate a negative sample. The Langevin sampling step size is set to be 100, with Gaussian noise of magnitude 0.001 at each iteration. The data augmentation transform consists of color augmentation of strength 1.0 from (Chen et al., 2020), as a random horizontal crop, and a image resize between 0.3 and 1.0 and a Gaussian blur of 5 pixels.\nCelebA/LSUN Bed For CelebA and LSUN bed datasets, we use 40 steps of Langevin sampling to generate negative samples. The Langevin sampling step size is set to be 1000, with Gaussian noise of magnitude 0.001 applied at each iteration. The data augmentation transform consists of color augmentation of strength 0.5 from (Chen et al., 2020), as a random horizontal crop, and a image resize between 0.3 and 1.0 and a Gaussian blur of 11 pixels." }, { "heading": "A.3 COMPARISON OF CD/KL GRADIENT MAGNITUDES", "text": "We plot the overall gradient magnitudes of the contrastive divergence and KL loss terms during training of an EBM in Figure 12. We find that relative magnitude of both training remains constant across training, and that the gradient of the KL objective is non-negligible." }, { "heading": "A.4 ANALYSIS OF TRUNCATED LANGEVIN BACKPROPAGATION", "text": "To test the effect of truncating backpropogation through the KL loss to only one sampling step of Langevin sampling, we train two seperate models on MNIST, one with backpropogation through\nall Langevin steps, and one with backpropogation through only the last Langevin step. We obtain FIDs of 90.54 with backpropogation through only 1 step of Langevin sampling and FIDs of 94.85 with backpropogation through all steps of Langevin sampling. 
We present illustrations of samples generated with one step in Figure 13 and with all steps in Figure 14." }, { "heading": "A.5 ANALYSIS OF EFFECT OF KL LOSS ON MODE EXPLORATION", "text": "The KL loss adds an additional term to EBM training that encourages EBM training updates to maintain good mode coverage while optimizing the usual contrastive divergence objective. Thus the KL loss serves as a regularizer to prevent EBM sampling from collapsing. In the absence of the KL loss, EBM sampling always eventually collapses and generates samples in Figure 15" }, { "heading": "A.6 COMPARISON TO OTHER COMPOSITIONAL GENERATIVE MODELS", "text": "To our knowledge, there are relatively few other models that can compositionally combine, with the approach of JVAE (Vedantam et al., 2018) being the closest to our work. We provide comparisons in Figure 16. Our approach is significantly less blurry than JVAE." }, { "heading": "A.7 ADDITIONAL QUALITATIVE IMAGES", "text": "We present randomly generated qualitative images on the LSUN dataset in Figure 17 and the CIFAR10 dataset in Figure 18. In both setting, we find that unconditional images appear mostly globally coherent." } ]
2,020
null
SP:6d6e083899bc17a2733aa16efd259ad4ed2076d6
[ "This paper falls into a class of continual learning methods which accommodate for new tasks by expanding the network architecture, while freezing existing weights. This freezing trivially resolves forgetting. The (hard) problem of determining how to expand the network is tackled with reinforcement learning, largely building upon a previous approach (reinforced continual learning, RCL). Apart from some RL-related implementation choices that differ here, the main difference to RCL is that the present method learns a mask which determines which neurons to reuse, while RCL only uses RL to determine how many neurons to add. Experiments demonstrate that this allows reducing network size while significantly improving accuracy on Split CIFAR-100. The runtime is, however, increased here." ]
Continual learning with neural networks is an important learning framework in AI that aims to learn a sequence of tasks well. However, it is often confronted with three challenges: (1) overcome the catastrophic forgetting problem, (2) adapt the current network to new tasks, and meanwhile (3) control its model complexity. To reach these goals, we propose a novel approach named as Continual Learning with Efficient Architecture Search, or CLEAS in short. CLEAS works closely with neural architecture search (NAS) which leverages reinforcement learning techniques to search for the best neural architecture that fits a new task. In particular, we design a neuron-level NAS controller that decides which old neurons from previous tasks should be reused (knowledge transfer), and which new neurons should be added (to learn new knowledge). Such a fine-grained controller allows finding a very concise architecture that can fit each new task well. Meanwhile, since we do not alter the weights of the reused neurons, we perfectly memorize the knowledge learned from previous tasks. We evaluate CLEAS on numerous sequential classification tasks, and the results demonstrate that CLEAS outperforms other state-of-the-art alternative methods, achieving higher classification accuracy while using simpler neural architectures.
[]
[ { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Rahaf Aljundi", "Klaas Kelchtermans", "Tinne Tuytelaars" ], "title": "Task-free continual learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In Thirty-Second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Tom Diethe", "Tom Borchert", "Eno Thereska", "Borja de Balle Pigem", "Neil Lawrence" ], "title": "Continual learning in practice", "venue": "arXiv preprint arXiv:1903.05202,", "year": 2019 }, { "authors": [ "Khurram Javed", "Martha White" ], "title": "Meta-learning representations for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Xialei Liu", "Marc Masana", "Luis Herranz", "Joost Van de Weijer", "Antonio M Lopez", "Andrew D Bagdanov" ], "title": "Rotate your networks: Better weight consolidation and less catastrophic forgetting", "venue": "24th International Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Vladimir Nekrasov", "Hao Chen", "Chunhua Shen", "Ian Reid" ], "title": "Architecture search of dynamic cells for semantic video segmentation", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Cuong V Nguyen", "Yingzhen Li", "Thang D Bui", "Richard E Turner" ], "title": "Variational continual learning", "venue": "arXiv preprint arXiv:1710.10628,", "year": 2017 }, { "authors": [ "German I Parisi", "Ronald Kemker", "Jose L Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameters sharing", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ilija Radosavovic", "Justin Johnson", "Saining Xie", "Wan-Yen Lo", "Piotr Dollár" ], "title": "On network design spaces for visual recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the AAAI conference on 
artificial intelligence,", "year": 2019 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy Lillicrap", "Gregory Wayne" ], "title": "Experience replay for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sebastian Thrun" ], "title": "A lifelong learning perspective for mobile robot control", "venue": "In Intelligent Robots and Systems,", "year": 1995 }, { "authors": [ "Yujing Wang", "Yaming Yang", "Yiren Chen", "Jing Bai", "Ce Zhang", "Guinan Su", "Xiaoyu Kou", "Yunhai Tong", "Mao Yang", "Lidong Zhou" ], "title": "Textnas: A neural architecture search space tailored for text representation", "venue": "arXiv preprint arXiv:1912.10729,", "year": 2019 }, { "authors": [ "Ju Xu", "Zhanxing Zhu" ], "title": "Reinforced continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In International Conference on Learning Representation,", "year": 2018 }, { "authors": [ "Jie Zhang", "Junting Zhang", "Shalini Ghosh", "Dawei Li", "Jingwen Zhu", "Heming Zhang", "Yalin Wang" ], "title": "Regularize, expand and compress: Nonexpansive continual learning", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Continual learning, or lifelong learning, refers to the ability of continually learning new tasks and also performing well on learned tasks. It has attracted enormous attention in AI as it mimics a human learning process - constantly acquiring and accumulating knowledge throughout their lifetime (Parisi et al., 2019). Continual learning often works with deep neural networks (Javed & White, 2019; Nguyen et al., 2017; Xu & Zhu, 2018) as the flexibility in a network design can effectively allow knowledge transfer and knowledge acquisition. However, continual learning with neural networks usually faces three challenges. The first one is to overcome the so-called catastrophic forgetting problem (Kirkpatrick et al., 2017), which states that the network may forget what has been learned on previous tasks. The second one is to effectively adapt the current network parameters or architecture to fit a new task, and the last one is to control the network size so as not to generate an overly complex network.\nIn continual learning, there are two main categories of strategies that attempt to solve the aforementioned challenges. The first category is to train all tasks within a network with fixed capacity. For example, (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2018) replay some old samples with the new task samples and then learn a new network from the combined training set. The drawback is that they typically require a memory system that stores past data. (Kirkpatrick et al., 2017; Liu et al., 2018) employ some regularization terms to prevent the re-optimized parameters from deviating too much from the previous ones. Approaches using fixed network architecture, however, cannot avoid a fundamental dilemma - they must either choose to retain good model performances on learned tasks, leaving little room for learning new tasks, or compromise the learned model performances to allow learning new tasks better.\nTo overcome such a dilemma, the second category is to expand the neural networks dynamically (Rusu et al., 2016; Yoon et al., 2018; Xu & Zhu, 2018). They typically fix the parameters of the old neurons (partially or fully) in order to eliminate the forgetting problem, and also permit adding new neurons to adapt to the learning of a new task. In general, expandable networks can achieve better model performances on all tasks than the non-expandable ones. However, a new issue appears: expandable\nnetworks can gradually become overly large or complex, which may break the limits of the available computing resources and/or lead to over-fitting.\nIn this paper, we aim to solve the continual learning problems by proposing a new approach that only requires minimal expansion of a network so as to achieve high model performances on both learned tasks and the new task. At the heart of our approach we leverage Neural Architecture Search (NAS) to find a very concise architecture to fit each new task. Most notably, we design NAS to provide a neuron-level control. That is, NAS selects two types of individual neurons to compose a new architecture: (1) a subset of the previous neurons that are most useful to modeling the new task; and (2) a minimal number of new neurons that should be added. Reusing part of the previous neurons allows efficient knowledge transfer; and adding new neurons provides additional room for learning new knowledge. Our approach is named as Continual Learning with Efficient Architecture Search, or CLEAS in short. 
Below are the main features and contributions of CLEAS.\n• CLEAS dynamically expands the network to adapt to the learning of new tasks and uses NAS to determine the new network architecture; • CLEAS achieves zero forgetting of the learned knowledge by keeping the parameters of the\nprevious architecture unchanged; • NAS used in CLEAS is able to provide a neuron-level control which expands the network\nminimally. This leads to an effective control of network complexity; • The RNN-based controller behind CLEAS is using an entire network configuration (with\nall neurons) as a state. This state definition deviates from the current practice in related problems that would define a state as an observation of a single neuron. Our state definition leads to improvements of 0.31%, 0.29% and 0.75% on three benchmark datasets. • If the network is a convolutional network (CNN), CLEAS can even decide the best filter size\nthat should be used in modeling the new task. The optimized filter size can further improve the model performance.\nWe start the rest of the paper by first reviewing the related work in Section 2. Then we detail our CLEAS design in Section 3. Experimental evaluations and the results are presented in Section 4." }, { "heading": "2 RELATED WORK", "text": "Continual Learning Continual learning is often considered as an online learning paradigm where new skills or knowledge are constantly acquired and accumulated. Recently, there are remarkable advances made in many applications based on continual learning: sequential task processing (Thrun, 1995), streaming data processing (Aljundi et al., 2019), self-management of resources (Parisi et al., 2019; Diethe et al., 2019), etc. A primary obstacle in continual learning, however, is the catastrophic forgetting problem and many previous works have attempted to alleviate it. We divide them into two categories depending on whether their networks are expandable.\nThe first category uses a large network with fixed capacity. These methods try to retain the learned knowledge by either replaying old samples (Rebuffi et al., 2017; Rolnick et al., 2019; Robins, 1995) or enforcing the learning with regularization terms (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017; Liu et al., 2018; Zhang et al., 2020). Sample replaying typically requires a memory system which stores old data. When learning a new task, part of the old samples are selected and added to the training data. As for regularized learning, a representative approach is Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) which uses the Fisher information matrix to regularize the optimization parameters so that the important weights for previous tasks are not altered too much. Other methods like (Lopez-Paz & Ranzato, 2017; Liu et al., 2018; Zhang et al., 2020) also address the optimization direction of weights to prevent the network from forgetting the previously learned knowledge. The major limitation of using fixed networks is that it cannot properly balance the learned tasks and new tasks, resulting in either forgetting old knowledge or acquiring limited new knowledge.\nTo address the above issue, another stream of works propose to dynamically expand the network, providing more room for obtaining new knowledge. For example, Progressive Neural Network (PGN) (Rusu et al., 2016) allocates a fixed number of neurons and layers to the current model for a new task. 
Apparently, PGN may end up generating an overly complex network that has high redundancy and it can easily crash the underlying computing system that has only limited resources.\nAnother approach DEN (Dynamically Expandable Network) (Yoon et al., 2018) partially mitigates the issue of PGN by using group sparsity regularization techniques. It strategically selects some old neurons to retrain, and adds new neurons only when necessary. However, DEN can have the forgetting problem due to the retraining of old neurons. Another drawback is that DEN has very sensitive hyperparameters that need sophisticated tuning. Both of these algorithms only grow the network and do not have a neuron level control which is a significant departure from our work. Most recently, a novel method RCL (Reinforced Continual Learning) (Xu & Zhu, 2018) also employs NAS to expand the network and it can further decrease model complexity. The main difference between RCL and CLEAS is that RCL blindly reuses all the neurons from all of the previous tasks and only uses NAS to decide how many new neurons should be added. However, reusing all the old neurons has two problems. First, it creates a lot of redundancy in the new network and some old neurons may even be misleading and adversarial; second, excessively many old neurons reused in the new network can dominate its architecture, which may significantly limit the learning ability of the new network. Therefore, RCL does not really optimize the network architecture, thus it is unable to generate an efficient and effective network for learning a new task. By comparison, CLEAS designs a fine-grained NAS which provides neuron-level control. It optimizes every new architecture by determining whether to reuse each old neuron and how many new neurons should be added to each layer.\nNeural Architecture Search NAS is another promising research topic in the AI community. It employs reinforcement learning techniques to automatically search for a desired network architecture for modeling a specific task. For instance, Cai et al. (Cai et al., 2018) propose EAS to discover a superb architecture with a reinforced meta-controller that can grow the depth or width of a network; Zoph et al. (Zoph & Le, 2016) propose an RNN-based controller to generate the description of a network, and the controller is reinforced by the predicting accuracy of a candidate architecture. Pham et al. (Pham et al., 2018) propose an extension of NAS, namely ENAS, to speed up training processing by forcing all child networks to share weights. Apart from algorithms, NAS also has many valuable applications such as image classification (Real et al., 2019; Radosavovic et al., 2019), video segmentation (Nekrasov et al., 2020), text representation (Wang et al., 2019) and etc. Hence, NAS is a demonstrated powerful tool and it is especially useful in continual learning scenarios when one needs to determine what is a good architecture for the new task." }, { "heading": "3 METHODOLOGY", "text": "There are two components in the CLEAS framework: one is the task network that continually learns a sequence of tasks; and the other is controller network that dynamically expands the task network. 
The two components interact with each other under the reinforcement learning context - the task network sends the controller a reward signal which reflects the performance of the current architecture design; the controller updates its policy according to the reward, and then generates a new architecture for the task network to test its performance. Such interactions repeat until a good architecture is found. Figure 1 illustrates the overall structure of CLEAS. On the left is the task network, depicting an optimized architecture for task t− 1 (it is using gray and pink neurons) and a candidate architecture for task t. They share the same input neurons but use their own output neurons. Red circles are newly added neurons and pink ones are reused neurons from task t− 1 (or any previous task). To train the network, only the red weights that connect new-old or new-new neurons are optimized. On the right is the controller network which implements an RNN. It provides a neuron-level control to generate a description of the task network design. Each blue square is an RNN cell that decides to use or drop a certain neuron in the task network." }, { "heading": "3.1 NEURAL ARCHITECTURE SEARCH MODEL", "text": "Task Network The task network can be any neural network with expandable ability, for example, a fully connected network or a CNN, etc. We use the task network to model a sequence of tasks. Formally, suppose there are T tasks and each has a training set Dt = {(xi, yi)}Oti=1, a validation set Vt = {(xi, yi)}Mti=1 and a test set Tt = {(xi, yi)}Kti=1, for t = 1, 2, . . . , T . We denote by At the network architecture that is generated to model task t. Moreover, we denote At = (Nt,Wt) where Nt are the neurons or filters used in the network and Wt are the corresponding weights. We train the first task with a basic network A1 by solving the standard supervised learning problem\nW 1 = arg min W1\nL1(W1;D1), (1)\nwhere L1 is the loss function for the first task. For the optimization procedure, we use stochastic gradient descent (SGD) with a constant learning rate. The network is trained till the required number of epochs or convergence is reached.\nWhen task t (t > 1) arrives, for every task k < t we already know its optimized architecture Ak and parameters W k. Now we use the controller to decide a network architecture for task t. Consider a candidate network At = (Nt,Wt). There are two types of neurons in Nt: Noldt are used neurons from previous tasks and Nnewt = Nt \\ Noldt are the new neurons added. Based on this partition, the weights Wt can be also divided into two disjoint sets: W oldt are the weights that connect only used neurons, and Wnewt = Wt \\W oldt are the new weights that connect old-new or new-new neurons. Formally, Noldt = {n ∈ Nt | there exists k < t such that n ∈ Nk} and W oldt = {w ∈ Wt | there exists n1, n2 ∈ Noldt such that w connects n1, n2}. The training procedure for the new task is to only optimize the new weights Wnewt and leave W old t unchanged, equal to their previously optimized values W old\nt . Therefore, the optimization problem for the new task reads\nW new\nt = arg min Wnewt Lt(Wt|W oldt =W oldt ;Dt). (2)\nThen we set W t = (W old t ,W new\nt ). Finally, this candidate network At with optimized weights W t is tested on the validation set Vt. The corresponding accuracy and network complexity is used to compute a reward R (described in Section 3.2). The controller updates its policy based on R and generates a new candidate network A′t to repeat the above procedure. 
After enough such interactions, the candidate architecture that achieves the maximal reward is the optimal one for task t, i.e. At = (Nt,W t), where Nt finally denotes the neurons of the optimal architecture.\nController Network The goal of the controller is to provide a neuron-level control that can decide which old neurons from previous tasks can be reused, and how many new neurons should be added. In our actual implementation, we assume there is a large hyper-network for the controller to search for a task network. Suppose the hyper-network has l layers and each layer i has a maximum of ui neurons. Each neuron has two actions, either “drop” or “use” (more actions for CNN, to be described later). Thus, the search space for the controller would be 2n where n = ∑l i=1 ui is the total number of neurons. Apparently, it is infeasible to enumerate all the action combinations and determine the best one. To deal with this issue, we treat the action sequence as a fixed-length string a1:n = a1, a2, . . . , an that describes a task network. We design the controller as an LSTM network where each cell controls one ai in the hyper-network. Formally, we denote by π(a1:n|s1:n; θc) the policy function of the controller network as\nπ(a1:n|s1:n; θc) = P (a1:n|s1:n; θc) = n∏ i=1 P (ai|s1:i; θc) . (3)\nThe state s1:n is a sequence that represents one state; the output is the probability of a task network described by a1:n; and θc denotes the parameters of the controller network. At this point we note that our model is a departure from standard models where states are considered individual sj and an episode is comprised of s1:n. In our case we define s1:n as one state and episodes are created by starting with different initial states (described below).\nRecall that in Fig.1, the two components in CLEAS work with each other iteratively and there are H · U such iterations whereH is the number of episodes created and U the length of each episode.\nConsider an episode e = (s11:n, ā 1 1:n, R 1, s21:n, ā 2 1:n, R 2, . . . , sU1:n, ā U 1:n, R U , sU+11:n ). The initial state s11:n is either generated randomly or copied from the terminal state s U+1 1:n of the previous episode. The controller starts with some initial θc. For any u = 1, 2, . . . ,U , the controller generates the most probable task network specified by āu1:n from s u 1:n by following LSTM. To this end, we use the recursion auj = f(s u j , h u j−1) where h u j−1 is the hidden vector and f standard LSTM equations to generate au1:n from s u 1:n. Let us point out that our RNN application a u j = f(s u j , h u j−1) differs from the standard practice that uses auj = f(a u j−1, h u j−1). Action ā u 1:n is obtained from a u 1:n by selecting the maximum probability value for each j, 1 ≤ j ≤ n. The task trains this task network, evaluates it on the validation set and returns reward Ru. We then construct su+11:n from the previous neuron action āuj together with the layer index as b u+1 j for each 1 ≤ j ≤ n. More concretely, su+1j = āuj ⊕ buj where āuj , b u j have been one-hot encoded, and ⊕ is the concatenation operator. Finally, a new network architecture āu+11:n is generated from s u+1 1:n . At the end of each episode, the controller updates its parameter θc by a policy gradient algorithm. After all H · U total iterations, the task network that achieves the maximum reward is used for that task.\nThe choice for treating the state as s1:n and not sj has the following two motivations. In standard NAS type models after updating sj the network is retrained. 
This is intractable in our case as the number of neurons n is typically large. For this reason we want to train only once per s1:n. The second reason is related and stems from the fact that the reward is given only at the level of s1:n. For this reason it makes sense to have s1:n as the state. This selection also leads to computational improvements as is attested later in Section 4.\nCLEAS-C for CNN The design of CLEAS also works for CNN with fixed filter sizes where one filter corresponds to one neuron. However, we know that filter sizes in a CNN can have a huge impact on its classification accuracy. Therefore, we further improve CLEAS so that it can decide the best filter sizes for each task. In particular, we allow a new task to increase the filter size by one upon the previous task. For example, a filter size 3× 3 used in some convolutional layer in task t− 1 can become 4× 4 in the same layer in task t. Note that for one task all the filters in the same layer must use the same filter size, but different layers can use different filter sizes.\nWe name the new framework as CLEAS-C. There are two major modifications to CLEAS-C. First, the output actions in the controller are now encoded by 4 bits and their meanings are “only use,”“use & extend,”“only drop” and “drop & extend” (see Fig. 2). Note that the extend decision is made at the neuron layer, but there has to be only one decision at the layer level. To this end, we apply simple majority voting of all neurons at a layer to get the layer level decision. The other modification regards the training procedure of the task network. The only different case we should deal with is how to optimize a filter (e.g. 4× 4) that is extended from a previous smaller filter (e.g. 3× 3). Our solution is to preserve the optimized parameters that are associated with the original smaller filter (the 9 weights) and to only optimize the additional weights (the 16− 9 = 7 weights). The preserved weights are placed in the center of the larger filter, and the additional weights are initialized as the averages of their surrounding neighbor weights." }, { "heading": "3.2 TRAINING WITH REINFORCE", "text": "Lastly, we present the training procedure for the controller network. Note that each task t has an independent training process so we drop subscript t here. Within an episode, each action string au1:n represents a task architecture and after training gets a validation accuracyAu. In addition to accuracy,\nwe also penalize the expansion of the task network in the reward function, leading to the final reward\nRu = R(au1:n) = A(au1:n)− αC(au1:n) (4) where C is the number of newly added neurons, and α is a trade-off hyperparameter. With such episodes we train J(θc) = Ea1:n∼p(·|s1:n;θc)[R] (5) by using REINFORCE. We use an exponential moving average of the previous architecture accuracies as the baseline.\nWe summarize the key steps of CLEAS in Algorithm 1 whereH is the number of iterations, U is the length of episodes, and p is the exploration probability. We point out that we do not strictly follow the usual -greedy strategy; an exploration step consists of starting an epoch from a completely random state as opposed to perturbing an existing action.\nAlgorithm 1: CLEAS. Input: A sequence of tasks with training sets {D1,D2, ...,DT }, validation sets {V1,V2, ...,VT } Output: Optimized architecture and weights for each task: At = (Nt,W t) for t = 1, 2, . . . , T for t = 1, 2, . . . 
, T do\nif t = 1 then Train the initial network A1 on D1 with the weights optimized as W 1; else Generate initial controller parameters θc; for h = 1, 2, . . . ,H do\n/* A new episode */ w ∼ Bernoulli(p); if w = 1 or h = 1 then\n/* Exploration */ Generate a random state string s11:n but keep layer encodings fixed;\nelse Set initial state string s11:n = s U+1 1:n , i.e. the last state of previous episode (h− 1); for u = 1, 2, . . . ,U do Generate the most probable action string āu1:n from s u 1:n by the controller;\nConfigure the task network as Au based on āu1:n and train weights W u on Dt; Evaluate Au with trained W u\non Vt and compute reward Ru; Construct su+11:n from ā u 1:n and b u 1:n where b u 1:n is the layer encoding;\nUpdate θc by REINFORCE using (s11:n, ā 1 1:n, R 1, . . . , sU1:n, ā U 1:n, R U , sU+11:n ); Store Ah = (N ū,W ū ) where ū = arg maxuRu and R̄h = maxuRu;\nStore At = Ah̄ where h̄ = arg maxh R̄h;" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate CLEAS and other state-of-the-art continual learning methods on MNIST and CIFAR-100 datasets. The key results delivered are model accuracies, network complexity and training time. All methods are implemented in Tensorflow and ran on a GTX1080Ti GPU unit." }, { "heading": "4.1 DATASETS AND BENCHMARK ALGORITHMS", "text": "We use three benchmark datasets as follows. Each dataset is divided into T = 10 separate tasks. MNIST-associated tasks are trained by fully-connected neural networks and CIFAR-100 tasks are trained by CNNs.\n(a) MNIST Permutation (Kirkpatrick et al., 2017): Ten variants of the MNIST data, where each task is transformed by a different (among tasks) and fixed (among images in the same task) permutation of pixels. (b) Rotated MNIST (Xu & Zhu, 2018): Another ten variants of MNIST, where each task is rotated by a different and fixed angle between 0 to 180 degree. (c) Incremental CIFAR-100 (Rebuffi et al., 2017): The original CIFAR-100 dataset contains 60,000 32×32 colored images that belong to 100 classes. We divide them into 10 tasks and each task contains 10 different classes and their data.\nWe select four other continual learning methods to compare. One method (MWC) uses a fixed network architecture while the other three use expandable networks.\n(1) MWC: An extension of EWC (Kirkpatrick et al., 2017). By assuming some extent of correlation between consecutive tasks it uses regularization terms to prevent large deviation of the network weights when re-optimized. (2) PGN: Expands the task network by adding a fixed number of neurons and layers (Rusu et al., 2016). (3) DEN: Dynamically decides the number of new neurons by performing selective retraining and network split (Yoon et al., 2018). (4) RCL: Uses NAS to decide the number of new neurons. It also completely eliminates the forgetting problem by holding the previous neurons and their weights unchanged (Xu & Zhu, 2018).\nFor the two MNIST datasets, we follow (Xu & Zhu, 2018) to use a three-layer fully-connected network. We start with 784-312-128-10 neurons with RELU activation for the first task. For CIFAR100, we develop a modified version of LeNet (LeCun et al., 1998) that has three convolutional layers and three fully-connected layers. We start with 16 filters in each layer with sizes of 3 × 3, 3 × 3 and 4× 4 and stride of 1 per layer. Besides, to fairly compare the network choice with (Xu & Zhu, 2018; Yoon et al., 2018), we set: ui = 1000 for MNIST and ui = 128 for CIFAR-100. We also use H = 200 and U = 1. The exploration probability p is set to be 30%. 
We select the RMSProp optimizer for REINFORCE and Adam for the training task.\nWe also implement a version with states corresponding to individual neurons where the controller is following auj = f(a u j−1, h u j−1). We configure this version under the same experimental settings as of CLEAS and test it on the three datasets. The results show that compared to CLEAS, this version exhibits an inferior performance of -0.31%, -0.29%, -0.75% in relative accuracy, on the three datasets, respectively. Details can be found in Appendix." }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "Figure 3: Average test accuracy across all tasks.\nMNIST permutation Rotated MNIST CIFAR-100 datasets\n0\n1\n2\n3\n4\n5\n6\npa ra m ete\nrs (× 10\n5 )\nMWC PGN DEN RCL CLEAS\nFigure 4: Average number of parameters.\nModel Accuracy We first report the averaged model accuracies among all tasks. Fig.3 shows the relative improvements of the network-expandable methods against EWC (numbers on the top are their absolute accuracies). We clearly observe that methods with expandability can achieve much better performance than MWC. Furthermore, we see that CLEAS outperforms other methods. The average relative accuracy improvement of CLEAS vs RCL (the state-of-the-art method and the second best performer) is 0.21%, 0.21% and 6.70%, respectively. There are two reasons: (1) we completely overcome the forgetting problem by not altering the old neurons/filters; (2) our neuron-level control can precisely pick useful old neurons as well as new neurons to better model each new task. Network Complexity Besides model performance, we also care about how complex the network is when used to model each task. We thus report the average number of model weights across all tasks in Fig. 4. First, no surprise to see that MWC consumes the least number of weights since its network is non-expandable. But this also limits its model performance. Second, among the other four methods that expand networks we observe CLEAS using the least number of weights. The average relative complexity improvement of CLEAS vs RCL is 29.9%, 19.0% and 51.0% reduction, respectively. It supports the fact that our NAS using neuron-level control can find a very efficient architecture to model every new task. Network Descriptions We visualize some examples of network architectures the controller generates. Fig. 5 illustrates four optimal configurations (tasks 2 to 5) of the CNN used to model CIFAR-100. Each task uses three convolutional layers and each square represents a filter. A white square means it is not used by the current task; a red square represents it was trained by some earlier task and now reused by the current task; a light yellow square means it was trained before but not reused; and a\ndark yellow square depicts a new filter added. According to the figure, we note that CLEAS tends to maintain a concise architecture all the time. As the task index increases it drops more old filters and only reuses a small portion of them that are useful for current task training, and it is adding fewer new neurons.\nCLEAS-C We also test CLEAS-C which decides the best filter sizes for CNNs. In the CIFAR-100 experiment, CLEAS uses fixed filter sizes 3× 3, 3× 3 and 4× 4 in its three convolutional layers. By comparison, CLEAS-C starts with the same sizes but allows each task to increase the sizes by one. The results show that after training the 10 tasks with CLEAS-C the final sizes become 4× 4, 8× 8, and 8× 8. It achieves a much higher accuracy of 67.4% than CLEAS (66.9%), i.e. 
a 0.7% improvement. It suggests that customized filter sizes can better promote model performances. On the other hand, complexity of CLEAS-C increases by 92.6%.\nNeuron Allocation We compare CLEAS to RCL on neuron reuse and neuron allocation. Fig. 6 visualizes the number of reused neurons (yellow and orange for RCL; pink and red for CLEAS) and new neurons (dark blue for both methods). There are two observations. On one hand, CLEAS successfully drops many old neurons that are redundant or useless, ending up maintaining a much simpler network. On the other hand, we observe that both of the methods recommend a similar number of new neurons for each task. Therefore, the superiority of CLEAS against RCL lies more on its selection of old neurons. RCL blindly reuses all previous neurons.\n1 2 3 4 5 6 7 8 9 10 task id\n0\n100\n200\n300\n400\n500\nnu m be\nr o f n\neu ro ns\nRCL-1st layer RCL-2st layer CLEAS-1st layer CLEAS-2st layer\nFigure 6: Neuron allocation for MNIST Permulation. MNIST permutation Rotated MNIST CIFAR-100 datasets\n0\n10\n20\n30\n40\n50\n60\n70\nru nt im\ne ( ×1\n02 s)\nMWC PGN DEN RCL CLEAS\nFigure 7: Training time\nTraining Time We also report the training time in Fig.7. It is as expected that CLEAS’ running time is on the high end due to the neuron-level control that results in using a much longer RNN for the controller. On the positive note, the increase in the running time is not substantial.\nHyperparameter Sensitivity We show the hyperparameter analysis in Appendix. The observation is that the hyperparameters used in CLEAS are not as sensitive as those of DEN and RCL. Under all hyperparameter settings CLEAS performs the best." }, { "heading": "5 CONCLUSIONS", "text": "We have proposed and developed a novel approach CLEAS to tackle continual learning problems. CLEAS is a network-expandable approach that uses NAS to dynamically determine the optimal architecture for each task. NAS is able to provide a neuron-level control that decides the reusing of old neurons and the number of new neurons needed. Such a fine-grained control can maintain a very concise network through all tasks. Also, we completely eliminate the catastrophic forgetting problem by never altering the old neurons and their trained weights. With demonstration by means of the experimental results, we note that CLEAS can indeed use simpler networks to achieve yet higher model performances compared to other alternatives. In the future, we plan to design a more efficient search strategy or architecture for the controller such that it can reduce the runtime while not compromising the model performance or network complexity." } ]
2,020
null
SP:047761908963bea6350f5d65a253c09f1a626093
[ "The authors contribute an approach to automatically distinguish between good and bad student assignment submissions by modeling the assignment submissions as MDPs. The authors hypothesize that satisfactory assignments modeled as MDPs will be more alike than they are to unsatisfactory assignments. Therefore this can potentially be used as part of some kind of future automated feedback system. The authors demonstrate this approach on an assignment for students to recreate a simple pong-like environment. They are able to achieve high accuracy over the most common submissions. " ]
Contemporary coding education often presents students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse-based games. While pedagogically compelling, grading such student programs requires dynamic user inputs, and they are therefore difficult to grade with unit tests. In this paper we formalize the challenge of grading interactive programs as a task of classifying Markov Decision Processes (MDPs). Each student’s program fully specifies an MDP in which the agent needs to operate and decide, under reasonable generalization, whether the dynamics and reward model of the input MDP conform to a set of latent MDPs. We demonstrate that by experiencing a handful of latent MDPs millions of times, we can use the agent to sample trajectories from the input MDP and use a classifier to determine membership. Our method drastically reduces the amount of data needed to train an automatic grading system for interactive code assignments and presents a challenge to state-of-the-art reinforcement learning generalization methods. Together with Code.org, we curated a dataset of 700k student submissions, one of the largest datasets of anonymized student submissions to a single assignment. This Code.org assignment had no previous solution for automatically providing correctness feedback to students, and as such this contribution could lead to a meaningful improvement in the educational experience.
[]
[ { "authors": [ "Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning", "venue": "arXiv preprint arXiv:1912.01588,", "year": 2019 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Chris Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Albert Corbett" ], "title": "Cognitive computer tutors: Solving the two-sigma problem", "venue": "In International Conference on User Modeling,", "year": 2001 }, { "authors": [ "Petri Ihantola", "Arto Vihavainen", "Alireza Ahadi", "Matthew Butler", "Jürgen Börstler", "Stephen H Edwards", "Essi Isohanni", "Ari Korhonen", "Andrew Petersen", "Kelly Rivers" ], "title": "Educational data mining and learning analytics in programming: Literature review and case studies", "venue": "In Proceedings of the 2015 ITiCSE on Working Group Reports,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Michael Laskin", "Kimin Lee", "Adam Stooke", "Lerrel Pinto", "Pieter Abbeel", "Aravind Srinivas" ], "title": "Reinforcement learning with augmented data", "venue": "arXiv preprint arXiv:2004.14990,", "year": 2020 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Jinwoo Shin", "Honglak Lee" ], "title": "Network randomization: A simple technique for generalization in deep reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Eleanor O’Rourke", "Christy Ballweber", "Zoran" ], "title": "Popoviı́. 
Hint systems may negatively impact performance in educational games", "venue": "In Proceedings of the first ACM conference on Learning@ scale conference,", "year": 2014 }, { "authors": [ "Thomas W Price", "Tiffany Barnes" ], "title": "Position paper: Block-based programming should offer intelligent support for learners", "venue": "IEEE Blocks and Beyond Workshop (B&B),", "year": 2017 }, { "authors": [ "Kelly Rivers", "Kenneth R Koedinger" ], "title": "Automatic generation of programming feedback: A data-driven approach", "venue": "In The First Workshop on AI-supported Education for Computer Science (AIEDCS 2013),", "year": 2013 }, { "authors": [ "Sherry Ruan", "Angelica Willis", "Qianyao Xu", "Glenn M Davis", "Liwei Jiang", "Emma Brunskill", "James A Landay" ], "title": "Bookbuddy: Turning digital materials into interactive foreign language lessons through a voice chatbot", "venue": "In Proceedings of the Sixth (2019) ACM Conference on Learning@ Scale,", "year": 2019 }, { "authors": [ "Sherry Ruan", "Jiayu He", "Rui Ying", "Jonathan Burkle", "Dunia Hakim", "Anna Wang", "Yufeng Yin", "Lily Zhou", "Qianyao Xu", "Abdallah AbuHashem" ], "title": "Supporting children’s math learning with feedback-augmented narrative technology", "venue": "In Proceedings of the Interaction Design and Children Conference,", "year": 2020 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Mike Wu", "Milan Mosse", "Noah Goodman", "Chris Piech" ], "title": "Zero shot learning for code education: Rubric sampling with deep learning inference", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lisa Yan", "Nick McKeown", "Chris Piech" ], "title": "The pyramidsnapshot challenge: Understanding student process from visual output of programs", "venue": "In Proceedings of the 50th ACM Technical Symposium on Computer Science Education,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The rise of online coding education platforms accelerates the trend to democratize high quality computer science education for millions of students each year. Corbett (2001) suggests that providing feedback to students can have an enormous impact on efficiently and effectively helping students learn. Unfortunately contemporary coding education has a clear limitation. Students are able to get automatic feedback only up until they start writing interactive programs. When a student authors a program that requires user interaction, e.g. where a user interacts with the student’s program using a mouse, or by clicking on button it becomes exceedingly difficult to grade automatically. Even for well defined challenges, if the user has any creative discretion, or the problem involves any randomness, the task of automatically assessing the work is daunting. Yet creating more open-ended assignments for students can be particularly motivating and engaging, and also help allow students to practice key skills that will be needed in commercial projects.\nGenerating feedback on interactive programs from humans is more laborious than it might seem. Though the most common student solution to an assignment may be submitted many thousands of times, even for introductory computer science education, the probability distribution of homework submissions follows the very heavy tailed Zipf distribution – the statistical distribution of natural language. This makes grading exceptionally hard for contemporary AI (Wu et al., 2019) as well as massive crowd sourced human efforts (Code.org, 2014). While code as text has proved difficult to grade, actually running student code is a promising path forward (Yan et al., 2019).\nWe formulate the grading via playing task as equivalent to classifying whether an ungraded student program – a new Markov Decision Process (MDP) – belongs to a latent class of correct Markov Decision Processes (representing correct programming solutions to the assignment). Given a discrete set of environments E = {en = (Sn,A, Rn, Pn) : n = 1, 2, 3, ...}, we can partition them into E?\nand E ′. E? is the set of latent MDPs. It includes a handful of reference programs that a teacher has implemented or graded. E ′ is the set of environments specified by student submitted programs. We are building a classifier that determines whether e, a new input decision process is behaviorally identical to the latent decision process.\nPrior work on providing feedback for code has focused on text-based syntactic analysis and automatically constructing solution space (Rivers & Koedinger, 2013; Ihantola et al., 2015). Such feedback orients around providing hints and unable to determine an interactive program’s correctness. Other intelligent tutoring systems focused on math or other skills that don’t require creating interactive programs (Ruan et al., 2019; 2020). Note that in principle one could analyze the raw code and seek to understand if the code produces a dynamics and reward model that is isomorphic to the dynamics and reward generated by a correct program. 
However, there are many different ways to express the same correct program, and classifying such text might require a large amount of data: as a first approach, we avoid this by instead deploying a policy and observing the resulting program behavior, thereby generating execution traces of the student’s implicitly specified MDP that can be used for classification.\nMain contributions in this paper:\n• We introduce the reinforcement learning challenge of Play to Grade.\n• We propose a baseline algorithm where an agent learns to play a game and uses features such as total reward and anticipated reward to determine correctness.\n• Our classifier obtains 93.1% accuracy on the 8,359 most frequent programs that cover 50% of the overall submissions and achieves 89.0% accuracy on programs that are submitted fewer than 5 times. We gained a 14-19% absolute improvement over grading programs via code text.\n• We will release a dataset of over 700k student submissions to support further research." }, { "heading": "2 THE PLAY TO GRADE CHALLENGE", "text": "We formulate the challenge with constraints that are often found in the real world. Given an interactive coding assignment, the teacher often has a few reference implementations of the assignment. Teachers use them to show students what a correct solution should look like. We also assume that the teacher can prepare a few incorrect implementations that represent their “best guesses” of what a wrong program should look like.\nTo formalize this setting, we consider a set of programs, each of which fully specifies an environment and its dynamics: E = {e_n = (S_n, A, R_n, P_n) : n = 1, 2, 3, ...}. A subset of these environments are reference environments that are accessible during training, which we refer to as E⋆; we also have a set of environments that are specified by student submitted programs, E′. We can further specify a training set D = {(τ^i, y^i); y ∈ {0, 1}} where τ^i ∼ π(e^{(i)}) and e^{(i)} ∼ E⋆, and a test set D_test where e^{(i)} ∼ E′. The overall objective of this challenge is:\n\min L(\theta) = \min_{\theta} \min_{\pi} \; \mathbb{E}_{e \sim E}\big[\mathbb{E}_{\tau' \sim \pi(e)}[L(p_{\theta}(\phi(\tau', \pi)), y)]\big] \qquad (1)\nWe want a policy that can generate trajectories τ that help a classifier easily distinguish between an input environment that is correctly implemented and one that is not. We also allow a feature mapping function φ that takes the trajectory and estimates from the agent as input and outputs features for the classifier. We can imagine a naive classifier that labels any environment that is playable (defined by being able to obtain rewards) by our agent as correct. A trivial failure case for this classifier arises if the agent is badly trained and fails to play successfully in a new environment (returning zero reward): we would not know whether the zero reward indicates the wrongness of the program or the failure of our agent.\nGeneralization challenge In order to avoid the trivial failure case described above – where the game states observed are a result of our agent’s failure to play the game, not a result of the correctness or wrongness of the program – it is crucial that the agent operates successfully under different correct environments. For the set of correct environments, E_+ = {E⋆_+, E′_+}, the goal is for our agent to obtain a high expected reward.\n\pi^{\star} = \arg\max_{\pi} \; \mathbb{E}_{e \sim E_{+}}\big[\mathbb{E}_{\tau \sim \pi(e)}[R(\tau)]\big] \qquad (2)\nAdditionally, we choose the state space to be the pixel-based screenshot of the game.
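The objective in Eq. (1) can be estimated empirically by sampling labeled environments, rolling out a fixed policy, featurizing the resulting trajectories with φ, and scoring the classifier probability against the label. The sketch below is our own illustration under assumptions: the helper names (rollout, featurize, classifier), the placeholder features, and the cross-entropy form of the loss L are not taken from the paper.

import numpy as np

def rollout(env, policy, max_steps=1000):
    # Run the policy in one (student-specified) environment; record (s, a, r).
    traj, state = [], env.reset()
    for _ in range(max_steps):
        action = policy.act(state)
        state_next, reward, done, _ = env.step(action)
        traj.append((state, action, reward))
        state = state_next
        if done:
            break
    return traj

def featurize(traj, policy):
    # phi(tau, pi): total reward and episode length as placeholder features;
    # policy-derived statistics (e.g. value estimates) could be appended here.
    rewards = [r for (_, _, r) in traj]
    return np.array([sum(rewards), len(rewards)], dtype=np.float32)

def empirical_objective(labeled_envs, policy, classifier, n_rollouts=4):
    # Monte-Carlo estimate of E_e[ E_tau[ L(p_theta(phi(tau, pi)), y) ] ].
    losses = []
    for env, y in labeled_envs:  # y = 1 if the environment is correct
        for _ in range(n_rollouts):
            p = classifier(featurize(rollout(env, policy), policy))
            losses.append(-(y * np.log(p + 1e-8) + (1 - y) * np.log(1 - p + 1e-8)))
    return float(np.mean(losses))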
This assumption imposes the least amount of modification on thousands of games that teaching platforms have created for students over the years.\nThis decision poses a great challenge to our agent. When students create a game, they might choose to express their creativity through a myriad of ways, including but not limited to using exciting background pictures, changing shape or color or moving speed of the game objects, etc. Some of these creative expressions only affect game aesthetics, but other will affect how the game is played (i.e., changing the speed of an object). The agent needs to succeed in these creative settings so that the classifier will not treat creativity as incorrect." }, { "heading": "2.1 BOUNCE GAME SIMULATOR", "text": "We pick the coding game Bounce to be the main game for this challenge. Bounce is a block-based educational game created to help students understand conditionals1. We show actual game scenes in Figure 1, and the coding area in Figure 2a.\n1https://studio.code.org/s/course3/stage/15/puzzle/10\nThe choice gives us three advantages. First, the popularity of this game on Code.org gives us an abundance of real student submissions over the years, allowing us to compare algorithms with real data. Second, a block-based program can be easily represented in a structured format, eliminating the need to write a domain-specific parser for student’s program. Last, in order to measure real progress of this challenge, we need gold labels for each submission. Block-based programming environment allows us to specify a list of legal and illegal commands under each condition which will provide perfect gold labels.\nThe Bounce exercise does not have a bounded solution space, similar to other exercises developed at Code.org. This means that the student can produce arbitrarily long programs, such as repeating the same command multiple times (Figure 3(b)) or changing themes whenever a condition is triggered (Figure 3(a)). These complications can result in very different game dynamics.\nWe created a simulator faithfully executes command under each condition and will return positive reward when “Score point” block is activated, and negative reward when “Socre opponent point” block is activated. In deployment, such simulator needs not be created because coding platforms have already created simulators to run and render student programs." }, { "heading": "2.2 CODE.ORG BOUNCE DATASET", "text": "Code.org is an online computer science education platform that teaches beginner programming. They designed a drag-and-drop interface to teach K-12 students basic programming concepts. Our dataset is compiled from 453,211 students. Each time a student runs their code, the submission is saved. In total, there are 711,274 submissions, where 323,516 unique programs were submitted.\nEvaluation metric In an unbounded solution space, the distribution of student submissions incur a heavy tail, observed by Wu et al. (2019). We show that the distribution of submission in dataset conforms to a Zipf distribution. This suggests that we can partition this dataset into two sections, as seen in Figure 2b. Head + Body: the 8359 most frequently submitted programs that covers 50.5% of the total submissions (359,266). This set contains 4,084 correct programs (48.9%) and 4,275 incorrect programs (51.1%). Tail: This set represents any programs that are submitted less than 5 times. There are 315,157 unique programs in this set and 290,953 of them (92.3%) were only submitted once. 
We sample 250 correct and 250 incorrect programs uniformly from this set for evaluation.\nReference programs Before looking at the student submitted programs, we attempted to solve the assignment ourselves. Through our attempt, we form an understanding of where the student might make a mistake and what different variations of correct programs could look like. Our process can easily be replicated by teachers. We come up with 8 correct reference programs and 10 incorrect reference programs. This can be regarded as our training data.\nGold annotations We generate the ground-truth gold annotations by defining legal or illegal commands under each condition. For example, having more than one “launch new ball” under “when run” is incorrect. Placing “score opponent point” under “when run” is also incorrect. Abiding by this logic, we put down a list of legal and illegal commands for each condition. We note that, we intentionally chose the bounce program as it was amenable to generating gold annotations due to the API that code.org exposed to students. While our methods apply broadly, this gold annotation system will not scale to other assignments. The full annotation schema is in Appendix A.5." }, { "heading": "3 RELATED WORK", "text": "Education feedback The quality of an online education platform depends on the feedback it can provide to its students. Low quality or missing feedback can greatly reduce motivation for students to continue engaging with the exercise (O’Rourke et al., 2014). Currently, platforms like Code.org that offers block-based programming use syntactic analysis to construct hints and feedbacks (Price & Barnes, 2017). The current state-of-the-art introduces a method for providing coding feedback that works for assignments up approximately 10 lines of code (Wu et al., 2019). The method does not easily generalize to more complicated programming languages. Our method sidesteps the complexity of static code analysis and instead focus on analyzing the MDP specified by the game environment.\nGeneralization in RL We are deciding whether an input MDP belongs to a class of MDPs up to some generalization. The generalization represents the creative expressions from the students. A benchmark has been developed to measure trained agent’s ability to generalize to procedually generated unseen settings of a game (Cobbe et al., 2019b;a). Unlike procedually generated environment where the procedure specifies hyperparameters of the environment, our environment is completely determined by the student program. For example, random theme change can happen if the ball hits the wall. We test the state-of-the-art algorithms that focus on image augmentation techniques on our environment (Lee et al., 2019; Laskin et al., 2020)." }, { "heading": "4 METHOD", "text": "" }, { "heading": "4.1 POLICY LEARNING", "text": "Given an observation st, we first use a convolutional neural network (CNN), the same as the one used in Mnih et al. (2015) as feature extractor over pixel observations. To accumulate multi-step information, such as velocity, we use a Long-short-term Memory Network (LSTM). We construct a one-layer fully connected policy network and value network that takes the last hidden state from LSTM as input.\nWe use Policy Proximal Optimization (PPO), a state-of-the-art on-policy RL algorithm to learn the parameters of our model (Schulman et al., 2017). 
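The policy architecture described above can be sketched in PyTorch as follows: the Mnih et al. (2015) convolutional feature extractor over pixel frames, an LSTM to accumulate multi-step information such as velocity, and one-layer policy and value heads. The 256-dim hidden state matches Section 5.1; the 84×84 input resolution and 4-frame stacking are assumptions of this sketch rather than details given in the paper.

import torch
import torch.nn as nn

class CnnLstmActorCritic(nn.Module):
    def __init__(self, n_actions, hidden_size=256):
        super().__init__()
        self.cnn = nn.Sequential(                     # Mnih et al. (2015) layout
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
        )
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.policy_head = nn.Linear(hidden_size, n_actions)  # logits for pi(a|s)
        self.value_head = nn.Linear(hidden_size, 1)           # V(s)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 4, 84, 84) stacked game screenshots
        b, t = frames.shape[:2]
        feats = self.cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        out, hidden = self.lstm(feats, hidden)
        return self.policy_head(out), self.value_head(out), hidden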
PPO utilizes an actor-critic style training (Mnih et al., 2016) and learns a policy π(at|st) as well as a value function V π(st) = Eτ∼πθ [∑T t=0 γ trt|s0 = st ] for the policy.\nFor each episode of the agent training, we randomly sample one environment from a fixed set of correct environments in our reference programs: e ∼ E?+. The empirical size of E+ (number of unique correct programs) in our dataset is 107,240. We focus on two types of strategies. One strategy assumes no domain knowledge, which is more realistic. The other strategy assumes adequate representation of possible combinations of visual appearances.\nBaseline training : |E?+| = 1. We only train on the environment specified by the standard program displayed in Figure 2a. This serves as the baseline.\nData augmentation training : |E?+| = 1. This is the domain agnostic strategy where we only include the standard program (representing our lack of knowledge on what visual differences might be in student submitted games). We apply the state-of-the-art RL generalization training algorithm to augment our pixel based observation (Laskin et al., 2020). We adopt the top-performing augmentations (cutout, cutout-color, color-jitter, gray-scale). These augmentations aim to change colors\nor apply partial occlusion to the visual observations randomly so that the agent is more robust to visual disturbance, which translates to better generalization.\nMix-theme training : |E?+| = 8. This is the domain-aware strategy where we include 8 correct environments in our reference environment set, each represents a combination of either “hardcourt” or “retro” theme for paddle, ball, or background. The screenshots of all 8 combinations can be seen in Figure 1. This does not account for dynamics where random theme changes can occur during the game play. However, this does guarantees that the observation state s will always have been seen by the network." }, { "heading": "4.2 CLASSIFIER LEARNING", "text": "We design a classifier that can take inputs from the environment as well as the trained agent. The trajectory τ = (s0, a0, r0, s1, a1, r1, ...) includes both state observations and reward returned by the environment. We build a feature map φ(τ, π) to produce input for the classifier. We want to select features that are most representative of the dynamics and reward model of the MDP that describes the environment. Pixel-based states st has the most amount of information but also the most unstructured. Instead, we choose to focus on total reward and per-step reward trajectory.\nTotal reward A simple feature that can distinguish between different MDPs is the sum of total reward R(τ) = ∑ t rt. Intuitively, incorrect environments could result in an agent not able to get any reward, or extremely high negative or positive reward.\nAnticipated reward A particular type of error in our setting is called a “reward design” error. An example is displayed in Figure 3(c), where a point was scored when the ball hits the paddle. This type of mistake is much harder to catch with total reward. By observing the relationship between V π(st) and rt, we can build an N-th order Markov model to predict rt, given the previous N-step history of V π(st−n+1), ..., V π(st). If we train this model using the correct reference environments, then r̂ can inform us what the correct reward trajectory is expected by the agent. 
We can then compute the Hamming distance between our predicted reward trajectory r̂ and the observed reward trajectory r.\np(r_0, r_1, r_2, \ldots \mid v) = p(r_0) \prod_{t=n}^{T} p(r_t \mid V^{\pi}(s_{<t})) \approx p(r_0) \prod_{t=1}^{T} p(r_t \mid V^{\pi}(s_{t-n+1}), \ldots, V^{\pi}(s_t))\n\hat{r} = \arg\max_{\hat{r}} \; p(\hat{r} \mid v)\nd(r, \hat{r}) = \mathrm{Hamming}(r, \hat{r}) / T\nFigure 4: V^π(s_t) indicates the model’s anticipation of future reward (value estimates V(s_t) and rewards r_t over a trajectory).\nCode-as-text As a baseline, we also explore a classifier that completely ignores the trained agent. We turn the program text into a count-based 189-feature vector (7 conditions × 27 commands), where each feature represents the number of times the command is repeated under a condition." }, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 TRAINING", "text": "Policy training We train the agent under the different generalization strategies for 6M time steps in total. We use a 256-dim hidden state LSTM trained with a 128-step state history. We train each agent until it reaches the maximal reward in its respective training environment.\nClassifier training We use a fully connected network with 1 hidden layer of size 100 and a tanh activation function as the classifier. We use the Adam optimizer (Kingma & Ba, 2014) and train for 10,000 iterations till convergence. The classifier is trained with features provided by φ(τ, π). We additionally set a heuristic threshold such that if d(r, r̂) < 0.6, we classify the program as having a reward design error.\nTo train the classifier, we sample 16 trajectories by running the agent on the 8 correct and 10 incorrect reference environments. We set the window size of the Markov trajectory prediction model to 5 and train a logistic regression model over pairs of ((V^π(s_{t−4}), V^π(s_{t−3}), ..., V^π(s_t)), r_t) sampled from the correct reference environments. During evaluation, we vary the number of trajectories we sample (K), and when K > 1, we average the probabilities over the K trajectories." }, { "heading": "5.2 GRADING PERFORMANCE", "text": "We evaluate the performance of our classifier over three sets of features and report the results in Table 1. Since we can sample more trajectories from each MDP, we vary the number of samples (K) to show the performance change of different classifiers. When we treat code as text, the representation is fixed, therefore K = 1 for that setting.\nWe set the maximal number of steps to 1,000 for each trajectory. The frame rate of our game is 50, therefore this corresponds to 20 seconds of game play. We terminate and reset the environment after 3 balls have been shot in. When the agent wins or loses 3 balls, we give an additional +100 or -100 to mark the winning or losing of the game. We evaluate over the most frequent 8,359 unique programs that cover 50.1% of overall submissions. Since the tail contains many more unique programs, we choose to sample 500 programs uniformly and evaluate our classifier’s performance on them.\nUsing the trajectories sampled by the agent, even though we only have 18 labeled reference MDPs for training, we can reach very high accuracy even when we sample very few trajectories. Overall, MDPs on the tail of the distribution are much harder to classify compared to MDPs from the head and body of the distribution. This is perhaps due to the distribution shift that occurs in long-tail classification problems. Overall, when we add reward anticipation as a feature into the classifier, we outperform using total reward only."
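A minimal sketch of the anticipated-reward feature described above: a window-5 model p(r_t | V^π(s_{t−4}), ..., V^π(s_t)) is fit with logistic regression on trajectories from the correct reference environments (as in Section 5.1), and the normalized Hamming distance between the observed and predicted reward sequences is used as a classifier feature. Treating the reward values as a small discrete set of classes, as well as the helper names, are assumptions of this sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression

def make_windows(values, rewards, n=5):
    # Pair each reward r_t with the window of the last n value estimates.
    X = [values[t - n + 1:t + 1] for t in range(n - 1, len(rewards))]
    y = rewards[n - 1:]
    return np.array(X), np.array(y)

def fit_reward_model(ref_trajectories, n=5):
    # ref_trajectories: list of (value_estimates, rewards) from correct envs.
    Xs, ys = zip(*[make_windows(v, r, n) for v, r in ref_trajectories])
    model = LogisticRegression(max_iter=1000)
    model.fit(np.vstack(Xs), np.concatenate(ys))
    return model

def hamming_feature(model, values, rewards, n=5):
    # d(r, r_hat) = Hamming(r, r_hat) / T, computed on a new environment.
    X, y = make_windows(values, rewards, n)
    r_hat = model.predict(X)
    return float(np.mean(r_hat != y))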
}, { "heading": "5.3 GENERALIZATION PERFORMANCE", "text": "One of our stated goal is for the trained agent π to obtain high expected reward with all correct environments E+, even though π will only be trained with reference environments E?+. We compare different training strategies that allow the agent to generalize to unseen dynamics. In our evaluation, we sample e from the head, body, and tail end of the E+ distribution. Since we only have 51 environments labeled as correct in the head distribution, we evaluate agents on all of them. For the body and tail portion of the distribution, we sample 100 correct environments. The reward scheme is each ball in goal will get +20 reward, and each ball misses paddle gets -10 reward. The game terminates after 5 balls in or missing, making total reward range [-50, 100].\nWe show the result in Figure 5. Since every single agent has been trained on the reference environment specified by the standard program, they all perform relatively well. We can see some of the data augmentation (except Color Jitter) strategies actually help the agent achieve higher reward on the reference environment. However, when we sample correct environments from the body and tail distribution, every training strategy except “Mixed Theme” suffers significant performance drop." }, { "heading": "6 DISCUSSION", "text": "Visual Features One crucial part of the trajectory is visual features. We have experimented with state-of-the-art video classifier such as R3D (Tran et al., 2018) by sampling a couple of thousand videos from both the reference correct and incorrect environments. This approach did not give us a classifier whose accuracy is beyond random chance. We suspect that video classification might suffer from bad sample efficiency. Also, the difference that separates correct environments and incorrect environments concerns more about relationship reasoning for objects than identifying a few pixels from a single frame (i.e., “the ball going through the goal, disappeared, but never gets launched again” is an error, but this error is not the result of any single frame).\nNested objective Our overall objective (Equation 1) is a nested objective, where both the policy and the classifier work collaboratively to minimize the overall loss of the classification. However, in this paper, we took the approach of heuristically defining the optimal criteria for a policy – optimize for expected reward in all correct environments. This is because our policy has orders of magnitude more parameters (LSTM+CNN) than our classifier (one layer FCN). Considering the huge parameter space to search through and the sparsity of signal, the optimization process could be very difficult. However, there are bugs that need to be intentionally triggered, such as the classic sticky action bug in game design, where when the ball hits the paddle through a particular angle, it would appear to have been “stuck” on the paddle and can’t bounce off. This common bug, though not present in our setting, requires collaboration between policy and classifier in order to find out." }, { "heading": "7 CONCLUSION", "text": "We introduce the Play to Grade challenge, where we formulate the problem of interactive coding assignment grading as classifying Markov Decision Processes (MDPs). We propose a simple solution to achieve 94.1% accuracy over 50% of student submissions. Our approach is not specific to the coding assignment of our choice and can scale feedback for real world use." } ]
2020
null
SP:2eed06887f51560197590d617b1a37ec6d22e943
[ "This paper considers the problem of data-free post-training quantization of classfication networks. It proposes three extensions of an existing framework ZeroQ (Cai et al., 2020): (1). in order to generate distilled data for network sensitivity analysis, the \"Retro Synthesis\" method is proposed to turn a random image into a one that represents a desired class label without relying on batch norm statistics like in ZeroQ; (2). a hybrid quantization strategy is proposed to optionally provide finer-grained per-channel quantization instead of the typical per-layer quantization; (3). a non-uniform quantization grid is proposed to better represent quantized weights, instead of uniform quantization as in ZeroQ. Empirical evaluation demonstrate the effectiveness of the proposed approach." ]
Existing quantization aware training methods attempt to compensate for the quantization loss by leveraging on training data, like most of the post-training quantization methods, and are also time consuming. Both these methods are not effective for privacy constraint applications as they are tightly coupled with training data. In contrast, this paper proposes a data-independent post-training quantization scheme that eliminates the need for training data. This is achieved by generating a faux dataset, hereafter referred to as ‘Retro-Synthesis Data’, from the FP32 model layer statistics and further using it for quantization. This approach outperformed state-of-the-art methods including, but not limited to, ZeroQ and DFQ on models with and without Batch-Normalization layers for 8, 6, and 4 bit precisions on ImageNet and CIFAR-10 datasets. We also introduced two futuristic variants of post-training quantization methods namely ‘Hybrid Quantization’ and ‘Non-Uniform Quantization’. The Hybrid Quantization scheme determines the sensitivity of each layer for per-tensor & per-channel quantization, and thereby generates hybrid quantized models that are ‘10 to 20%’ efficient in inference time while achieving the same or better accuracy compared to per-channel quantization. Also, this method outperformed FP32 accuracy when applied for ResNet-18, and ResNet-50 models on the ImageNet dataset. In the proposed Non-Uniform Quantization scheme, the weights are grouped into different clusters and these clusters are assigned with a varied number of quantization steps depending on the number of weights and their ranges in the respective cluster. This method resulted in ‘1%’ accuracy improvement against state-of-the-art methods on the ImageNet dataset.
[]
[ { "authors": [ "Ron Banner", "Yury Nahshan", "Daniel Soudry" ], "title": "Post training 4-bit quantization of convolutional networks for rapid-deployment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chaim Baskin", "Brian Chmiel", "Evgenii Zheltonozhskii", "Ron Banner", "Alex M Bronstein", "Avi Mendelson" ], "title": "Cat: Compression-aware training for bandwidth reduction", "venue": null, "year": 1909 }, { "authors": [ "Yaohui Cai", "Zhewei Yao", "Zhen Dong", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Zeroq: A novel zero shot quantization framework", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yoni Choukroun", "Eli Kravchik", "Fan Yang", "Pavel Kisilev" ], "title": "Low-bit quantization of neural networks for efficient inference", "venue": "IEEE/CVF International Conference on Computer Vision Workshop (ICCVW),", "year": 2019 }, { "authors": [ "Lucian Codrescu" ], "title": "Architecture of the hexagonTM 680 dsp for mobile imaging and computer vision", "venue": "In 2015 IEEE Hot Chips 27 Symposium (HCS),", "year": 2015 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Zhen Dong", "Zhewei Yao", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Hawq: Hessian aware quantization of neural networks with mixed-precision", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry" ], "title": "Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Raghuraman Krishnamoorthi" ], "title": "Quantizing deep convolutional networks for efficient inference: A whitepaper", "venue": "arXiv preprint arXiv:1806.08342,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Rundong Li", "Yan Wang", "Feng Liang", "Hongwei Qin", "Junjie Yan", "Rui Fan" ], "title": "Fully quantized network for object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Nelson Morgan" ], "title": "Experimental determination of precision requirements for back-propagation training of artificial neural networks", "venue": "In Proc. Second Int’l. Conf. 
Microelectronics for Neural Networks,,", "year": 1991 }, { "authors": [ "Markus Nagel", "Mart van Baalen", "Tijmen Blankevoort", "Max Welling" ], "title": "Data-free quantization through weight equalization and bias correction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Haozhi Qi", "Chong You", "Xiaolong Wang", "Yi Ma", "Jitendra Malik" ], "title": "Deep isometric learning for visual recognition", "venue": "arXiv preprint arXiv:2006.16992,", "year": 2020 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Jiaxiang Wu", "Cong Leng", "Yuhang Wang", "Qinghao Hu", "Jian Cheng" ], "title": "Quantized convolutional neural networks for mobile devices", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Ritchie Zhao", "Yuwei Hu", "Jordan Dotzel", "Christopher De Sa", "Zhiru Zhang" ], "title": "Improving neural network quantization without retraining using outlier channel splitting", "venue": null, "year": 1901 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "arXiv preprint arXiv:1702.03044,", "year": 2017 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "arXiv preprint arXiv:1612.01064,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Quantization is a widely used and necessary approach to convert heavy Deep Neural Network (DNN) models in Floating Point (FP32) format to a light-weight lower precision format, compatible with edge device inference. The introduction of lower precision computing hardware like Qualcomm Hexagon DSP (Codrescu, 2015) resulted in various quantization methods (Morgan et al., 1991; Rastegari et al., 2016; Wu et al., 2016; Zhou et al., 2017; Li et al., 2019; Dong et al., 2019; Krishnamoorthi, 2018) compatible for edge devices. Quantizing a FP32 DNN to INT8 or lower precision results in model size reduction by at least 4X based on the precision opted for. Also, since the computations happen in lower precision, it implicitly results in faster inference time and lesser power consumption. The above benefits with quantization come with a caveat of accuracy loss, due to noise introduced in the model’s weights and activations.\nIn order to reduce this accuracy loss, quantization aware fine-tuning methods are introduced (Zhu et al., 2016; Zhang et al., 2018; Choukroun et al., 2019; Jacob et al., 2018; Baskin et al., 2019; Courbariaux et al., 2015), wherein the FP32 model is trained along with quantizers and quantized weights. The major disadvantages of these methods are, they are computationally intensive and time-consuming since they involve the whole training process. To address this, various post-training quantization methods (Morgan et al., 1991; Wu et al., 2016; Li et al., 2019; Banner et al., 2019) are developed that resulted in trivial to heavy accuracy loss when evaluated on different DNNs. Also, to determine the quantized model’s weight and activation ranges most of these methods require access to training data, which may not be always available in case of applications with security and privacy\nconstraints which involve card details, health records, and personal images. Contemporary research in post-training quantization (Nagel et al., 2019; Cai et al., 2020) eliminated the need for training data for quantization by estimating the quantization parameters from the Batch-Normalization (BN) layer statistics of the FP32 model but fail to produce better accuracy when BN layers are not present in the model.\nTo address the above mentioned shortcomings, this paper proposes a data-independent post-training quantization method that estimates the quantization ranges by leveraging on ‘retro-synthesis’ data generated from the original FP32 model. This method resulted in better accuracy as compared to both data-independent and data-dependent state-of-the-art quantization methods on models ResNet18, ResNet50 (He et al., 2016), MobileNetV2 (Sandler et al., 2018), AlexNet (Krizhevsky et al., 2012) and ISONet (Qi et al., 2020) on ImageNet dataset (Deng et al., 2009). It also outperformed state-of-the-art methods even for lower precision such as 6 and 4 bit on ImageNet and CIFAR-10 datasets. The ‘retro-synthesis’ data generation takes only 10 to 12 sec of time to generate the entire dataset which is a minimal overhead as compared to the benefit of data independence it provides. Additionally, this paper introduces two variants of post-training quantization methods namely ‘Hybrid Quantization’ and ‘Non-Uniform Quantization’." }, { "heading": "2 PRIOR ART", "text": "" }, { "heading": "2.1 QUANTIZATION AWARE TRAINING BASED METHODS", "text": "An efficient integer only arithmetic inference method for commonly available integer only hardware is proposed in Jacob et al. 
(2018), wherein a training procedure is employed which preserves the accuracy of the model even after quantization. The work in Zhang et al. (2018) trained a quantized bit compatible DNN and associated quantizers for both weights and activations instead of relying on handcrafted quantization schemes for better accuracy. A ‘Trained Ternary Quantization’ approach is proposed in Zhu et al. (2016) wherein the model is trained to be capable of reducing the weights to 2-bit precision which achieved model size reduction by 16x without much accuracy loss. Inspired by other methods Baskin et al. (2019) proposes a ‘Compression Aware Training’ scheme that trains a model to learn compression of feature maps in a better possible way during inference. Similarly, in binary connect method (Courbariaux et al., 2015) the network is trained with binary weights during forward and backward passes that act as a regularizer. Since these methods majorly adopt training the networks with quantized weights and quantizers, the downside of these methods is not only that they are time-consuming but also they demand training data which is not always accessible." }, { "heading": "2.2 POST TRAINING QUANTIZATION BASED METHODS", "text": "Several post-training quantization methods are proposed to replace time-consuming quantization aware training based methods. The method in Choukroun et al. (2019) avoids full network training, by formalizing the linear quantization as ‘Minimum Mean Squared Error’ and achieves better accuracy without retraining the model. ‘ACIQ’ method (Banner et al., 2019) achieved accuracy close to FP32 models by estimating an analytical clipping range of activations in the DNN. However, to compensate for the accuracy loss, this method relies on a run-time per-channel quantization scheme for activations which is inefficient and not hardware friendly. In similar lines, the OCS method (Zhao et al., 2019) proposes to eliminate the outliers for better accuracy with minimal overhead. Though these methods considerably reduce the time taken for quantization, they are unfortunately tightly coupled with training data for quantization. Hence they are not suitable for applications wherein access to training data is restricted. The contemporary research on data free post-training quantization methods was successful in eliminating the need for accessing training data. By adopting a per-tensor quantization approach, the DFQ method (Nagel et al., 2019) achieved accuracy similar to the perchannel quantization approach through cross layer equalization and bias correction. It successfully eliminated the huge weight range variations across the channels in a layer by scaling the weights for cross channels. In contrast ZeroQ (Cai et al., 2020) proposed a quantization method that eliminated the need for training data, by generating distilled data with the help of the Batch-Normalization layer statistics of the FP32 model and using the same for determining the activation ranges for quantization and achieved state-of-the-art accuracy. 
However, these methods tend to observe accuracy degradation when there are no Batch-Normalization layers present in the FP32 model.\nTo address the above shortcomings the main contributions in this paper are as follows:\n• A data-independent post-training quantization method by generating the ‘Retro Synthesis’ data, for estimating the activation ranges for quantization, without depending on the BatchNormalization layer statistics of the FP32 model.\n• Introduced a ‘Hybrid Quantization’ method, a combination of Per-Tensor and Per-Channel schemes, that achieves state-of-the-art accuracy with lesser inference time as compared to fully per-channel quantization schemes.\n• Recommended a ‘Non-Uniform Quantization’ method, wherein the weights in each layer are clustered and then allocated with a varied number of bins to each cluster, that achieved ‘1%’ better accuracy against state-of-the-art methods on ImageNet dataset." }, { "heading": "3 METHODOLOGY", "text": "This section discusses the proposed data-independent post-training quantization methods namely (a) Quantization using retro-synthesis data, (b) Hybrid Quantization, and (c) Non-Uniform Quantization." }, { "heading": "3.1 QUANTIZATION USING RETRO SYNTHESIS DATA", "text": "In general, post-training quantization schemes mainly consist of two parts - (i) quantizing the weights that are static in a given trained FP32 model and (ii) determining the activation ranges for layers like ReLU, Tanh, Sigmoid that vary dynamically for different input data. In this paper, asymmetric uniform quantization is used for weights whereas the proposed ‘retro-synthesis’ data is used to determine the activation ranges. It should be noted that we purposefully chose to resort to simple asymmetric uniform quantization to quantize the weights and also have not employed any advanced techniques such as outlier elimination of weight clipping for the reduction of quantization loss. This is in the interest of demonstrating the effectiveness of ‘retro-synthesis’ data in accurately determining the quantization ranges of activation outputs. However, in the other two proposed methods (b), and (c) we propose two newly developed weight quantization methods respectively for efficient inference with improved accuracy." }, { "heading": "3.1.1 RETRO-SYNTHESIS DATA GENERATION", "text": "Aiming for a data-independent quantization method, it is challenging to estimate activation ranges without having access to the training data. An alternative is to use “random data” having Gaussian distribution with ‘zero mean’ and ‘unit variance’ which results in inaccurate estimation of activation ranges thereby resulting in poor accuracy. The accuracy degrades rapidly when quantized for lower precisions such as 6, 4, and 2 bit. Recently ZeroQ (Cai et al., 2020) proposed a quantization method using distilled data and showed significant improvement, with no results are showcasing the generation of distilled data for the models without Batch-Normalization layers and their corresponding accuracy results.\nIn contrast, inspired by ZeroQ (Cai et al., 2020) we put forward a modified version of the data generation approach by relying on the fact that, DNNs which are trained to discriminate between different image classes embeds relevant information about the images. Hence, by considering the class loss for a particular image class and traversing through the FP32 model backward, it is possible to generate image data with similar statistics of the respective class. 
Therefore, the proposed “retro-synthesis” data generation is based on the property of the trained DNN model that image data which maximizes the class score can be generated by incorporating the notion of the class features captured by the model. In this way, we generate a set of images corresponding to each class on which the model was trained. Since the data is generated from the original model itself, we name it “retro-synthesis” data. It should be observed that this method has no dependence on the presence of Batch-Normalization layers in the FP32 model, thus overcoming the downside of ZeroQ. It is also observed that, for models with Batch-Normalization layers, incorporating the proposed “class-loss” functionality into the distilled data generation algorithm of ZeroQ results in improved accuracy. The proposed “retro-synthesis” data generation method is detailed in Algorithm 1. Given a fully trained FP32 model and a class of interest, our aim is to empirically generate an image that is representative of the class in terms of the model class score. More formally, let P(C) be the softmax score of class C computed by the final layer of the model for an image I. Thus, the aim is to generate an image such that, when passed to the model, it gives the highest softmax value for class C.\nAlgorithm 1 Retro-synthesis data generation\nInput: Pre-trained FP32 model (M), Target class (C). Output: A set of retro-synthesis data corresponding to the target class (C).\n1. Init: I ← random_gaussian(batch-size, input shape)\n2. Init: Target ← rand(No. of classes) such that argmax(Target) = C\n3. Init: µ_0 = 0, σ_0 = 1\n4. Get (µ^BN_i, σ^BN_i) from the batch norm layers of M (if present), i ∈ 0, 1, . . . , n, where n is the number of batch norm layers\n5. for j = 0, 1, . . . , No. of Epochs\n(a) Forward propagate I and gather intermediate activation statistics\n(b) Output = M(I)\n(c) Loss_BN = 0\n(d) for k = 0, 1, . . . , n\ni. Get (µ_k, σ_k)\nii. Loss_BN ← Loss_BN + L((µ_k, σ_k), (µ^BN_k, σ^BN_k))\n(e) Calculate (µ′_0, σ′_0) of I\n(f) Loss_G ← L((µ_0, σ_0), (µ′_0, σ′_0))\n(g) Loss_C ← L(Target, Output)\n(h) Total loss = Loss_BN + Loss_G + Loss_C\n(i) Update I ← backward(Total loss)\nThe “retro-synthesis” data generation for a target class C starts with random data I drawn from a Gaussian distribution and performs a forward pass on I to obtain intermediate activations and output labels. Then we calculate the aggregated loss, consisting of the loss between the stored batch norm statistics and the intermediate activation statistics (L_BN), the Gaussian loss (L_G), and the class loss (L_C) between the output of the forward pass and our target output. The L2 loss formulation as in Equation 1 is used for the L_BN and L_G calculation, whereas mean squared error is used to compute L_C. The calculated loss is then backpropagated till convergence, thus generating a batch of retro-synthesis data for class C. The same algorithm is extendable to generate the retro-synthesis data for all classes as well.\nL((\mu_k, \sigma_k), (\mu^{BN}_k, \sigma^{BN}_k)) = \|\mu_k - \mu^{BN}_k\|_2^2 + \|\sigma_k - \sigma^{BN}_k\|_2^2 \qquad (1)\nwhere L is the computed loss, and µ_k, σ_k and µ^BN_k, σ^BN_k are the mean and standard deviation of the k-th activation layer and of the corresponding Batch-Normalization layer, respectively.\nBy comparing the sample visual representation of the retro-synthesis data against the random data depicted in Fig. 1, it is evident that the retro-synthesis data captures relevant features from the respective image classes in a DNN-understandable format.
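A condensed PyTorch sketch of Algorithm 1 is given below. Forward hooks gather the per-channel statistics of the inputs to each Batch-Normalization layer (the L_BN term is simply skipped when no such layers exist); the learning rate, step count, input shape, and the use of cross-entropy in place of the mean-squared class loss are assumptions of this sketch rather than the authors' settings.

import torch
import torch.nn.functional as F

def generate_retro_synthesis(model, target_class, input_shape=(3, 224, 224),
                             batch_size=8, steps=500, lr=0.1):
    model.eval()
    x = torch.randn(batch_size, *input_shape, requires_grad=True)
    target = torch.full((batch_size,), target_class, dtype=torch.long)
    bn_layers = [m for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d)]
    acts = []
    # Hooks record the input of each BN layer (call order assumed to match module order).
    hooks = [bn.register_forward_hook(lambda m, i, o: acts.append(i[0]))
             for bn in bn_layers]
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        acts.clear()
        opt.zero_grad()
        out = model(x)
        # L_BN: match stored BN running statistics (0 if the model has no BN layers)
        loss_bn = sum(((a.mean((0, 2, 3)) - bn.running_mean) ** 2).sum()
                      + ((a.std((0, 2, 3)) - torch.sqrt(bn.running_var)) ** 2).sum()
                      for a, bn in zip(acts, bn_layers))
        # L_G: keep the generated batch close to zero mean / unit variance
        loss_g = x.mean() ** 2 + (x.std() - 1) ** 2
        # L_C: class loss pushing the batch toward the target class
        loss_c = F.cross_entropy(out, target)
        (loss_bn + loss_g + loss_c).backward()
        opt.step()
    for h in hooks:
        h.remove()
    return x.detach()

In practice the batch would be generated once per class and cached, which is consistent with the reported 10 to 12 seconds needed to synthesize the full dataset.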
Hence using the retro-synthesis data for the estimation of activation ranges achieves better accuracy as compared to using random data. Also, it outperforms the state-of-the-art data-free quantization methods (Nagel et al., 2019; Cai et al., 2020) with a good accuracy margin when validated on models with and without BatchNormalization layers. Therefore, the same data generation technique is used in the other two proposed quantization methods (b) and (c) as well." }, { "heading": "3.2 HYBRID QUANTIZATION", "text": "In any quantization method, to map the range of floating-point values to integer values, parameters such as scale and zero point are needed. These parameters can be calculated either for per-layer of\nthe model or per-channel in each layer of the model. The former is referred to as ‘per-tensor/perlayer quantization’ while the latter is referred to as ‘per-channel quantization’. Per-channel quantization is preferred over per-tensor in many cases because it is capable of handling the scenarios where weight distribution varies widely among different channels in a particular layer. However, the major drawback of this method is, it is not supported by all hardware (Nagel et al., 2019) and also it needs to store scale and zero point parameters for every channel thus creating an additional computational and memory overhead. On the other hand, per-tensor quantization which is more hardware friendly suffers from significant accuracy loss, mainly at layers where the weight distribution varies significantly across the channels of the layer, and the same error will be further propagated down to consecutive layers of the model resulting in increased accuracy degradation. In the majority of the cases, the number of such layers present in a model is very few, for example in the case of MobileNet-V2 only very few depthwise separable layers show significant weight variations across channels which result in huge accuracy loss (Nagel et al., 2019). To compensate such accuracy loss per-channel quantization methods are preferred even though they are not hardware friendly and computationally expensive. Hence, in the proposed “Hybrid Quantization” technique we determined the sensitivity of each layer corresponding to both per-channel and per-tensor quantization schemes and observe the loss behavior at different layers of the model. Thereby we identify the layers which are largely sensitive to per-tensor (which has significant loss of accuracy) and then quantize only these layers using the per-channel scheme while quantizing the remaining less sensitive layers with the per-tensor scheme. For the layer sensitivity estimation KL-divergence (KLD) is calculated between the outputs of the original FP32 model and the FP32 model wherein the i-th layer is quantized using per-tensor and per-channel schemes. The computed layer sensitivity is then compared against a threshold value (Th) in order to determine whether a layer is suitable to be quantized using the per-tensor or per-channel scheme. This process is repeated for all the layers in the model.\nThe proposed Hybrid Quantization scheme can be utilized for a couple of benefits, one is for accuracy improvement and the other is for inference time optimization. For accuracy improvement, the threshold value has to be set to zero, Th = 0. 
By doing this, a hybrid quantization model with a unique combination of per-channel and per-tensor quantized layers is achieved such that, the accuracy is improved as compared to a fully per-channel quantized model and in some cases also FP32 model. For inference time optimization the threshold value Th is determined heuristically by observing the loss behavior of each layer that aims to generate a model with the hybrid approach, having most of the layers quantized with the per-tensor scheme and the remaining few sensitive layers quantized with the per-channel scheme. In other words, we try to create a hybrid quantized model as close as possible to the fully per-tensor quantized model so that the inference is faster with the constraint of accuracy being similar to the per-channel approach. This resulted in models where per-channel quantization is chosen for the layers which are very sensitive to per-tensor quantization. For instance, in case of ResNet-18 model, fully per-tensor quantization accuracy is 69.7% and fully per-channel accuracy is 71.48%. By performing the sensitivity analysis of each layer, we observe that only the second convolution layer is sensitive to per-tensor quantization because of the huge variation in weight distribution across channels of that layer. Hence, by applying per-channel quantization only to this layer and per-tensor quantization to all the other layers we achieved 10 − 20% reduction in inference time. The proposed method is explained in detail in Algorithm 2. For every layer in the model, we find an auxiliary model Aux-model=quantize(M, i, qs) where, the step quantize(M, i, qs) quantizes the i-th layer of the model M using qs quant scheme, where qs could be per-channel or per-tensor while keeping all other layers same as the original FP32 weight values. To find the sensitivity of a layer, we find the KLD between the Aux-model and the original FP32 model outputs. If the sensitivity difference between per-channel and per-tensor is greater than\nthe threshold value Th, we apply per-channel quantization to that layer else we apply per-tensor quantization. The empirical results with this method are detailed in section 4.2.\nAlgorithm 2 Hybrid Quantization scheme Input: Fully trained FP32 model (M) with n layers, retro-synthesis data (X) generated in Part A. Output: Hybrid Quantized Model.\n1. Init: quant scheme← {PC, PT} 2. Init: Mq ←M 3. for i = 0, 1, . . . , n\n(a) error[PC]← 0 , error[PT ]← 0 (b) for (qs in quant scheme)\ni. Aux-model← quantize(M ,i,qs). ii. Output←M(X).\niii. Aux-output← Aux-model(X) iv. e← KLD(Output, Aux-output) v. error[qs]← e\n(c) if error[PT ]− error[PC] < Th Mq ← quantize(Mq , i, PT ) else Mq ← quantize(Mq , i, PC)" }, { "heading": "3.3 NON-UNIFORM QUANTIZATION", "text": "In the uniform quantization method, the first step is to segregate the entire weights range of the given FP32 model into 2K groups of equal width, where ‘K’ is bit precision chosen for the quantization, like K = 8, 6, 4, etc. Since we have a total of 2K bins or steps available for quantization, the weights in each group are assigned to a step or bin. The obvious downside with this approach is, even though, the number of weights present in each group is different, an equal number of steps are assigned to each group. From the example weight distribution plot shown in Fig. 
2 it is evident that the number of weights and their range in ‘group-m’ is very dense and spread across the entire range, whereas they are very sparse and also concentrated within a very specific range value in ‘group-n’. In the uniform quantization approach since an equal number of steps are assigned to each group, unfortunately, all the widely distributed weights in ‘group-m’ are quantized to a single value, whereas the sparse weights present in ‘group-n’ are also quantized to a single value. Hence it is not possible to accurately dequantize the weights in ‘group-m’, which leads to accuracy loss. Although a uniform quantization scheme seems to be a simpler approach it is not optimal. A possible scenario is described in Fig. 2, there may exist many such scenarios in real-time models. Also, in cases where the weight distribution has outliers, uniform quantization tends to perform badly as it ends up in assigning too many steps even for groups with very few outlier weights. In such cases, it is reasonable to assign more steps to the groups with more number of weights and fewer steps to the groups with less number of weights. With this analogy, in the proposed Non-Uniform Quantization method, first the entire weights range is divided into three clusters using Interquartile Range (IQR) Outlier Detection Technique, and then assign a variable number of steps for each cluster of weights. Later, the quantization process for the weights present in each cluster is performed similar to the uniform quantization method, by considering the steps allocated for that respective cluster as the total number of steps.\nWith extensive experiments, it is observed that assigning the number of steps to a group by considering just the number of weights present in the group, while ignoring the range, results in accuracy degradation, since there may be more number of weights in a smaller range and vice versa. Therefore it is preferable to consider both the number of weights and the range of the group for assigning the number of steps for a particular group. The effectiveness of this proposed method is graphically demonstrated for a sample layer of the ResNet-18 model in Fig. 3 in the appendix A.1. By observing the three weight plots it is evident that the quantized weight distribution using the proposed Non-Uniform Quantization method is more identical to FP32 distribution, unlike the uniform\nquantization method and hence it achieves a better quantized model. Also, it should be noted that the proposed Non-Uniform quantization method is a fully per-tensor based method." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "4.1 RESULTS FOR QUANTIZATION METHOD USING RETRO-SYNTHESIS DATA", "text": "Table 1 shows the benefits of quantization using the ‘retro-synthesis’ data 3.1 against state-of-theart methods. In the case of models with Batch-Normalization layers, the proposed method achieves 1.5% better accuracy against DFQ and a marginal improvement against ZeroQ. Also, our method outperformed FP32 accuracy in the case of ResNet-18 and ResNet-50. In the case of models without Batch-Normalization layers such as Alexnet and ISONet (Qi et al., 2020) the proposed method outperformed the ZeroQ method by 2− 3% on the ImageNet dataset.\nTable 2 demonstrates the effectiveness of the proposed retro-synthesis data for low-precision (weights quantized to 6-bit and the activations quantized to 8-bit (W6A8)). 
From the results, it is evident that the proposed method outperformed the ZeroQ method.\nThe efficiency of the proposed quantization method for lower bit precision on the CIFAR-10 dataset for ResNet-20 and ResNet-56 models is depicted in Table 3 below. From the results, it is evident\nthat the proposed method outperforms the state-of-the-art methods even for lower precision 8, 6, and 4 bit weights with 8 bit activations." }, { "heading": "4.2 RESULTS FOR HYBRID QUANTIZATION METHOD", "text": "Table 4 demonstrates the benefits of the proposed Hybrid Quantization method in two folds, one is for accuracy improvement and the other is for the reduction in inference time. From the results, it is observed that the accuracy is improved for all the models as compared to the per-channel scheme. It should also be observed that the proposed method outperformed FP32 accuracy for ResNet-18 and ResNet-50. Also by applying the per-channel (PC) quantization scheme to very few sensitive layers as shown in “No. of PC layers” column of Table 4, and applying the per-tensor (PT) scheme to remaining layers, the proposed method optimizes inference time by 10− 20% while maintaining a very minimal accuracy degradation against the fully per-channel scheme." }, { "heading": "4.3 RESULTS FOR NON-UNIFORM QUANTIZATION", "text": "Since the proposed Non-Uniform Quantization method is a fully per-tensor based method, to quantitatively demonstrate its effect, we choose to compare the models quantized using this method against the fully per-tensor based uniform quantization method. The results with this approach depicted in Table 5, accuracy improvement of 1% is evident for the ResNet-18 model." }, { "heading": "5 CONCLUSION AND FUTURE SCOPE", "text": "This paper proposes a data independent post training quantization scheme using “retro sysnthesis” data, that does not depend on the Batch-Normalization layer statistics and outperforms the state-ofthe-art methods in accuracy. Two futuristic post training quantization methods are also discussed namely “Hybrid Quantization” and “Non-Uniform Quantization” which resulted in better accuracy and inference time as compared to the state-of-the-art methods. These two methods unleashes a lot of scope for future research in similar lines. Also in future more experiments can be done on lower precision quantization such as 6-bit, 4-bit and 2-bit precision using these proposed approaches." }, { "heading": "A APPENDIX", "text": "A.1 NON-UNIFORM QUANTIZATION METHOD\nA.1.1 CLUSTERING MECHANISM\nThe IQR of a range of values is defined as the difference between the third and first quartilesQ3, and Q1 respectively. Each quartile is the median of the data calculated as follows. Given, an even 2n or odd 2n+1 number of values, the first quartileQ1 is the median of the n smallest values and the third quartile Q3 is the median of the n largest values. The second quartile Q2 is same as the ordinary median. Outliers here are defined as the observations that fall below the range Q1 − 1.5IQR or above the range Q3 + 1.5IQR. This approach results in grouping the values into three clusters C1, C2, and C3 with ranges R1 = [min,Q1 − 1.5IQR), R2 = [Q1 − 1.5IQR,Q3 + 1.5IQR], and R3 = (Q3 + 1.5IQR,max] respectively.\nWith extensive experiments it is observed that, assigning the number of steps to a group by considering just the number of weights present in the group, while ignoring the range, results in accuracy degradation, since there may be more number of weights in a smaller range and vice versa. 
Therefore it is preferable to consider both number of weights and the range of the group for assigning the number of steps for a particular group. With this goal we arrived at the number of steps allocation methodology as explained below in detail.\nA.1.2 NUMBER OF STEPS ALLOCATION METHOD FOR EACH GROUP\nSuppose Wi, and Ri represent the number of weights and the range of i-th cluster respectively, then the number of steps allocated Si for the i-th cluster is directly proportional to Ri and Wi as shown in equation 2 below.\nSi = C × (Ri ×Wi) (2)\nThus, the number of steps Si allocated for i-th cluster can be calculated from equation 2 by deriving the proportionality constant C based on the constraint Σ(Si) = 2k, where k is the quantization bit precision chosen. So, using this bin allocation method we assign the number of bins to each cluster. Once the number of steps are allocated for each cluster the quantization is performed on each cluster to obtain the quantized weights.\nA.2 SENSITIVITY ANALYSIS FOR PER-TENSOR AND PER-CHANNEL QUANTIZATION SCHEMES\nFrom the sensitivity plot in Fig. 4 it is very clear that only few layers in MobileNetV2 model are very sensitive for per-tensor scheme and other layers are equally sensitive to either of the schemes. Hence we can achieve betetr accuracy by just quantizing those few sensitive layers using per-channel scheme and remaining layers using per-tensor scheme.\nA.3 SENSITIVITY ANALYSIS OF GROUND TRUTH DATA, RANDOM DATA AND THE PROPOSED RETRO-SYNTHESIS DATA\nFrom the sensitivity plot inFig. 5, it is evident that there is a clear match between the layer sensitivity index plots of the proposed retro-synthesis data (red-plot) and the ground truth data (green plot) whereas huge deviation is observed in case of random data (blue plot). Hence it can be concluded that the proposed retro-synthesis data generation scheme can generate data with similar characteristics as that of ground truth data and is more effective as compared to random data.\nSensitivity plot comparing Ground truth data Vs random data Vs retrosynthesis data\nFigure 5: Sensitivity plot describing the respective layer’s sensitivity for original ground truth dataset, random data and the proposed retro-synthesis data for ResNet-18 model quantized using per-channel scheme. The horizontal axis represent the layer number and the vertical axis represents the sensitivity value." } ]
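To make the clustering and step-allocation procedure of Appendix A.1 concrete: solving the constraint Σi Si = 2^k for the proportionality constant in equation 2 gives C = 2^k / Σj(Rj × Wj), i.e. Si = 2^k · RiWi / Σj RjWj up to rounding. The following minimal NumPy sketch implements this; the example weight tensor and the choice to hand rounding leftovers to the largest cluster are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def iqr_clusters(w):
    """Split a flat FP32 weight tensor into 3 clusters via the IQR outlier rule."""
    q1, q3 = np.percentile(w, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [w[w < lo], w[(w >= lo) & (w <= hi)], w[w > hi]]   # C1, C2, C3

def allocate_steps(clusters, k=8):
    """Assign 2^k total steps, proportional to (range x count) of each cluster."""
    scores = np.array([(c.max() - c.min()) * c.size if c.size else 0.0
                       for c in clusters])
    steps = np.floor((2 ** k) * scores / scores.sum()).astype(int)
    steps[np.argmax(steps)] += (2 ** k) - steps.sum()   # rounding leftovers to the largest cluster
    return steps

# Illustrative layer: dense near-zero weights plus a few large outliers.
w = np.concatenate([np.random.randn(10000) * 0.05, [1.5, -2.0, 3.0]])
print(allocate_steps(iqr_clusters(w), k=8))   # most steps go to the dense middle cluster
```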
2020
null
SP:259b64e62b640ccba4bc82c50e59db7662677e6b
[ "The authors propose a bootstrap framework for understanding generalization in deep learning. In particular, instead of the usual decomposition of test error as training error plus the generalization gap, the bootstrap framework decomposes the empirical test error as online error plus the bootstrap error (the gap between the population and empirical error). The authors then demonstrate empirically on variants of CIFAR10 and a subset of ImageNet that the bootstrap error is small on several common architectures. Hence, the empirical test error is controlled by the online error (i.e. a rapid decrease in the error in the online setting leads to low test error). The authors then provide empirical evidence to demonstrate that same techniques perform well in both over and under-parameterized regimes. " ]
We propose a new framework for reasoning about generalization in deep learning. The core idea is to couple the Real World, where optimizers take stochastic gradient steps on the empirical loss, to an Ideal World, where optimizers take steps on the population loss. This leads to an alternate decomposition of test error into: (1) the Ideal World test error plus (2) the gap between the two worlds. If the gap (2) is universally small, this reduces the problem of generalization in offline learning to the problem of optimization in online learning. We then give empirical evidence that this gap between worlds can be small in realistic deep learning settings, in particular supervised image classification. For example, CNNs generalize better than MLPs on image distributions in the Real World, but this is “because” they optimize faster on the population loss in the Ideal World. This suggests our framework is a useful tool for understanding generalization in deep learning, and lays a foundation for future research in the area.
[ { "affiliations": [], "name": "OFFLINE GENERALIZERS" }, { "affiliations": [], "name": "Preetum Nakkiran" }, { "affiliations": [], "name": "Behnam Neyshabur" }, { "affiliations": [], "name": "Hanie Sedghi" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Martin Anthony", "Peter L Bartlett" ], "title": "Neural network learning: Theoretical foundations", "venue": "cambridge university press,", "year": 2009 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "arXiv preprint arXiv:1802.05296,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Simon Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Peter L Bartlett" ], "title": "For valid generalization the size of the weights is more important than the size of the network", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Peter L Bartlett", "Vitaly Maiorov", "Ron Meir" ], "title": "Almost linear vc dimension bounds for piecewise polynomial networks", "venue": "In Advances in neural information processing systems,", "year": 1999 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mikhail Belkin", "Daniel J Hsu", "Partha Mitra" ], "title": "Overfitting or perfect fitting? risk bounds for classification and regression rules that interpolate", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Siyuan Ma", "Soumik Mandal" ], "title": "To understand deep learning we need to understand kernel learning", "venue": "arXiv preprint arXiv:1802.01396,", "year": 2018 }, { "authors": [ "Lukas Biewald" ], "title": "Experiment tracking with weights and biases, 2020", "venue": "URL https://www. wandb.com/. 
Software available from wandb.com", "year": 2020 }, { "authors": [ "Anselm Blumer", "Andrzej Ehrenfeucht", "David Haussler", "Manfred K Warmuth" ], "title": "Learnability and the vapnik-chervonenkis dimension", "venue": "Journal of the ACM (JACM),", "year": 1989 }, { "authors": [ "Jörg Bornschein", "Francesco Visin", "Simon Osindero" ], "title": "Small data, big decisions: Model selection in the small-data regime", "venue": null, "year": 2020 }, { "authors": [ "Olivier Bousquet", "André Elisseeff" ], "title": "Algorithmic stability and generalization performance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2001 }, { "authors": [ "Wieland Brendel", "Matthias Bethge" ], "title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet", "venue": "arXiv preprint arXiv:1904.00760,", "year": 2019 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Sébastien Bubeck" ], "title": "Introduction to online optimization", "venue": "Lecture Notes,", "year": 2011 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun", "Prafulla Dhariwal", "David Luan", "Ilya Sutskever" ], "title": "Generative pretraining from pixels", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Lenaic Chizat", "Francis Bach" ], "title": "Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss", "venue": "arXiv preprint arXiv:2002.04486,", "year": 2020 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alexey Dosovitskiy", "Lucas Beyer", "Alexander Kolesnikov", "Dirk Weissenborn", "Xiaohua Zhai", "Thomas Unterthiner", "Mostafa Dehghani", "Matthias Minderer", "Georg Heigold", "Sylvain Gelly" ], "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "venue": "arXiv preprint arXiv:2010.11929,", "year": 2020 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "arXiv preprint arXiv:1703.11008,", "year": 2017 }, { "authors": [ "Gintare Karolina Dziugaite", "Alexandre Drouin", "Brady Neal", "Nitarshan Rajkumar", "Ethan Caballero", "Linbo Wang", "Ioannis Mitliagkas", "Daniel M Roy" ], "title": "In search of robust measures of generalization", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "B. Efron" ], "title": "Bootstrap methods: Another look at the jackknife", "venue": "Ann. 
Statist., 7(1):1–26,", "year": 1979 }, { "authors": [ "Bradley Efron", "Robert Tibshirani" ], "title": "Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy", "venue": "Statistical science,", "year": 1986 }, { "authors": [ "Bradley Efron", "Robert J Tibshirani" ], "title": "An introduction to the bootstrap", "venue": "CRC press,", "year": 1994 }, { "authors": [ "Chuang Gan", "Jeremy Schwartz", "Seth Alter", "Martin Schrimpf", "James Traer", "Julian De Freitas", "Jonas Kubilius", "Abhishek Bhandwaldar", "Nick Haber", "Megumi Sano" ], "title": "Threedworld: A platform for interactive multi-modal physical simulation", "venue": "arXiv preprint arXiv:2007.04954,", "year": 2020 }, { "authors": [ "Xiand Gao", "Xiaobo Li", "Shuzhong Zhang" ], "title": "Online learning with non-convex losses and nonstationary regret", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Noah Golowich", "Alexander Rakhlin", "Ohad Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": "In Conference On Learning Theory,", "year": 2018 }, { "authors": [ "Raphael Gontijo-Lopes", "Sylvia J Smullin", "Ekin D Cubuk", "Ethan Dyer" ], "title": "Affinity and diversity: Quantifying mechanisms of data augmentation", "venue": "arXiv preprint arXiv:2002.08973,", "year": 2020 }, { "authors": [ "Suriya Gunasekar", "Jason Lee", "Daniel Soudry", "Nathan Srebro" ], "title": "Characterizing implicit bias in terms of optimization geometry", "venue": "arXiv preprint arXiv:1802.08246,", "year": 2018 }, { "authors": [ "Suriya Gunasekar", "Jason D Lee", "Daniel Soudry", "Nati Srebro" ], "title": "Implicit bias of gradient descent on linear convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension bounds for piecewise linear neural networks", "venue": "In Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The elements of statistical learning: data mining, inference, and prediction", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Elad Hazan" ], "title": "Introduction to online convex optimization", "venue": "arXiv preprint arXiv:1909.05207,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "arXiv preprint arxiv:2006.11239,", "year": 2020 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 
}, { "authors": [ "Like Hui", "Mikhail Belkin" ], "title": "Evaluation of neural architectures trained with square loss vs crossentropy in classification", "venue": null, "year": 2006 }, { "authors": [ "J.D. Hunter" ], "title": "Matplotlib: A 2d graphics environment", "venue": "Computing in Science & Engineering,", "year": 2007 }, { "authors": [ "Prateek Jain", "Purushottam Kar" ], "title": "Non-convex optimization for machine learning", "venue": "arXiv preprint arXiv:1712.07897,", "year": 2017 }, { "authors": [ "Gareth James", "Daniela Witten", "Trevor Hastie", "Robert Tibshirani" ], "title": "An introduction to statistical learning, volume", "venue": null, "year": 2013 }, { "authors": [ "Ziwei Ji", "Matus Telgarsky" ], "title": "The implicit bias of gradient descent on nonseparable data", "venue": "In Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them", "venue": null, "year": 1912 }, { "authors": [ "Chi Jin", "Rong Ge", "Praneeth Netrapalli", "Sham M Kakade", "Michael I Jordan" ], "title": "How to escape saddle points efficiently", "venue": "arXiv preprint arXiv:1703.00887,", "year": 2017 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Large scale learning of general visual representations for transfer", "venue": null, "year": 1912 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Jason D Lee", "Max Simchowitz", "Michael I Jordan", "Benjamin Recht" ], "title": "Gradient descent only converges to minimizers", "venue": "In Conference on learning theory,", "year": 2016 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Philip M Long", "Hanie Sedghi" ], "title": "Generalization bounds for deep convolutional neural networks", "venue": "arXiv preprint arXiv:1905.12600,", "year": 2019 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Odalric-Ambrym Maillard", "Rémi Munos" ], "title": "Online learning in adversarial lipschitz environments", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2010 }, { "authors": [ "Wes McKinney" ], "title": "Data structures for statistical computing in python", "venue": "In Proceedings of the 9th Python in Science Conference,", "year": 2010 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Yamini Bansal" ], "title": "Distributional generalization: A new kind of generalization", "venue": "arXiv preprint arXiv:2009.08092,", "year": 2020 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Behnam Neyshabur" ], "title": "Towards learning convolutions from scratch", "venue": "arXiv preprint arXiv:2007.13657,", "year": 2020 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "In search of the real inductive bias: On the role of implicit regularization in deep learning", "venue": "In ICLR (Workshop),", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Conference on Learning Theory, pages 1376–1401,", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A pac-bayesian approach to spectrally-normalized margin bounds for neural networks", "venue": "arXiv preprint arXiv:1707.09564,", "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "Towards understanding the role of over-parametrization in generalization of neural networks", "venue": null, "year": 1805 }, { "authors": [ "David Page" ], "title": "How to train your resnet, 2018", "venue": null, "year": 2018 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": 
[ "Jonathan S Rosenfeld", "Amir Rosenfeld", "Yonatan Belinkov", "Nir Shavit" ], "title": "A constructive prediction of the generalization error across scales", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding machine learning: From theory to algorithms", "venue": "Cambridge university press,", "year": 2014 }, { "authors": [ "Shai Shalev-Shwartz" ], "title": "Online learning and online convex optimization", "venue": "Foundations and trends in Machine Learning,", "year": 2011 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Aad W Van der Vaart" ], "title": "Asymptotic statistics, volume 3", "venue": "Cambridge university press,", "year": 2000 }, { "authors": [ "V.N. Vapnik", "A.Y. Chervonenkis" ], "title": "On the uniform convergence of relative frequencies of events to their probabilities", "venue": "Theory of Probability and its Applications,", "year": 1971 }, { "authors": [ "Colin Wei", "Tengyu Ma" ], "title": "Improved sample complexities for deep neural networks and robust classification via an all-layer margin", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Lin Yang", "Lei Deng", "Mohammad H Hajiesmaili", "Cheng Tan", "Wing Shing Wong" ], "title": "An optimal algorithm for online non-convex learning", "venue": "Proceedings of the ACM on Measurement and Analysis of Computing Systems,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Dosovitskiy" ], "title": "2020), with a patch size of 4× 4 (adapted for the smaller CIFAR-10 image", "venue": null, "year": 2020 }, { "authors": [ "monyan", 
"Zisserman" ], "title": "Preactivation ResNets (He et al., 2016b), DenseNet", "venue": null, "year": 2015 }, { "authors": [ "Dong", "Yang" ], "title": "2020), while also varying width and depth for added diversity", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020) in that we simply attach the classification head to the [average-pooled", "venue": null, "year": 2020 }, { "authors": [ "Gan" ], "title": "CIFAR-5m, CIFAR-10), and testing on both datasets. Table 1 shows a ResNet18 trained with standard data-augmentation, and Table 2 shows a WideResNet28-10 (Zagoruyko and Komodakis, 2016) trained with cutout augmentation (DeVries and Taylor, 2017). Mean of 5 trials for all results. In particular, the WRN-28-10 trained on CIFAR-5m achieves 91.2% test accuracy on the original CIFAR-10 test", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The goal of a generalization theory in supervised learning is to understand when and why trained models have small test error. The classical framework of generalization decomposes the test error of a model ft as:\nTestError(ft) = TrainError(ft) + [TestError(ft)− TrainError(ft)]︸ ︷︷ ︸ Generalization gap\n(1)\nand studies each part separately (e.g. Vapnik and Chervonenkis (1971); Blumer et al. (1989); ShalevShwartz and Ben-David (2014)). Many works have applied this framework to study generalization of deep networks (e.g. Bartlett (1997); Bartlett et al. (1999); Bartlett and Mendelson (2002); Anthony and Bartlett (2009); Neyshabur et al. (2015b); Dziugaite and Roy (2017); Bartlett et al. (2017); Neyshabur et al. (2017); Harvey et al. (2017); Golowich et al. (2018); Arora et al. (2018; 2019);\nAllen-Zhu et al. (2019); Long and Sedghi (2019); Wei and Ma (2019)). However, there are at least two obstacles to understanding generalization of modern neural networks via the classical approach.\n1. Modern methods can interpolate, reaching TrainError ≈ 0, while still performing well. In these settings, the decomposition of Equation (1) does not actually reduce test error into two different subproblems: it amounts to writing TestError = 0 + TestError. That is, understanding the generalization gap here is exactly equivalent to understanding the test error itself. 2. Most if not all techniques for understanding the generalization gap (e.g. uniform convergence, VC-dimension, regularization, stability, margins) remain vacuous (Zhang et al., 2017; Belkin et al., 2018a;b; Nagarajan and Kolter, 2019) and not predictive (Nagarajan and Kolter, 2019; Jiang et al., 2019; Dziugaite et al., 2020) for modern networks.\nIn this work, we propose an alternate approach to understanding generalization to help overcome these obstacles. The key idea is to consider an alternate decomposition:\nTestError(ft) = TestError(f iidt )︸ ︷︷ ︸ A: Online Learning + [TestError(ft)− TestError(f iidt )]︸ ︷︷ ︸ B: Bootstrap error\n(2)\nwhere ft is the neural-network after t optimization steps (the “Real World”), and f iidt is a network trained identically to ft, but using fresh samples from the distribution in each mini-batch step (the “Ideal World”). That is, f iidt is the result of optimizing on the population loss for t steps, while ft is the result of optimizing on the empirical loss as usual (we define this more formally later).\nThis leads to a different decoupling of concerns, and proposes an alternate research agenda to understand generalization. To understand generalization in the bootstrap framework, it is sufficient to understand:\n(A) Online Learning: How quickly models optimize on the population loss, in the infinite-data regime (the Ideal World). (B) Finite-Sample Deviations: How closely models behave in the finite-data vs. infinite-data regime (the bootstrap error).\nAlthough neither of these points are theoretically understood for deep networks, they are closely related to rich areas in optimization and statistics, whose tools have not been brought fully to bear on the problem of generalization. The first part (A) is purely a question in online stochastic optimization: We have access to a stochastic gradient oracle for a population loss function, and we are interested in how quickly an online optimization algorithm (e.g. SGD, Adam) reaches small population loss. 
This problem is well-studied in the online learning literature for convex functions (Bubeck, 2011; Hazan, 2019; Shalev-Shwartz et al., 2011), and is an active area of research in nonconvex settings (Jin et al., 2017; Lee et al., 2016; Jain and Kar, 2017; Gao et al., 2018; Yang et al., 2018; Maillard and Munos, 2010). In the context of neural networks, optimization is usually studied on the empirical loss landscape (Arora et al., 2019; Allen-Zhu et al., 2019), but we propose studying optimization on the population loss landscape directly. This highlights a key difference in our approach: we never compare test and train quantities— we only consider test quantities.\nThe second part (B) involves approximating fresh samples with “reused” samples, and reasoning about behavior of certain functions under this approximation. This is closely related to the nonparametric bootstrap in statistics (Efron, 1979; Efron and Tibshirani, 1986), where sampling from the population distribution is approximated by sampling with replacement from an empirical distribution. Bootstrapped estimators are widely used in applied statistics, and their theoretical properties are known in certain cases (e.g. Hastie et al. (2009); James et al. (2013); Efron and Hastie (2016); Van der Vaart (2000)). Although current bootstrap theory does not apply to neural networks, it is conceivable that these tools could eventually be extended to our setting.\nExperimental Validation. Beyond the theoretical motivation, our main experimental claim is that the bootstrap decomposition is actually useful: in realistic settings, the bootstrap error is often small, and the performance of real classifiers is largely captured by their performance in the Ideal World. Figure 1 shows one example of this, as a preview of our more extensive experiments in Section 4. We plot the test error of a ResNet (He et al., 2016a), an MLP, and a Vision Transformer (Dosovitskiy et al., 2020) on a CIFAR-10-like task, over increasing minibatch SGD iterations. The Real World is trained on 50K samples for 100 epochs. The Ideal World is trained on 5 million samples with a single pass. Notice that the bootstrap error is small for all architectures, although the generalization\ngap can be large. In particular, the convnet generalizes better than the MLP on finite data, but this is “because” it optimizes faster on the population loss with infinite data. See Appendix D.1 for details." }, { "heading": "Our Contributions.", "text": "• Framework: We propose the Deep Bootstrap framework for understanding generalization in deep learning, which connects offline generalization to online optimization. (Section 2). • Validation: We give evidence that the bootstrap error is small in realistic settings for supervised image classification, by conducting extensive experiments on large-scale tasks (including variants of CIFAR-10 and ImageNet) for many architectures (Section 4). Thus,\nThe generalization of models is largely determined by their optimization speed in online and offline learning.\n• Implications: We highlight how our framework can unify and yield insight into important phenomena in deep learning, including implicit bias, model selection, data-augmentation and pretraining (Section 5). In particular:\nGood models and training procedures are those which (1) optimize quickly in the Ideal World and\n(2) do not optimize too quickly in the Real World.\nAdditional Related Work. The bootstrap error is also related to algorithmic stability (e.g. 
Bousquet and Elisseeff (2001); Hardt et al. (2016)), since both quantities involve replacing samples with fresh samples. However, stability-based generalization bounds cannot tightly bound the bootstrap error, since there are many settings where the generalization gap is high, but bootstrap error is low." }, { "heading": "2 THE DEEP BOOSTRAP", "text": "Here we more formally describe the Deep Bootstrap framework and our main claims. LetF denote a learning algorithm, including architecture and optimizer. We consider optimizers which can be used in online learning, such as stochastic gradient descent and variants. Let TrainF (D, n, t) denote training in the “Real World”: using the architecture and optimizer specified by F , on a train set of n samples from distribution D, for t optimizer steps. Let TrainF (D,∞, t) denote this same optimizer operating on the population loss (the “Ideal World”). Note that these two procedures use identical architectures, learning-rate schedules, mini-batch size, etc – the only difference is, the Ideal World optimizer sees a fresh minibatch of samples in each optimization step, while the Real World reuses samples in minibatches. Let the Real and Ideal World trained models be:\nReal World: ft ← TrainF (D, n, t) Ideal World: f iidt ← TrainF (D,∞, t)\nWe now claim that for all t until the Real World converges, these two models ft, f iidt have similar test performance. In our main claims, we differ slightly from the presentation in the Introduction in that we consider the “soft-error” of classifiers instead of their hard-errors. The soft-accuracy of classifiers is defined as the softmax probability on the correct label, and (soft-error) := 1 − (soft-accuracy). Equivalently, this is the expected error of temperature-1 samples from the softmax distribution. Formally, define ε as the bootstrap error – the gap in soft-error between Real and Ideal worlds at time t:\nTestSoftErrorD(ft) = TestSoftErrorD(f iidt ) + ε(n,D,F , t) (3) Our main experimental claim is that the bootstrap error ε is uniformly small in realistic settings.\nClaim 1 (Bootstrap Error Bound, informal) For choices of (n,D,F) corresponding to realistic settings in deep learning for supervised image classification, the bootstrap error ε(n,D,F , t) is small for all t ≤ T0. The “stopping time” T0 is defined as the time when the Real World reaches small training error (we use 1%) – that is, when Real World training has essentially converged.\nThe restriction on t ≤ T0 is necessary, since as t→∞ the Ideal World will continue to improve, but the Real World will at some point essentially stop changing (when train error ≈ 0). However, we claim that these worlds are close for “as long as we can hope”— as long as the Real World optimizer is still moving significantly.\nError vs. Soft-Error. We chose to measure soft-error instead of hard-error in our framework for both empirical and theoretically-motivated reasons. Empirically, we found that the bootstrap gap is often smaller with respect to soft-errors. Theoretically, we want to define the bootstrap gap such that it converges to 0 as data and model size are scaled to infinity. Specifically, if we consider an overparameterized scaling limit where the Real World models always interpolate the train data, then Distributional Generalization (Nakkiran and Bansal, 2020) implies that the bootstrap gap for test error will not converge to 0 on distributions with non-zero Bayes risk. 
Roughly, this is because the Ideal World classifier will converge to the Bayes optimal one (argmaxy p(y|x)), while the Real World interpolating classifier will converge to a sampler from p(y|x). Considering soft-errors instead of errors nullifies this issue. We elaborate further on the differences between the worlds in Section 6. See also Appendix C for relations to the nonparametric bootstrap (Efron, 1979)." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "Our bootstrap framework could apply to any setting where an iterative optimizer for online learning is applied in an offline setting. In this work we primarily consider stochastic gradient descent (SGD) applied to deep neural networks for image classification. This setting is well-developed both in practice and in theory, and thus serves as an appropriate first step to vet theories of generalization, as done in many recent works (e.g. Jiang et al. (2019); Neyshabur et al. (2018); Zhang et al. (2017); Arora et al. (2019)). Our work does not depend on overparameterization— it holds for both under and over parameterized networks, though it is perhaps most interesting in the overparameterized setting. We now describe our datasets and experimental methodology." }, { "heading": "3.1 DATASETS", "text": "Measuring the bootstrap error in realistic settings presents some challenges, since we do not have enough samples to instantiate the Ideal World. For example, for a Real World CIFAR-10 network trained on 50K samples for 100 epochs, the corresponding Ideal World training would require 5 million samples (fresh samples in each epoch). Since we do not have 5 million samples for CIFAR10, we use the following datasets as proxies. More details, including sample images, in Appendix E.\nCIFAR-5m. We construct a dataset of 6 million synthetic CIFAR-10-like images, by sampling from the CIFAR-10 Denoising Diffusion generative model of Ho et al. (2020), and labeling the unconditional samples by a 98.5% accurate Big-Transfer model (Kolesnikov et al., 2019). These are synthetic images, but close to CIFAR-10 for research purposes. For example, a WideResNet28-10 trained on 50K samples from CIFAR-5m reaches 91.2% test accuracy on CIFAR-10 test set. We use 5 million images for training, and reserve the rest for the test set. We plan to release this dataset.\nImageNet-DogBird. To test our framework in more complex settings, with real images, we construct a distribution based on ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Recall that we need a setting with a large number of samples relative to the difficulty of the task: if the Real World performs well with few samples and few epochs, then we can simulate it in the Ideal World. Thus, we construct a simpler binary classification task out of ImageNet by collapsing classes into the superclasses “hunting dog” and “bird.” This is a roughly balanced task with 155K total images." }, { "heading": "3.2 METHODOLOGY", "text": "For experiments on CIFAR-5m, we exactly simulate the Real and Ideal worlds as described in Section 2. That is, for every Real World architecture and optimizer we consider, we construct the corresponding Ideal World by executing the exact same training code, but using fresh samples in each epoch. The rest of the training procedure remains identical, including data-augmentation and learning-rate schedule. For experiments on ImageNet-DogBird, we do not have enough samples to exactly simulate the Ideal World. 
Instead, we approximate the Ideal World by using the full training set (N = 155K) and data-augmentation. Formally, this corresponds to approximating TrainF (D,∞, t) by TrainF (D, 155K, t). In practice, we train the Real World on n = 10K samples for 120 epochs, so we can approximate this with less than 8 epochs on the full 155K train set. Since we train with data augmentation (crop+resize+flip), each of the 8 repetitions of each sample will undergo different random augmentations, and thus this plausibly approximates fresh samples.\nStopping time. We stop both Real and Ideal World training when the Real World reaches a small value of train error (which we set as 1% in all experiments). This stopping condition is necessary, as described in Section 2. Thus, for experiments which report test performance “at the end of training”, this refers to either when the target number of epochs is reached, or when Real World training has converged (< 1% train error). We always compare Real and Ideal Worlds after the exact same number of train iterations." }, { "heading": "4 MAIN EXPERIMENTS", "text": "We now give evidence to support our main experimental claim, that the bootstrap error ε is often small for realistic settings in deep learning for image classification. In all experiments in this section, we instantiate the same model and training procedure in the Real and Ideal Worlds, and observe that the test soft-error is close at the end of training. Full experimental details are in Appendix D.2.\nCIFAR-5m. In Figure 2a we consider a variety of standard architectures on CIFAR-5m, from fullyconnected nets to modern convnets. In the Real World, we train these architectures with SGD on n = 50K samples from CIFAR-5m, for 100 total epochs, with varying initial learning rates. We then construct the corresponding Ideal Worlds for each architecture and learning rate, trained in the same way with fresh samples each epoch. Figure 2a shows the test soft-error of the trained classifiers in the Real and Ideal Worlds at the end of training. Observe that test performance is very close in Real and Ideal worlds, although the Ideal World sees 100× unique samples during training. To test our framework for more diverse architectures, we also sample 500 random architectures from the DARTS search space (Liu et al., 2019). These are deep convnets of varying width and depth, which range in size from 70k to 5.5 million parameters. Figure 2b shows the Real and Ideal World test performance at the end of training— these are often within 3%.\nImageNet: DogBird. We now test various ImageNet architectures on ImageNet-DogBird. The Real World models are trained with SGD on n = 10K samples with standard ImageNet data augmentation. We approximate the Ideal World by training on 155K samples as described in Section 3.2. Figure 3a plots the Real vs. Ideal World test error at the end of training, for various architectures. Figure 3b shows this for ResNet-18s of varying widths." }, { "heading": "5 DEEP PHENOMENA THROUGH THE BOOTSTRAP LENS", "text": "Here we show that our Deep Bootstrap framework can be insightful to study phenomena and design choices in deep learning. For example, many effects in the Real World can be seen through their corresponding effects in the Ideal World. Full details for experiments provided in Appendix D.\nModel Selection in the Over- and Under-parameterized Regimes. 
Much of theoretical work in deep learning focuses on overparameterized networks, which are large enough to fit their train sets.\nHowever, in modern practice, state-of-the-art networks can be either over or under-parameterized, depending on the scale of data. For example, SOTA models on 300 million JFT images or 1 billion Instagram images are underfitting, due to the massive size of the train set (Sun et al., 2017; Mahajan et al., 2018). In NLP, modern models such as GPT-3 and T5 are trained on massive internet-text datasets, and so are solidly in the underparameterized regime (Kaplan et al., 2020; Brown et al., 2020; Raffel et al., 2019). We highlight one surprising aspect of this situation:\nThe same techniques (architectures and training methods) are used in practice in both over- and under-parameterized regimes.\nFor example, ResNet-101 is competitive both on 1 billion images of Instagram (when it is underparameterized) and on 50k images of CIFAR-10 (when it is overparameterized). This observation was made recently in Bornschein et al. (2020) for overparameterized architectures, and is also consistent with the conclusions of Rosenfeld et al. (2019). It is apriori surprising that the same architectures do well in both over and underparameterized regimes, since there are very different considerations in each regime. In the overparameterized regime, architecture matters for generalization reasons: there are many ways to fit the train set, and some architectures lead SGD to minima that generalize better. In the underparameterized regime, architecture matters for purely optimization reasons: all models will have small generalization gap with 1 billion+ samples, but we seek models which are capable of reaching low values of test loss, and which do so quickly (with few optimization steps). Thus, it should be surprising that in practice, we use similar architectures in both regimes.\nOur work suggests that these phenomena are closely related: If the boostrap error is small, then we should expect that architectures which optimize well in the infinite-data (underparameterized) regime also generalize well in the finite-data (overparameterized) regime. This unifies the two apriori different principles guiding model-selection in over and under-parameterized regimes, and helps understand why the same architectures are used in both regimes.\nImplicit Bias via Explicit Optimization. Much recent theoretical work has focused on the implicit bias of gradient descent (e.g. Neyshabur et al. (2015a); Soudry et al. (2018); Gunasekar et al. (2018b;a); Ji and Telgarsky (2019); Chizat and Bach (2020)). For overparameterized networks, there are many minima of the empirical loss, some which have low test error and others which have high test error. Thus studying why interpolating networks generalize amounts to studying why SGD is “biased” towards finding empirical minima with low population loss. Our framework suggests an alternate perspective: instead of directly trying to characterize which empirical minima SGD reaches, it may be sufficient to study why SGD optimizes quickly on the population loss. That is, instead of studying implicit bias of optimization on the empirical loss, we could study explicit properties of optimization on the population loss.\nThe following experiment highlights this approach. Consider the D-CONV and D-FC architectures introduced recently by Neyshabur (2020). 
D-CONV is a deep convolutional network and D-FC is its fully-connected counterpart: an MLP which subsumes the convnet in expressive capacity. That is, D-FC is capable of representing all functions that D-CONV can represent, since it replaces all conv layers with fully-connected layers and unties all the weights. Both networks reach close to 0 train\nerror on 50K samples from CIFAR-5m, but the convnet generalizes much better. The traditional explanation for this is that the “implicit bias” of SGD biases the convnet to a better-generalizing minima than the MLP. We show that, in fact, this generalization is captured by the fact that DCONV optimizes much faster on the population loss than D-FC. Figure 5c shows the test and train errors of both networks when trained on 50K samples from CIFAR-5m, in the Real and Ideal Worlds. Observe that the Real and Ideal world test performances are nearly identical.\nSample Size. In Figure 4, we consider the effect of varying the train set size in the Real World. Note that in this case, the Ideal World does not change. There are two effects of increasing n: (1) The stopping time extends— Real World training continues for longer before converging. And (2) the bootstrap error decreases. Of these, (1) is the dominant effect. Figure 4a illustrates this behavior in detail by considering a single model: ResNet-18 on CIFAR-5m. We plot the Ideal World behavior of ResNet-18, as well as different Real Worlds for varying n. All Real Worlds are stopped when they reach < 1% train error, as we do throughout this work. After this point their test performance is essentially flat (shown as faded lines). However, until this stopping point, all Real Worlds are roughly close to the Ideal World, becoming closer with larger n. These learning curves are representative of most architectures in our experiments. Figure 4b shows the same architectures of Figure 2a, trained on various sizes of train sets from CIFAR-5m. The Real and Ideal worlds may deviate from each other at small n, but become close for realistically large values of n.\nData Augmentation. Data augmentation in the Ideal World corresponds to randomly augmenting each fresh sample before training on it (as opposed to re-using the same sample for multiple augmentations). There are 3 potential effects of data augmentation in our framework: (1) it can affect the Ideal World optimization, (2) it can affect the bootstrap gap, and (3) it can affect the Real World stopping time (time until training converges). We find that the dominant factors are (1) and (3), though data augmentation does typically reduce the bootstrap gap as well. Figure 5a shows the effect of data augmentation on ResNet-18 for CIFAR-5m. In this case, data augmentation does not change the Ideal World much, but it extends the time until the Real World training converges. This view suggests that good data augmentations should (1) not hurt optimization in the Ideal World (i.e., not destroy true samples much), and (2) obstruct optimization in the Real World (so the Real World can improve for longer before converging). This is aligned with the “affinity” and “diversity” view of data augmentations in Gontijo-Lopes et al. (2020). See Appendix B.3 for more figures, including examples where data augmentation hurts the Ideal World.\nPretraining. Figure 5b shows the effect of pretraining for Image-GPT (Chen et al., 2020), a transformer pretrained for generative modeling on ImageNet. 
We fine-tune iGPT-S on 2K samples of CIFAR-10 (not CIFAR-5m, since we have enough samples in this case) and compare initializing from an early checkpoint vs. the final pretrained model. The fully-pretrained model generalizes better in the Real World, and also optimizes faster in the Ideal World. Additional experiments including ImageNet-pretrained Vision Transformers (Dosovitskiy et al., 2020) are in Appendix B.5.\nRandom Labels. Our approach of comparing Real and Ideal worlds also captures generalization in the random-label experiment of Zhang et al. (2017). Specifically, if we train on a distribution with purely random labels, both Real and Ideal world models will have trivial test error." }, { "heading": "6 DIFFERENCES BETWEEN THE WORLDS", "text": "In our framework, we only compare the test soft-error of models in the Real and Ideal worlds. We do not claim these models are close in all respects— in fact, this is not true. For example, Figure 6 shows the same ResNet-18s trained in the Introduction (Figure 1), measuring three different metrics in both worlds. Notably, the test loss diverges drastically between the Real and Ideal worlds, although the test soft-error (and to a lesser extent, test error) remains close. This is because training to convergence in the Real World will cause the network weights to grow unboundedly, and the softmax distribution to concentrate (on both train and test). In contrast, training in the Ideal World will generally not cause weights to diverge, and the softmax will remain diffuse. This phenomenon also means that the Error and Soft-Error are close in the Real World, but can be slightly different in the Ideal World, which is consistent with our experiments." }, { "heading": "7 CONCLUSION AND DISCUSSION", "text": "We propose the Deep Bootstrap framework for understanding generalization in deep learning. Our approach is to compare the Real World, where optimizers take steps on the empirical loss, to an Ideal World, where optimizers have infinite data and take steps on the population loss. We find that in modern settings, the test performance of models is close between these worlds. This establishes a new connection between the fields of generalization and online learning: models which learn quickly (online) also generalize well (offline). Our framework thus provides a new lens on deep phenomena, and lays a promising route towards theoretically understanding generalization in deep learning.\nLimitations. Our work takes first steps towards characterizing the bootstrap error ε, but fully understanding this, including its dependence on problem parameters (n,D,F , t), is an important area for future study. The bootstrap error is not universally small for all models and learning tasks: for example, we found the gap was larger at limited sample sizes and without data augmentation. Moreover, it can be large in simple settings like linear regression (Appendix A), or settings when the Real World test error is non-monotonic (e.g. due to epoch double-decent (Nakkiran et al., 2020)). Nevertheless, the gap appears to be small in realistic deep learning settings, and we hope that future work can help understand why." }, { "heading": "ACKNOWLEDGEMENTS", "text": "Work completed in part while PN was interning at Google. PN also supported in part by a Google PhD Fellowship, the Simons Investigator Awards of Boaz Barak and Madhu Sudan, and NSF Awards under grants CCF 1565264, CCF 1715187 and IIS 1409097." 
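To make the Real World / Ideal World coupling of Section 2 concrete, the sketch below trains the same architecture with the same optimizer twice, differing only in whether minibatches are redrawn from a fixed n-sample train set or sampled fresh from the population. `model_fn` and `sample_from_population` are placeholder helpers, and the plain SGD loop is a simplification of the training setups used in the experiments, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def train(model, get_batch, steps, lr=0.1):
    """Run `steps` minibatch SGD updates, pulling each batch from `get_batch()`."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        x, y = get_batch()
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def real_vs_ideal(model_fn, sample_from_population, n=50_000, batch=128, steps=10_000):
    X, Y = sample_from_population(n)          # Real World: fixed train set, samples reused
    def real_batch():
        idx = torch.randint(0, n, (batch,))
        return X[idx], Y[idx]
    def ideal_batch():                        # Ideal World: fresh samples at every step
        return sample_from_population(batch)
    torch.manual_seed(0)
    f_real = train(model_fn(), real_batch, steps)
    torch.manual_seed(0)
    f_ideal = train(model_fn(), ideal_batch, steps)
    return f_real, f_ideal   # compare their test soft-error at the same step count
```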
}, { "heading": "A TOY EXAMPLE", "text": "Here we present a theoretically-inspired toy example, giving a simple setting where the bootstrap gap is small, but the generalization gap is large. We also give an analogous example where the bootstrap error is large. The purpose of these examples is (1) to present a simple setting where the bootstrap framework can be more useful than studying the generalization gap. And (2) to illustrate that the bootstrap gap is not always small, and can be large in certain standard settings.\nWe consider the following setup. Let us pass to a regression setting, where we have a distribution over (x, y) ∈ Rd × R, and we care about mean-square-error instead of classification error. That is, for a model f , we have TestMSE(f) := Ex,y[(f(x) − y)2]. Both our examples are from the following class of distributions in dimension d = 1000.\nx ∼ N (0, V ) y := σ(〈β∗, x〉)\nwhere β∗ ∈ Rd is the ground-truth, and σ is a pointwise activation function. The model family is linear, fβ(x) := 〈β, x〉 We draw n samples from the distribution, and train the model fβ using full-batch gradient descent on the empirical loss:\nTrainMSE(fβ) := 1\nn ∑ i (f(xi)− yi)2 = 1 n ||Xβ − y||2\nWe chose β∗ = e1, and covariance V to be diagonal with 10 eigenvalues of 1 and the remaining eigenvalues of 0.1. That is, x is essentially 10-dimensional, with the remaining coordinates “noise.”\nThe two distributions are instances of the above setting for different choices of parameters.\n• Setting A. Linear activation σ(x) = x. With n = 20 train samples. • Setting B. Sign activation σ(x) = sgn(x). With n = 100 train samples.\nSetting A is a standard well-specified linear regression setting. Setting B is a misspecified regression setting. Figure 7 shows the Real and Ideal worlds in these settings, for gradient-descent on the empirical loss (with step-size η = 0.1). Observe that in the well-specified Setting A, the Ideal World performs much better than the Real World, and the bootstrap framework is not as useful. However, in the misspecified Setting B, the bootstrap gap remains small even as the generalization gap grows.\nThis toy example is contrived to help isolate factors important in more realistic settings. We have observed behavior similar to Setting B in other simple settings with real data, such as regression on MNIST/Fashion-MNIST, as well as in the more complex settings in the body of this paper." }, { "heading": "B ADDITIONAL FIGURES", "text": "B.1 INTRODUCTION EXPERIMENT\nFigure 8 shows the same experiment as Figure 1 in the Introduction, including the train error in the Real World. Notice that the bootstrap error remains small, even as the generalization gap (between train and test) grows." }, { "heading": "B.2 DARTS ARCHITECTURES", "text": "Figure 9 shows the Real vs Ideal world for trained random DARTS architectures." }, { "heading": "B.3 EFFECT OF DATA AUGMENTATION", "text": "Figure 10 shows the effect of data-augmentation in the Ideal World, for several selected architectures on CIFAR-5m. Recall that data augmentation in the Ideal World corresponds to randomly augmenting each fresh sample once, as opposed to augmenting the same sample multiple times. We train with SGD using the same hyperparameters as the main experiments (described in Appendix D.2). We use standard CIFAR-10 data augmentation: random crop and random horizontal flip.\nThe test performance without augmentation is shown as solid lines, and with augmentation as dashed lines. 
Note that VGG and ResNet do not behave differently with augmentation, but augmentation significantly hurts AlexNet and the MLP. This may be because VGG and ResNet have global spatial pooling, which makes them (partially) shift-invariant, and thus more amenable to the random cropping. In contrast, augmentation hurts the architectures without global pooling, perhaps because for these architectures, augmented samples appear more out-of-distribution.\nFigure 11a shows the same architectures and setting as Figure 2 but trained without data augmentation. That is, we train on 50K samples from CIFAR-5m, using SGD with cosine decay and initial learning rate {0.1, 0.01, 0.001}. Figure 11b shows learning curves with and without data augmentation of a ResNet-18 on n = 10k samples. This is the analogous setting of Figure 5a in the body, which is for n = 50k samples." }, { "heading": "B.4 ADAM", "text": "Figure 12 shows several experiments with the Adam optimizer (Kingma and Ba, 2014) in place of SGD. We train all architectures on 50K samples from CIFAR-5m, with data-augmentation, batchsize 128, using Adam with default parameters (lr=0.001, β1 = 0.9, β2 = 0.999)." }, { "heading": "B.5 PRETRAINING", "text": "" }, { "heading": "B.5.1 PRETRAINED MLP", "text": "Figure 14 shows the effect of pretraining for an MLP (3x2048) on CIFAR-5m, by comparing training from scratch (random initialization) to training from an ImageNet-pretrained initialization. The pretrained MLP generalizes better in the Real World, and also optimizes faster in the Ideal World. We fine tune on 50K samples from CIFAR-5m, with no data-augmentation.\nFor ImageNet-pretraining, we train the MLP[3x2048] on full ImageNet (224px, 1000 classes), using Adam with default settings, and batchsize 1024. We use standard ImageNet data augmentation (random resized crop + horizontal flip) and train for 500 epochs. This MLP achieves test accuracy 21% and train accuracy 30% on ImageNet. For fine-tuning, we adapt the network to 32px input size by resizing the first layer filters from 224x224 to 32x32 via bilinear interpolation. We then replace the classification layer, and fine-tune the entire network on CIFAR-5m." }, { "heading": "B.5.2 PRETRAINED VISION TRANSFORMER", "text": "Figure 13 shows the effect of pretraining for Vision Transformer (ViT-B/4). We compare ViT-B/4 trained from scratch to training from an ImageNet-pretrained initialization. The color of line in Figure 13 indicates the pretraining strategy, and the weight of the line indicates the measurment (Ideal World test error, Real World test error, or Real World train error). We fine tune on 50K samples from CIFAR-5m, with standard CIFAR-10 data augmentation. Notice that pretrained ViT generalizes better in the Real World, and also optimizes correspondingly faster in the Ideal World.\nBoth ViT models are fine-tuned using SGD identical to Figure 1 in the Introduction, as described in Section D.1. For ImageNet pretraining, we train on ImageNet resized to 32×32, after standard data augmentation. We pretrain for 30 epochs using Adam with batchsize 2048 and constant learning rate 1e-4. We then replace and zero-initialize the final layer in the MLP head, and fine-tune the full model for classification on CIFAR-5m. This pretraining process it not as extensive as in Dosovitskiy et al. (2020); we use it to demonstrate that our framework captures the effect of pretraining in various settings." 
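As a concrete illustration of the first-layer adaptation described in B.5.1 (a minimal sketch of our own, not the authors' code; the function name is ours), the ImageNet-pretrained MLP's first Linear layer, which expects 3x224x224 inputs, can be resized to 3x32x32 inputs by bilinear interpolation of its filters:

import torch
import torch.nn.functional as F

def resize_first_layer(w_224: torch.Tensor) -> torch.Tensor:
    # w_224: first-layer weight of shape (hidden_dim, 3*224*224).
    hidden_dim = w_224.shape[0]
    filters = w_224.view(hidden_dim, 3, 224, 224)    # interpret each row as a 3x224x224 filter
    filters = F.interpolate(filters, size=(32, 32), mode="bilinear", align_corners=False)
    return filters.reshape(hidden_dim, 3 * 32 * 32)  # back to a Linear weight for 32px inputs

# Usage: copy the resized weight into the fine-tuning model's first Linear layer,
# replace the classification layer, and fine-tune the whole network on CIFAR-5m.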
}, { "heading": "B.6 LEARNING RATE", "text": "Figure 2a shows that the Real and Ideal world remain close across varying initial learning rates. All of the figures in the body use a cosine decay learning rate schedule, but this is only for simplicity; we observed that the effect of various learning rate schedules are mirrored in Real and Ideal worlds. For example, Figure 15 shows a ResNet18 in the Real World trained with SGD for 50 epochs on CIFAR-5m, with a step-wise decay schedule (initial LR 0.1, dropping by factor 10 at 1/3 and 2/3 through training). Notice that the Ideal World error drops correspondingly with the Real World, suggesting that the LR drop has a similar effect on the population optimization as it does on the empirical optimization." }, { "heading": "B.7 DIFFERENCE BETWEEN WORLDS", "text": "Figure 16 shows test soft-error, test error, and test loss for the MLP from Figure 1." }, { "heading": "B.8 ERROR VS. SOFTERROR", "text": "Here we show the results of several of our experiments if we measure the bootstrap gap with respect to Test Error instead of SoftError. The bootstrap gap is often reasonably small even with respect to Error, though it is not as well behaved as SoftError.\nFigure 17 shows the same setting as Figure 2a in the body, but measuring Error instaed of SoftError." }, { "heading": "B.8.1 TRAINING WITH MSE", "text": "We can measure Test Error even for networks which do not naturally output a probability distribution. Here, we train various architectures on CIFAR-5m using the squared-loss (MSE) directly on logits, with no softmax layer. This follows the methodology in Hui and Belkin (2020). We train all Real-World models using SGD, batchsize 128, momentum 0.9, initial learning rate 0.002 with cosine decay for 100 epochs.\nFigure 18 shows the Test Error and Test Loss in the Real and Ideal Worlds. The bootstrap gap, with respect to test error, for MSE-trained networks is reasonably small – though there are deviations in the low error regime. Compare this to Figure 2a, which measures the SoftError for networks trained with cross-entropy." }, { "heading": "C BOOTSTRAP CONNECTION", "text": "Here we briefly describe the connection between our Deep Bootstrap framework and the nonparametric bootstrap of Efron (1979).\nFor an online learning procedure F and a sequence of labeled samples {xi}, let TrainF (x1, x2, . . . ) denote the function which optimizes on the samples x1, x2, . . . in sequence, and outputs the resulting model. (For example, the function which initializes a network of a certain architecture, and takes successive gradient steps on the sequence of samples, and outputs the resulting model).\nFor a given (n,D,F , t), define the function G : X t → R as follows. G takes as input t labeled samples {xi}, and outputs the Test Soft-Error (w.r.t D) of training on the sequence {xi}. That is,\nG(x1, x2, . . . xt) := TestSoftErrorD(TrainF (x1, x2, . . . , xt))\nNow, the Ideal World test error is simply G evaluated on iid samples xi ∼ D:\nIdeal World: TestSoftErrorD(f iidt ) = G({xi}) where xi ∼ D\nThe Real World, using a train set of size n < t, is equivalent1 to evaluatingG on t examples sampled with replacement from a train set of size n. This corresponds to training on the same sample multiple times, for t total train steps.\nReal World: TestSoftErrorD(ft) = G({x̃i}) where S ∼ Dn; x̃i ∼ S\nHere, the samples x̃i are drawn with replacement from the train set S. 
Thus, the Deep Bootstrap error ε = G({x̃i}) − G({xi}) measures the deviation of a certain function when it is evaluated on iid samples v.s. on samples-with-replacement, which is exactly the form of bootstrap error in applications of the nonparametric bootstrap (Efron, 1979; Efron and Tibshirani, 1986; 1994).\n1Technically we do not sample-with-replacement in the experiments, we simply reuse each sample a fixed number of times (once in each epoch). We describe it as sampling-with-replacement here to more clearly relate it to the nonparametric bootstrap." }, { "heading": "D APPENDIX: EXPERIMENTAL DETAILS", "text": "Technologies. All experiments run on NVIDIA V100 GPUs. We used PyTorch (Paszke et al., 2019), NumPy (Harris et al., 2020), Hugging Face transformers (Wolf et al., 2019), pandas (McKinney et al., 2010), W&B (Biewald, 2020), Matplotlib (Hunter, 2007), and Plotly (Inc., 2015).\nD.1 INTRODUCTION EXPERIMENT\nAll architectures in the Real World are trained with n = 50K samples from CIFAR-5m, using SGD on the cross-entropy loss, with cosine learning rate decay, for 100 epochs. We use standard CIFAR10 data augmentation of random crop+horizontal flip. All models use batch size 128, so they see the same number of samples at each point in training.\nThe ResNet is a preactivation ResNet18 (He et al., 2016b), the MLP has 5 hidden layers of width 2048, with pre-activation batch norm. The Vision Transformer uses the ViT-Base configuration from Dosovitskiy et al. (2020), with a patch size of 4× 4 (adapted for the smaller CIFAR-10 image size of 32 × 32). We use the implementation from https://github.com/lucidrains/ vit-pytorch. We train all architectures including ViT from scratch, with no pretraining. ResNets and MLP use initial learning rate 0.1 and momentum 0.9. ViT uses initial LR 0.01, momentum 0.9, and weight decay 1e-4. We did not optimize ViT hyperparameters as extensively as in Dosovitskiy et al. (2020); this experiment is only to demonstrate that our framework is meaningful for diverse architectures.\nFigure 1 plots the Test Soft-Error over the course of training, and the Train Soft-Error at the end of training. We plot median over 10 trials (with random sampling of the train set, random initialization, and random SGD order in each trial)." }, { "heading": "D.2 MAIN EXPERIMENTS", "text": "For CIFAR-5m we use the following architectures: AlexNet (Krizhevsky et al., 2012), VGG (Simonyan and Zisserman, 2015), Preactivation ResNets (He et al., 2016b), DenseNet (Huang et al., 2017). The Myrtle5 architecture is a 5-layer CNN introduced by (Page, 2018).\nIn the Real World, we train these architectures on n = 50K samples from CIFAR-5m using cross-entropy loss. All models are trained with SGD with batchsize 128, initial learning rate {0.1, 0.01, 0.001}, cosine learning rate decay, for 100 total epochs, with data augmentation: random horizontal flip and RandomCrop(32, padding=4). We plot median over 10 trials (with random sampling of the train set, random initialization, and random SGD order in each trial).\nDARTS Architectures. We sample architectures from the DARTS search space (Liu et al., 2019), as implemented in the codebase of Dong and Yang (2020). We follow the parameters used for CIFAR10 in Dong and Yang (2020), while also varying width and depth for added diversity. Specifically, we use 4 nodes, number of cells ∈ {1, 5}, and width ∈ {16, 64}. 
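For reference, the optimizer and schedule configuration used throughout D.2 can be summarized by the following sketch (assuming standard PyTorch; this is not the exact training script, and momentum/weight decay are not restated here):

import torch

def make_sgd_cosine(model, lr=0.1, epochs=100):
    # SGD with cosine learning-rate decay over `epochs` epochs; batch size (128) is set in the data loader.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler

# After each training epoch, call scheduler.step() to follow the cosine decay.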
We train all DARTS architectures with SGD, batchsize 128, initial learning rate 0.1, cosine learning rate decay, for 100 total epochs, with standard augmentation (random crop+flip).\nImageNet: DogBird All architectures for ImageNet-DogBird are trained with SGD, batchsize 128, learning rate 0.01, momentum 0.9, for 120 epochs, with standard ImageNet data augmentation (random resized crop to 224px, horizontal flip). We report medians over 10 trials for each architecture.\nWe additional include the ImageNet architectures: BagNet (Brendel and Bethge, 2019), MobileNet (Sandler et al., 2018), and ResNeXt (Xie et al., 2017). The architectures SCONV9 and SCONV33 refer to the S-CONV architectures defined by Neyshabur (2020), instantiated for ImageNet with base-width 48, image size 224, and kernel size {9, 33} respectively.\nD.3 IMPLICIT BIAS\nWe use the D-CONV architecture from (Neyshabur, 2020), with base width 32, and the corresponding D-FC architecture. PyTorch specification of these architectures are provided in Appendix F for convenience. We train both architectures with SGD, batchsize 128, initial learning rate 0.1, cosine\nlearning rate decay, for 100 total epochs, with random crop + horizontal flip data-augmentation. We plot median errors over 10 trials.\nD.4 IMAGE-GPT FINETUNING\nWe fine-tune iGPT-S, using the publicly available pretrained model checkpoints from Chen et al. (2020). The “Early” checkpoint in Figure 5b refers to checkpoint 131000, and the “Final” checkpoint is 1000000. Following Chen et al. (2020), we use Adam with (lr = 0.003, β1 = 0.9, β2 = 0.95), and batchsize 128. We do not use data augmentation. For simplicity, we differ slightly from Chen et al. (2020) in that we simply attach the classification head to the [average-pooled] last transformer layer, and we fine-tune using only classification loss and not the joint generative+classification loss used in Chen et al. (2020). Note that we fine-tune the entire model, not just the classification head." }, { "heading": "E APPENDIX: DATASETS", "text": "" }, { "heading": "E.1 CIFAR-5M", "text": "CIFAR-5m is a dataset of 6 million synthetic CIFAR-10-like images. We release this dataset publicly on Google Cloud Storage, as described in https://github.com/preetum/cifar5m.\nThe images are RGB 32 × 32px. We generate samples from the Denoising Diffusion generative model of Ho et al. (2020) trained on the CIFAR-10 train set (Krizhevsky, 2009). We use the publicly available trained model and sampling code provided by the authors at https: //github.com/hojonathanho/diffusion. We then label these unconditional samples by a 98.5% accurate Big-Transfer model (Kolesnikov et al., 2019). Specifically, we use the pretrained BiT-M-R152x2 model, fine-tuned on CIFAR-10 using the author-provided code at https: //github.com/google-research/big_transfer. We use 5 million images for training, and reserve the remaining images for the test set.\nThe distribution of CIFAR-5m is of course not identical to CIFAR-10, but is close for research purposes. For example, we show baselines of training a network on 50K samples of either dataset (CIFAR-5m, CIFAR-10), and testing on both datasets. Table 1 shows a ResNet18 trained with standard data-augmentation, and Table 2 shows a WideResNet28-10 (Zagoruyko and Komodakis, 2016) trained with cutout augmentation (DeVries and Taylor, 2017). Mean of 5 trials for all results. In particular, the WRN-28-10 trained on CIFAR-5m achieves 91.2% test accuracy on the original CIFAR-10 test set. 
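For reference, the generate-then-label construction of CIFAR-5m described above can be sketched as follows (our schematic; sample_ddpm_batch and bit_classifier are placeholders for the public DDPM sampler of Ho et al. (2020) and the CIFAR-10-fine-tuned BiT-M-R152x2 model, neither of which is reproduced here):

import torch

@torch.no_grad()
def generate_labeled_batch(sample_ddpm_batch, bit_classifier, batch_size=512):
    x = sample_ddpm_batch(batch_size)    # unconditional 32x32 RGB samples from the diffusion model
    y = bit_classifier(x).argmax(dim=1)  # hard pseudo-labels from the ~98.5%-accurate classifier
    return x.cpu(), y.cpu()

# Repeating this until roughly 6M (x, y) pairs are collected, and reserving everything
# beyond the first 5M for the test split, yields a CIFAR-5m-style dataset.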
We hope that as simulated 3D environments become more mature (e.g. Gan et al. (2020)), they will provide a source of realistic infinite datasets to use in such research.\nRandom samples from CIFAR-5m are shown in Figure 19. For comparison, we show random samples from CIFAR-10 in Figure 20.\nTrained On Test Error On CIFAR-10 CIFAR-5m\nCIFAR-10 0.032 0.091 CIFAR-5m 0.088 0.097\nTable 2: WRN28-10 + cutout on CIFAR-10/5m\nE.2 IMAGENET: DOGBIRD\nThe ImageNet-DogBird task is constructed by collapsing classes from ImageNet. The task is to distinguish dogs from birds. The dogs are all ImageNet classes under the WordNet synset “hunting dog” (including 63 ImageNet classes) and birds are all classes under synset “bird” (including 59 ImageNet classes). This is a relatively easy task compared to full ImageNet: A ResNet-18 trained on 10K samples from ImageNet-DogBird, with standard ImageNet data augmentation, can achieve test accuracy 95%. The listing of the ImageNet wnids included in each class is provided below. Hunting Dogs (n2087122): n02091831, n02097047, n02088364, n02094433, n02097658, n02089078, n02090622, n02095314, n02102040, n02097130, n02096051, n02098105, n02095889, n02100236, n02099267, n02102318, n02097474, n02090721, n02102973, n02095570, n02091635, n02099429, n02090379, n02094258, n02100583, n02092002, n02093428, n02098413, n02097298, n02093754, n02096177, n02091032, n02096437, n02087394, n02092339, n02099712, n02088632, n02093647, n02098286, n02096585, n02093991, n02100877, n02094114, n02101388, n02089973, n02088094, n02088466, n02093859, n02088238, n02102480, n02101556, n02089867, n02099601, n02102177, n02101006, n02091134, n02100735, n02099849, n02093256, n02097209, n02091467, n02091244, n02096294\nBirds (n1503061): n01855672, n01560419, n02009229, n01614925, n01530575, n01798484, n02007558, n01860187,\nn01820546, n01817953, n01833805, n02058221, n01806567, n01558993, n02056570, n01797886, n02018207, n01828970,\nn02017213, n02006656, n01608432, n01818515, n02018795, n01622779, n01582220, n02013706, n01534433, n02027492,\nn02012849, n02051845, n01824575, n01616318, n02002556, n01819313, n01806143, n02033041, n01601694, n01843383,\nn02025239, n02002724, n01843065, n01514859, n01796340, n01855032, n01580077, n01807496, n01847000, n01532829,\nn01537544, n01531178, n02037110, n01514668, n02028035, n01795545, n01592084, n01518878, n01829413, n02009912,\nn02011460" }, { "heading": "F D-CONV, D-FC ARCHITECTURE DETAILS", "text": "For convenience, we provide PyTorch specifications for the D-CONV and D-FC architectures from Neyshabur (2020) which we use in this work.\nD-CONV. This model has 6563498 parameters." 
}, { "heading": "Network(", "text": "(features): Sequential(\n(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1) , bias=False)\n(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True) (2): ReLU(inplace=True) (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)\n, bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (5): ReLU(inplace=True) (6): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)\n, bias=False) (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (8): ReLU(inplace=True) (9): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)\n, bias=False) (10): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (11): ReLU(inplace=True) (12): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)\n, bias=False) (13): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (14): ReLU(inplace=True) (15): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)\n, bias=False) (16): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (17): ReLU(inplace=True) (18): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)\n, bias=False) (19): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (20): ReLU(inplace=True) (21): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)\n, bias=False) (22): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (23): ReLU(inplace=True)\n) (classifier): Sequential(\n(0): Linear(in_features=2048, out_features=2048 , bias=False)\n(1): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True) (2): ReLU(inplace=True) (3): Dropout(p=0.5, inplace=False) (4): Linear(in_features=2048, out_features=10, bias=True)\n) )\nD-FC. This model has 1170419722 parameters." }, { "heading": "Network(", "text": "(features): Sequential(\n(0): Linear(in_features=3072, out_features=32768, bias=False) (1): BatchNorm1d(32768, eps=1e-05, momentum=0.1, affine=True (2): ReLU(inplace=True) (3): Linear(in_features=32768, out_features=16384, bias=False)\n(4): BatchNorm1d(16384, eps=1e-05, momentum=0.1, affine=True) (5): ReLU(inplace=True) (6): Linear(in_features=16384, out_features=16384, bias=False) (7): BatchNorm1d(16384, eps=1e-05, momentum=0.1, affine=True) (8): ReLU(inplace=True) (9): Linear(in_features=16384, out_features=8192, bias=False) (10): BatchNorm1d(8192, eps=1e-05, momentum=0.1, affine=True) (11): ReLU(inplace=True) (12): Linear(in_features=8192, out_features=8192, bias=False) (13): BatchNorm1d(8192, eps=1e-05, momentum=0.1, affine=True) (14): ReLU(inplace=True) (15): Linear(in_features=8192, out_features=4096, bias=False) (16): BatchNorm1d(4096, eps=1e-05, momentum=0.1, affine=True) (17): ReLU(inplace=True) (18): Linear(in_features=4096, out_features=4096, bias=False) (19): BatchNorm1d(4096, eps=1e-05, momentum=0.1, affine=True) (20): ReLU(inplace=True) (21): Linear(in_features=4096, out_features=2048, bias=False) (22): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True) (23): ReLU(inplace=True)\n) (classifier): Sequential(\n(0): Linear(in_features=2048, out_features=2048, bias=False) (1): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True) (2): ReLU(inplace=True) (3): Dropout(p=0.5, inplace=False) (4): Linear(in_features=2048, out_features=10, bias=True)\n) )" } ]
2021
null
SP:1b984693f1a64c86306aff37d58f9ff188bcf67e
[ "This paper presents a general Self-supervised Time Series representation learning framework. It explores the inter-sample relation reasoning and intra-temporal relation reasoning of time series to capture the underlying structure pattern of the unlabeled time series data. The proposed method achieves new state-of-the-art results and outperforms existing methods by a significant margin on multiple real-world time-series datasets for the classification tasks." ]
Self-supervised learning achieves superior performance in many domains by extracting useful representations from unlabeled data. However, most traditional self-supervised methods focus on exploring the inter-sample structure, while less effort has been devoted to the underlying intra-temporal structure, which is important for time series data. In this paper, we present SelfTime: a general Self-supervised Time series representation learning framework that explores the inter-sample relation and the intra-temporal relation of time series to learn the underlying structural features from unlabeled time series. Specifically, we first generate inter-sample relations by sampling positive and negative samples for a given anchor sample, and intra-temporal relations by sampling time pieces from this anchor. Then, based on the sampled relations, a shared feature extraction backbone combined with two separate relation reasoning heads is employed to quantify the relationships of sample pairs for inter-sample relation reasoning, and of time piece pairs for intra-temporal relation reasoning, respectively. Finally, useful representations of time series are extracted from the backbone under the supervision of the relation reasoning heads. Experimental results on multiple real-world time series datasets for the time series classification task demonstrate the effectiveness of the proposed method. Code and data are publicly available 1.
[]
[ { "authors": [ "Anthony Bagnall", "Jason Lines", "Jon Hills", "Aaron Bostrom" ], "title": "Time-series classification with cote: the collective of transformation-based ensembles", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2015 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Mustafa Gokce Baydogan", "George Runger", "Eugene Tuv" ], "title": "A bag-of-features framework to classify time series", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Donald J Berndt", "James Clifford" ], "title": "Using dynamic time warping to find patterns in time series", "venue": "In KDD workshop,", "year": 1994 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In Proceedings of the 37th international conference on machine learning (ICML),", "year": 2020 }, { "authors": [ "Ziqiang Cheng", "Yang Yang", "Wei Wang", "Wenjie Hu", "Yueting Zhuang", "Guojie Song" ], "title": "Time2graph: Revisiting time series modeling with dynamic shapelets", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Samarjit Das" ], "title": "Time series analysis, volume 10", "venue": "Princeton university press, Princeton,", "year": 1994 }, { "authors": [ "Hoang Anh Dau", "Eamonn Keogh", "Kaveh Kamgar", "Chin-Chia Michael Yeh", "Yan Zhu", "Shaghayegh Gharghabi", "Chotirat Ann Ratanamahatana", "Yanping", "Bing Hu", "Nurjahan Begum", "Anthony Bagnall", "Abdullah Mueen", "Gustavo Batista", "Hexagon-ML" ], "title": "The ucr time series classification", "venue": null, "year": 2018 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Vincent Fortuin", "Matthias Hüser", "Francesco Locatello", "Heiko Strathmann", "Gunnar Rätsch" ], "title": "Somvae: Interpretable discrete representation learning on time series", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jean-Yves Franceschi", "Aymeric Dieuleveut", "Martin Jaggi" ], "title": "Unsupervised scalable representation learning for multivariate time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Micah B Goldwater", "Hilary Don", "Moritz J F Krusche", "Evan J Livesey" ], "title": "Relational discovery in category learning", "venue": "Journal of Experimental Psychology: General,", "year": 2018 }, { "authors": [ "Tomasz Górecki", "Maciej Łuczak" ], "title": "Non-isometric transforms in time series classification using dtw", "venue": "Knowledge-Based Systems,", "year": 2014 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent 
neural networks", "venue": "IEEE international conference on acoustics, speech and signal processing,", "year": 2013 }, { "authors": [ "Jon Hills", "Jason Lines", "Edgaras Baranauskas", "James Mapp", "Anthony Bagnall" ], "title": "Classification of time series by shapelet transformation", "venue": "Data Mining and Knowledge Discovery,", "year": 2014 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Brian Kenji Iwana", "Seiichi Uchida" ], "title": "Time series data augmentation for neural networks by time warping with a discriminative teacher", "venue": "arXiv preprint arXiv:2004.08780,", "year": 2020 }, { "authors": [ "Shayan Jawed", "Josif Grabocka", "Lars Schmidt-Thieme" ], "title": "Self-supervised learning for semisupervised time series classification", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2020 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Myeongsu Kang", "Jaeyoung Kim", "Linda M Wills", "Jong-Myon Kim" ], "title": "Time-varying and multiresolution envelope analysis and discriminative feature analysis for bearing fault diagnosis", "venue": "IEEE Transactions on Industrial Electronics,", "year": 2015 }, { "authors": [ "Fazle Karim", "Somshubra Majumdar", "Houshang Darabi", "Shun Chen" ], "title": "Lstm fully convolutional networks for time series classification", "venue": "IEEE access,", "year": 2017 }, { "authors": [ "Charles Kemp", "Joshua B Tenenbaum" ], "title": "The discovery of structural form", "venue": "Proceedings of the National Academy of Sciences,", "year": 2008 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3th International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Arthur Le Guennec", "Simon Malinowski", "Romain Tavenard" ], "title": "Data Augmentation for Time Series Classification using Convolutional Neural Networks", "venue": "In ECML/PKDD Workshop on Advanced Analytics and Learning on Temporal Data,", "year": 2016 }, { "authors": [ "Jason Lines", "Anthony Bagnall" ], "title": "Time series classification with ensembles of elastic distance measures", "venue": "Data Mining and Knowledge Discovery,", "year": 2015 }, { "authors": [ "Qianli Ma", "Wanqing Zhuang", "Garrison Cottrell" ], "title": "Triple-shapelet networks for time series classification", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine 
learning research,", "year": 2008 }, { "authors": [ "Ishan Misra", "C Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: unsupervised learning using temporal order verification", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Boris N Oreshkin", "Dmitri Carpov", "Nicolas Chapados", "Yoshua Bengio" ], "title": "N-beats: Neural basis expansion analysis for interpretable time series forecasting", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Santiago Pascual", "Mirco Ravanelli", "Joan Serrà", "Antonio Bonafonte", "Yoshua Bengio" ], "title": "Learning problem-agnostic speech representations from multiple self-supervised tasks", "venue": "In Proc. of the Conf. of the Int. Speech Communication Association (INTERSPEECH),", "year": 2019 }, { "authors": [ "Santiago Pascual", "Mirco Ravanelli", "Joan Serrà", "Antonio Bonafonte", "Yoshua Bengio" ], "title": "Learning Problem-Agnostic Speech Representations from Multiple Self-Supervised Tasks", "venue": "In Proc. of the Conf. of the Int. Speech Communication Association (INTERSPEECH),", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in neural information processing systems, NeurIPS,", "year": 2019 }, { "authors": [ "Massimiliano Patacchiola", "Amos Storkey" ], "title": "Self-supervised relational reasoning for representation learning", "venue": "arXiv preprint arXiv:2006.05849,", "year": 2020 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Mirco Ravanelli", "Jianyuan Zhong", "Santiago Pascual", "Pawel Swietojanski", "Joao Monteiro", "Jan Trmal", "Yoshua Bengio" ], "title": "Multi-task self-supervised learning for robust speech recognition", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Aaqib Saeed", "Tanir Ozcelebi", "Johan Lukkien" ], "title": "Multi-task self-supervised learning for human activity detection", "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,", "year": 2019 }, { "authors": [ "Aaqib Saeed", "Flora D Salim", "Tanir Ozcelebi", "Johan Lukkien" ], "title": "Federated self-supervised learning of multi-sensor representations for embedded intelligence", "venue": "IEEE Internet of Things Journal,", "year": 2020 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Pritam Sarkar", "Ali Etemad" ], "title": "Self-supervised learning for ecg-based emotion recognition", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing 
(ICASSP),", "year": 2020 }, { "authors": [ "Patrick Schäfer" ], "title": "The boss is concerned with time series classification in the presence of noise", "venue": "Data Mining and Knowledge Discovery,", "year": 2015 }, { "authors": [ "Steffen Schneider", "Alexei Baevski", "Ronan Collobert", "Michael Auli" ], "title": "wav2vec: Unsupervised pre-training for speech recognition", "venue": "In Proc. of the Conf. of the Int. Speech Communication Association (INTERSPEECH),", "year": 2019 }, { "authors": [ "Rajat Sen", "Hsiang-Fu Yu", "Inderjit S Dhillon" ], "title": "Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting", "venue": "In Advances in Neural Information Processing Systems, NeurIPS,", "year": 2019 }, { "authors": [ "Satya Narayan Shukla", "Benjamin M Marlin" ], "title": "Interpolation-prediction networks for irregularly sampled time series", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Richard Socher", "Danqi Chen", "Christopher D Manning", "Andrew Ng" ], "title": "Reasoning with neural tensor networks for knowledge base completion", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "ABA Stevner", "Diego Vidaurre", "Joana Cabral", "K Rapuano", "Søren Føns Vind Nielsen", "Enzo Tagliazucchi", "Helmut Laufs", "Peter Vuust", "Gustavo Deco", "Mark W Woolrich" ], "title": "Discovery of key whole-brain transitions and dynamics during human wakefulness and non-rem sleep", "venue": "Nature communications,", "year": 2019 }, { "authors": [ "Terry T Um", "Franz MJ Pfister", "Daniel Pichler", "Satoshi Endo", "Muriel Lang", "Sandra Hirche", "Urban Fietzek", "Dana" ], "title": "Kulić. Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks", "venue": "In Proceedings of the 19th ACM International Conference on Multimodal Interaction,", "year": 2017 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Yun-Hui Liu" ], "title": "Self-supervised video representation learning by pace prediction", "venue": "In European conference on computer vision,", "year": 2020 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Unsupervised learning of visual representations using videos", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Donglai Wei", "Joseph J Lim", "Andrew Zisserman", "William T Freeman" ], "title": "Learning and using the arrow of time", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Lexiang Ye", "Eamonn Keogh" ], "title": "Time series shapelets: a new primitive for data mining", "venue": "In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2009 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart" ], "title": "Deep reinforcement learning with relational inductive biases", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhibin Zhao", "Tianfu Li", "Jingyao Wu", "Chuang Sun", "Shibin Wang", "Ruqiang Yan", "Xuefeng Chen" ], "title": "Deep learning algorithms for rotating machinery intelligent diagnosis: An open source benchmark study", "venue": null, "year": 2003 }, { "authors": [ "Bolei 
Zhou", "Alex Andonian", "Aude Oliva", "Antonio Torralba" ], "title": "Temporal relational reasoning in videos", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Time series data is ubiquitous and there has been significant progress for time series analysis (Das, 1994) in machine learning, signal processing, and other related areas, with many real-world applications such as healthcare (Stevner et al., 2019), industrial diagnosis (Kang et al., 2015), and financial forecasting (Sen et al., 2019).\nDeep learning models have emerged as successful models for time series analysis (Hochreiter & Schmidhuber, 1997; Graves et al., 2013; Shukla & Marlin, 2019; Fortuin et al., 2019; Oreshkin et al., 2020). Despite their fair share of success, the existing deep supervised models are not suitable for high-dimensional time series data with a limited amount of training samples as those data-driven approaches rely on finding ground truth for supervision, where data labeling is a labor-intensive and time-consuming process, and sometimes impossible for time series data. One solution is to learn useful representations from unlabeled data, which can substantially reduce dependence on costly manual annotation.\nSelf-supervised learning aims to capture the most informative properties from the underlying structure of unlabeled data through the self-generated supervisory signal to learn generalized representations. Recently, self-supervised learning has attracted more and more attention in computer vision by designing different pretext tasks on image data such as solving jigsaw puzzles (Noroozi & Favaro, 2016), inpainting (Pathak et al., 2016), rotation prediction(Gidaris et al., 2018), and contrastive learning of visual representations(Chen et al., 2020), and on video data such as object tracking (Wang & Gupta, 2015), and pace prediction (Wang et al., 2020). Although some video-based ap-\n1Anonymous repository link.\nproaches attempt to capture temporal information in the designed pretext task, time series is far different structural data compared with video. More recently, in the time series analysis domain, some metric learning based self-supervised methods such as triplet loss (Franceschi et al., 2019) and contrastive loss (Schneider et al., 2019; Saeed et al., 2020), or multi-task learning based self-supervised methods that predict different handcrafted features (Pascual et al., 2019a; Ravanelli et al., 2020) and different signal transformations (Saeed et al., 2019; Sarkar & Etemad, 2020) have emerged. However, few of those works consider the intra-temporal structure of time series. Therefore, how to design an efficient pretext task in a self-supervised manner for time series representation learning is still an open problem.\nIn this work, we present SelfTime: a general self-supervised time series representation learning framework. Inspired by relational discovery during self-supervised human learning, which attempts to discover new knowledge by reasoning the relation among entities (Goldwater et al., 2018; Patacchiola & Storkey, 2020), we explore the inter-sample relation reasoning and intra-temporal relation reasoning of time series to capture the underlying structure pattern of the unlabeled time series data. Specifically, as shown in Figure 1, for inter-sample relation reasoning, given an anchor sample, we generate from its transformation counterpart and another individual sample as the positive and negative samples respectively. 
For intra-temporal relation reasoning, we firstly generate an anchor piece, then, several reference pieces are sampled to construct different scales of temporal relation between the anchor piece and the reference piece, where relation scales are determined based on the temporal distance. Note that in Figure 1, we only show an example of 3-scale temporal relations including short-term, middle-term, and long-term relation for an illustration, whereas in different scenarios, there could be different temporal relation scale candidates. Based on the sampled relation, a shared feature extraction backbone combined with two separate relation reasoning heads are employed to quantify the relationships between the sample pairs or the time piece pairs for inter-sample relation reasoning or intra-temporal relation reasoning, respectively. Finally, the useful representations of time series are extracted from the backbone under the supervision of relation reasoning heads on the unlabeled data. Overall, SelfTime is simple yet effective by conducting the designed pretext tasks directly on the original input signals.\nOur main contributions are three-fold: (1) we present a general self-supervised time series representation learning framework by investigating different levels of relations of time series data including inter-sample relation and intra-temporal relation. (2) We design a simple and effective intra-temporal relation sampling strategy to capture the underlying temporal patterns of time series. (3) We conduct extensive experiments on different categories of real-world time series data, and systematically study the impact of different data augmentation strategies and temporal relation sampling strategies on self-supervised learning of time series. By comparing with multiple state-of-the-art baselines, experimental results show that SelfTime builds new state-of-the-art on self-supervised time series representation learning." }, { "heading": "2 RELATED WORK", "text": "Time Series Modeling. In the last decades, time series modeling has been paid close attention with numerous efficient methods, including distance-based methods, feature-based methods, ensemblebased methods, and deep learning based methods. Distance-based methods (Berndt & Clifford,\n1994; Górecki & Łuczak, 2014) try to measure the similarity between time series using Euclidean distance or Dynamic Time Warping distance, and then conduct classification based on 1-NN classifiers. Feature-based methods aim to extract useful feature for time series representation. Two typical types including bag-of-feature based methods (Baydogan et al., 2013; Schäfer, 2015) and shapelet based methods (Ye & Keogh, 2009; Hills et al., 2014). Ensemble-based methods (Lines & Bagnall, 2015; Bagnall et al., 2015) aims at combining multiple classifiers for higher classification performance. More recently, deep learning based methods (Karim et al., 2017; Ma et al., 2019; Cheng et al., 2020) conduct classification by cascading the feature extractor and classifier based on MLP, RNN, and CNN in an end-to-end manner. Our approach focuses instead on self-supervised representation learning of time series on unlabeled data, exploiting inter-sample relation and intra-temporal relation of time series to guide the generation of useful feature.\nRelational Reasoning. Reasoning the relations between entities and their properties makes significant sense to generally intelligent behavior (Kemp & Tenenbaum, 2008). 
In the past decades, there has been an extensive researches about relational reasoning and its applications including knowledge base (Socher et al., 2013), question answering (Johnson et al., 2017; Santoro et al., 2017), video action recognition (Zhou et al., 2018), reinforcement learning (Zambaldi et al., 2019), and graph representation (Battaglia et al., 2018), which perform relational reasoning directly on the constructed sets or graphs that explicitly represent the target entities and their relations. Different from those previous works that attempt to learn a relation reasoning head for a special task, inter-sample relation reasoning based on unlabeled image data is employed in (Patacchiola & Storkey, 2020) to learn useful visual representation in the underlying backbone. Inspired by this, in our work, we focus on time series data by exploring both inter-sample and intra-temporal relation for time series representation in a self-supervised scenario.\nSelf-supervised Learning. Self-supervised learning has attracted lots of attention recently in different domains including computer vision, audio/speech processing, and time series analysis. For image data, the pretext tasks including solving jigsaw puzzles (Noroozi & Favaro, 2016), rotation prediction (Gidaris et al., 2018), and visual contrastive learning (Chen et al., 2020) are designed for self-supervised visual representation. For video data, the pretext tasks such as frame order validation (Misra et al., 2016; Wei et al., 2018), and video pace prediction (Wang et al., 2020) are designed which considering additional temporal signal of video. Different from video signal that includes plenty of raw feature in both spatial and temporal dimension, time series is far different structural data with less raw features at each time point. For time series data such as audio and ECG, the metric learning based methods such as triplet loss (Franceschi et al., 2019) and contrastive loss (Schneider et al., 2019; Saeed et al., 2020), or multi-task learning based methods that predict different handcrafted features such as MFCCs, prosody, and waveform (Pascual et al., 2019a; Ravanelli et al., 2020), and different transformations of raw signal (Sarkar & Etemad, 2020; Saeed et al., 2019) have emerged recently. However, few of those works consider the intra-temporal structure of time series. Therefore, how to design an efficient self-supervised pretext task to capture the underlying structure of time series is still an open problem." }, { "heading": "3 METHOD", "text": "Given an unlabeled time series set T = {tn}Nn=1, where each time series tn = (tn,1, ...tn,T )T contains T ordered real values. We aim to learn a useful representation zn = fθ(tn) from the backbone encoder fθ(·) where θ is the learnable weights of the neural networks. The architecture of the proposed SelfTime is shown in Figure 2, which consists of an inter-sample relational reasoning branch and an intra-temporal relational reasoning branch. Firstly, taking the original time series signals and their sampled time pieces as the inputs, a shared backbone encoder fθ(·) extracts time series feature and time piece feature to aggregate the inter-sample relation feature and intra-temporal relation feature respectively, and then feeds them to two separate relation reasoning heads rµ(·) and rϕ(·) to reason the final relation score of inter-sample relation and intra-temporal relation." 
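As a reference point for the components introduced above, the following sketch instantiates the shared backbone fθ and the two reasoning heads rµ and rϕ along the lines of the implementation details in Section 4.1 (a 4-layer 1D convolutional backbone with batch normalization and ReLU, and 2-layer fully-connected heads with 256 hidden units). This is our own simplified sketch: channel widths, kernel sizes, and the pooling layer are illustrative assumptions rather than the exact architecture.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                         nn.BatchNorm1d(c_out), nn.ReLU())

class Backbone(nn.Module):                       # f_theta: maps a series (or time piece) to a feature vector
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 8), conv_block(8, 16),
                                 conv_block(16, 32), conv_block(32, feat_dim),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten())

    def forward(self, x):                        # x: (batch, 1, length)
        return self.net(x)                       # z: (batch, feat_dim)

def reasoning_head(feat_dim, out_dim):           # r_mu (out_dim = 1) or r_phi (out_dim = C)
    return nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

backbone = Backbone()
inter_head = reasoning_head(64, out_dim=1)       # relation score for a concatenated sample pair
intra_head = reasoning_head(64, out_dim=5)       # C-way score for a concatenated piece pair (C is data-dependent)
# e.g., a pair score is inter_head(torch.cat([z_i, z_j], dim=1)) for representations z_i, z_j.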
}, { "heading": "3.1 INTER-SAMPLE RELATION REASONING", "text": "Formally, given any two different time series samples tm and tn from T , we randomly generate two sets of K augmentationsA(tm) = {t(i)m }Ki=1 andA(tn) = {t (i) n }Ki=1, where t (i) m and t (i) n are the i-th\naugmentations of tm and tn respectively. Then, we construct two types of relation pairs: positive relation pairs and negative relation pairs. A positive relation pair is (t(i)m , t (j) m ) sampled from the same augmentation set A(tm), while a negative relation pair is (t(i)m , t(j)n ) sampled from different augmentation sets A(tm) and A(tn). Based on the sampled relation pairs, we use the backbone encoder fθ to learn the relation representation as follows: Firstly, we extract sample representations z (i) m = fθ(t (i) m ), z (j) m = fθ(t (j) m ), and z (j) n = fθ(t (j) n ). Then, we construct the positive relation representation [z(i)m , z (j) m ], and the negative relation representation [z (i) m , z (j) n ], where [·, ·] denotes the vector concatenation operation. Next, the inter-sample relation reasoning head rµ(·) takes the generated relation representation as input to reason the final relation score h(i,j)2m−1 = rµ([z (i) m , z (j) m ]) for positive relation and h(i,j)2m = rµ([z (i) m , z (j) n ]) for negative relation, respectively. Finally, the inter-sample relation reasoning task is formulated as a binary classification task and the model is trained with binary cross-entropy loss Linter as follows:\nLinter = − 2N∑ n=1 K∑ i=1 K∑ j=1 (y(i,j)n · log(h(i,j)n ) + (1− y(i,j)n ) · log(1− h(i,j)n )) (1)\nwhere y(i,j)n = 1 for the positive relation and y (i,j) n = 0 for the negative relation." }, { "heading": "3.2 INTRA-TEMPORAL RELATION REASONING", "text": "To capture the underlying temporal structure along the time dimension, we try to explore the intratemporal relation among time pieces and ask the model to predict the different types of temporal relation. Formally, given a time series sample tn = (tn,1, ...tn,T )T, we define anL-length time piece pn,u of tn starting at time step u as a contiguous subsequence pn,u = (tn,u, tn,u+1, ..., tn,u+L−1)T. Firstly, we sample different types of temporal relation among time pieces as follows: Randomly sample two L-length pieces pn,u and pn,v of tn starting at time step u and time step v respectively. Then, the temporal relation between pn,u and pn,v is assigned based on their temporal distance du,v , e.g., for similarity, we define the temporal distance du,v = |u− v| as the absolute value of the difference between two starting step u and v. Next, we define C types of temporal relations for each pair of pieces based on their temporal distance, e.g., for similarity, we firstly set a distance threshold as D = bT/Cc, and then, if the distance du,v of a piece pair is less than D, we assign the relation label as 0, if du,v is greater than D and less than 2D, we assign the relation label as 1, and so on until we sample C types of temporal relations. The details of the intra-temporal relation sampling algorithm are shown in Algorithm 1.\nBased on the sampled time pieces and their temporal relations, we use the shared backbone encoder fθ to extract the representations of time pieces firstly, where zn,u = fθ(pn,u) and zn,v = fθ(pn,v). Then, we construct the temporal relation representation as [zn,u, zn,v]. Next, the intra-temporal relation reasoning head rϕ(·) takes the relation representation as input to reason the final relation score h(u,v)n = rϕ([zn,u, zn,v]). 
Finally, the intra-temporal relation reasoning task is formulated as a multi-class classification problem and the model is trained with the cross-entropy loss $\mathcal{L}_{intra}$ as follows:\n\n$\mathcal{L}_{intra} = -\sum_{n=1}^{N} y_n^{(u,v)} \cdot \log \frac{\exp(h_n^{(u,v)})}{\sum_{c=1}^{C} \exp(h_n^{(u,v)})} \quad (2)$\n\nAlgorithm 1: Temporal Relation Sampling.\n\nRequire: $t_n$: a $T$-length time series; $p_{n,u}$, $p_{n,v}$: two $L$-length pieces of $t_n$; $C$: number of relation classes.\nEnsure: $y_n^{(u,v)} \in \{0, 1, \ldots, C-1\}$: the label of the temporal relation between $p_{n,u}$ and $p_{n,v}$.\n1: $d_{u,v} = |u - v|$, $D = \lfloor T/C \rfloor$\n2: if $d_{u,v} \le D$ then\n3: $y_n^{(u,v)} = 0$\n4: else if $d_{u,v} \le 2D$ then\n5: $y_n^{(u,v)} = 1$\n6: ...\n7: else if $d_{u,v} \le (C-1)D$ then\n8: $y_n^{(u,v)} = C - 2$\n9: else\n10: $y_n^{(u,v)} = C - 1$\n11: end if\n12: return $y_n^{(u,v)}$\n\nBy jointly optimizing the inter-sample relation reasoning objective (Eq. 1) and the intra-temporal relation reasoning objective (Eq. 2), the final training loss is defined as follows:\n\n$\mathcal{L} = \mathcal{L}_{inter} + \mathcal{L}_{intra} \quad (3)$\n\nAn overview of training SelfTime is given in Algorithm 2 in Appendix A. SelfTime is an efficient algorithm compared with traditional contrastive learning models such as SimCLR. The complexity of SimCLR is $O(N^2 K^2)$, while the complexity of SelfTime is $O(N K^2) + O(N K)$, where $O(N K^2)$ is the complexity of the inter-sample relation reasoning module and $O(N K)$ is the complexity of the intra-temporal relation reasoning module. It can be seen that SimCLR scales quadratically in both the training size $N$ and the augmentation number $K$. In SelfTime, however, the inter-sample relation reasoning module scales quadratically with the number of augmentations $K$ and linearly with the training size $N$, and the intra-temporal relation reasoning module scales linearly with both the number of augmentations and the training size.\n\n4 EXPERIMENTS\n\n4.1 EXPERIMENTAL SETUP\n\nDatasets. To evaluate the effectiveness of the proposed method, we use three categories of time series, including four public datasets CricketX, UWaveGestureLibraryAll (UGLA), DodgerLoopDay (DLD), and InsectWingbeatSound (IWS) from the UCR Time Series Archive2 (Dau et al., 2018), along with two real-world bearing datasets XJTU3 and MFPT4 (Zhao et al., 2020). The six datasets vary in the number of instances, signal length, and number of classes. The statistics of the datasets are shown in Table 1.\n\nTime Series Augmentation. Data augmentations for time series are generally based on random transformations in two domains (Iwana & Uchida, 2020): the magnitude domain and the time domain. In the magnitude domain, transformations are performed on the values of the time series: the value at each time step is modified while the time steps themselves are unchanged. Common magnitude domain augmentations include jittering, scaling, magnitude warping (Um et al., 2017), and cutout (DeVries & Taylor, 2017). In the time domain, transformations are performed along the time axis, so that elements of the time series are displaced to different time steps than in the original sequence. Common time domain augmentations include time warping (Um et al., 2017), window slicing, and window warping (Le Guennec et al., 2016). More visualization details of the different augmentations are shown in Figure 3.\n\n2 https://www.cs.ucr.edu/~eamonn/time_series_data_2018/\n3 https://biaowang.tech/xjtu-sy-bearing-datasets/\n4 https://www.mfpt.org/fault-data-sets/\n\nBaselines. 
We compare SelfTime against several state-of-the-art methods of self-supervised representation learning:\n• Supervised consists of a backbone encoder as the same with SelfTime and a linear classifier, which conducts fully supervised training over the whole networks.\n• Random Weights is the same as Supervised in the architecture, but freezing the backbone’s weights during the training and optimizing only the linear classifier.\n• Triplet Loss (Franceschi et al., 2019) is an unsupervised time series representation learning model that uses triplet loss to push a subsequence of time series close to its context and distant from a randomly chosen time series.\n• Deep InfoMax (Hjelm et al., 2019) is a framework of unsupervised representation learning by maximizing mutual information between the input and output of a feature encoder from the local and global perspectives.\n• Forecast (Jawed et al., 2020) is a semi-supervised time series classification model that leverages features learned from the self-supervised forecasting task on unlabeled data. In the experiment, we throw away the supervised classification branch and use only the forecasting branch to learn the representations of time series.\n• Transformation (Sarkar & Etemad, 2020) is a self-supervised model by designing transformation recognition of different time series transformations as pretext task.\n• SimCLR (Chen et al., 2020) is a simple but effective framework for self-supervised representation learning by maximizing agreement between different views of augmentation from the same sample via a contrastive loss in the latent space.\n• Relation (Patacchiola & Storkey, 2020) is relational reasoning based self-supervised representation learning model by reasoning the relations between views of the sample objects as positive, and reasoning the relations between different objects as negative.\nEvaluation. As a common evaluation protocol, linear evaluation is used in the experiment by training a linear classifier on top of the representations learned from different self-supervised models to evaluate the quality of the learned embeddings. For data splitting, we set the training/validation/test split as 50%/25%/25%. During the pretraining stage, we randomly split the data 5 times with different seeds, and train the backbone on them. During the linear evaluation, we train the linear classifier 10 times on each split data, and the best model on the validation dataset was used for testing. Finally, we report the classification accuracy as mean with the standard deviation across all trials.\nImplementation. All experiments were performed using PyTorch (v1.4.0) (Paszke et al., 2019). A simple 4-layer 1D convolutional neural network with ReLU activation and batch normalization (Ioffe & Szegedy, 2015) were used as the backbone encoder fθ for SelfTime and all other baselines, and use two separated 2-layer fully-connected networks with 256 hidden-dimensions as the inter-sample relation reasoning head rµ and intra-temporal relation reasoning head rϕ respectively (see Table 4 in Appendix B for details). Adam optimizer (Kingma & Ba, 2015) was used with a learning rate of 0.01 for pretraining and 0.5 for linear evaluation. The batch size is set as 128 for all models. For fair comparison, we generate K = 16 augmentations for each sample although\nmore augmentation results in better performance (Chen et al., 2020; Patacchiola & Storkey, 2020). More implement details of baselines are shown in Appendix D. 
More experimental results about the impact of augmentation number K are shown in Appendix E.\n4.2 ABLATION STUDIES\nIn this section, we firstly investigate the impact of different temporal relation sampling settings on intra-temporal relation reasoning. Then, we explore the effectiveness of inter-sample relation reasoning, intra-temporal relation reasoning, and their combination (SelfTime), under different time series augmentation strategies. Experimental results show that both intersample relation reasoning and intra-temporal relation reasoning achieve remarkable performance, which helps the network to learn more discriminating features of time series.\nTemporal Relation Sampling. To investigate the different settings of temporal relation sampling strategy on the impact of linear evaluation performance, in the experiment, we set different numbers of temporal relation class C and time piece\nlength L. Specifically, to investigate the impact of class number, we firstly set the piece length L = 0.2 ∗ T as 20% of the original time series length, then, we vary C from 2 to 8 during the temporal relation sampling. As shown in Figure 4, we show the results of parameter sensitivity experiments on CricketX, where blue bar indicates class reasoning accuracy on training data (Class ACC) and brown line indicates the linear evaluation accuracy on test data (Linear ACC). With the increase of class number, the Linear ACC keeps increasing until C = 5, and we find that a small value C = 2 and a big value C = 8 result in worse performance. One possible reason behind this is that the increase of class number drops the Class ACC and makes the relation reasoning task too difficult for the network to learn useful representation. Similarly, when set the class number C = 3 and vary the piece length L from 0.1 ∗ T to 0.4 ∗ T , we find that the Linear ACC grows up with the increase of piece size until L = 0.3 ∗ T , and also, either small value or big value of L will drop the evaluation performance, which makes the relation reasoning task too simple (with high Class ACC) or too difficult (with low Class ACC) and prevents the network from learning useful semantic representation. Therefore, as consistent with the observations of self-supervised studies in other domains (Pascual et al., 2019b; Wang et al., 2020), an appropriate pretext task designing is crucial for the self-supervised time series representation learning. In the experiment, to select a moderately difficult pretext task for different datasets, we set {class number (C), piece size (L/T )} as {3, 0.2} for CricketX, {4, 0.2} for UWaveGestureLibraryAll, {5, 0.35} for DodgerLoopDay, {6, 0.4} for InsectWingbeatSound, {4, 0.2} for MFPT, and {4, 0.2} for XJTU. More experimental results on other five datasets for parameter sensitivity analysis are shown in Appendix E.\nImpact of Different Relation Modules and Data Augmentations. To explore the effectiveness of different relation reasoning modules including inter-sample relation reasoning, intra-temporal relation reasoning, and their combination (SelfTime), in the experiment, we systematically investigate the different data augmentations on the impact of linear evaluation for different modules. Here, we consider several common augmentations including magnitude domain based transformations such as jittering (Jit.), cutout (Cut.), scaling (Sca.), magnitude warping (M.W.), and time domain based transformations such as time warping (T.W.), window slicing (W.S.), window warping (W.W.). 
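As a concrete reference for the sampling procedure whose parameters C and L are analyzed above, the following sketch implements the labeling rule of Algorithm 1: two L-length pieces are cut at starting positions u and v, and the relation label is derived from |u - v| relative to D = floor(T/C). The uniform sampling of the starting positions is an assumption, since only the labeling rule is specified in the algorithm.

```python
import numpy as np

def sample_temporal_relation(t, piece_len, num_classes, rng=np.random):
    """Sample two pieces of a series t and the label of their temporal relation (Algorithm 1).

    t           : 1D array of length T.
    piece_len   : L, length of each sampled piece.
    num_classes : C, number of temporal relation classes.
    Returns (p_u, p_v, y) with y in {0, ..., C-1}.
    """
    T = len(t)
    D = T // num_classes                       # bin width D = floor(T / C)
    u = rng.randint(0, T - piece_len + 1)      # start of first piece (assumed uniform)
    v = rng.randint(0, T - piece_len + 1)      # start of second piece
    d_uv = abs(u - v)

    # Algorithm 1: d <= D -> 0, d <= 2D -> 1, ..., d <= (C-1)D -> C-2, otherwise C-1.
    if d_uv == 0:
        y = 0
    else:
        y = min(int(np.ceil(d_uv / D)) - 1, num_classes - 1)

    return t[u:u + piece_len], t[v:v + piece_len], y

# Example: the CricketX setting from above, C = 3 classes and pieces of 20% of the length.
series = np.sin(np.linspace(0, 20, 300))
p_u, p_v, label = sample_temporal_relation(series, piece_len=60, num_classes=3)
print(p_u.shape, p_v.shape, label)
```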
Figure 5 shows linear evaluation results on CricketX dataset under individual and composition of transformations for inter-sample relation reasoning, intra-temporal relation reasoning, and their combination (SelfTime). Firstly, we observe that the composition of different data augmentations is crucial for learning useful representations. For example, inter-sample relation reasoning is more sensitive to the augmentations, and performs worse under Cut., Sca., and M.W. augmentations, while intratemporal relation reasoning is less sensitive to the manner of augmentations, although it performs better under the time domain based transformation. Secondly, by combining both the inter-sample and intra-temporal relation reasoning, the proposed SelfTime achieves better performance, which demonstrates the effectiveness of considering different levels of relation for time series representation learning. Thirdly, we find that the composition from a magnitude-based transformation (e.g. scaling, magnitude warping) and a time-based transformation (e.g. time warping, window slicing) facilitates the model to learn more useful representations. Therefore, in this paper, we select the composition of magnitude warping and time warping augmentations for all experiments. Similar experimental conclusions also hold on for other datasets. More experimental results on the other five datasets for evaluation of the impact of different relation modules and data augmentations are shown in Appendix F." }, { "heading": "4.3 TIME SERIES CLASSIFICATION", "text": "In this section, we evaluate the proposed method by comparing with other state-of-the-arts on time series classification task. Firstly, we conduct linear evaluation to assess the quality of the learned representations. Then, we evaluate the performance of all methods in transfer learning by training on the unlabeled source dataset and conduct linear evaluation on the labeled target dataset. Finally, we qualitatively evaluate and verify the semantic consistency of the learned representations.\nLinear Evaluation. Following the previous studies (Chen et al., 2020; Patacchiola & Storkey, 2020), we train the backbone encoder for 400 epochs on the unlabeled training set, and then train a linear classifier for 400 epochs on top of the backbone features (the backbone weights are frozen without back-propagation). As shown in Table 2, our proposed SelfTime consistently outperforms all baselines across all datasets. SelfTime improves the accuracy over the best baseline (Relation) by 5.05% (CricketX), 5.06% (UGLA), 14.61% (DLD), 7.85% (IWS), 6.73% (MFPT), and 1.67% (XJTU) respectively. Among those baselines, either global features (Deep InfoMax, Transformation, SimCLR, Relation) or local features (Triplet Loss, Deep InfoMax, Forecast) are considered during representation learning, they neglect the essential temporal information of time series except Triplet Loss and Forecast. However, by simply forecasting future time pieces, Forecast cannot capture useful temporal structure effectively, which results in low-quality representations. Also, in Triplet Loss, a time-based negative sampling is used to capture the inter-sample temporal relation among time pieces sampled from the different time series, which is cannot directly and efficiently capture the intra-sample temporal pattern of time series. 
Different from all those baselines, SelfTime not only extracts global and local features by taking the whole time series and its time pieces\nas inputs during feature extraction, but also captures the implicit temporal structure by reasoning intra-temporal relation among time pieces.\nDomain Transfer. To evaluate the transferability of the learned representations, we conduct experiments in transfer learning by training on the unlabeled source dataset and conduct linear evaluation on the labeled target dataset. In the experiment, we select two datasets from the same category as the source and target respectively. As shown in Table 3, experimental results show that our SelfTime outperforms all the other baselines under different conditions. For example, SelfTime achieves an improvement over the Relation by 4.73% on UGLA→CricketX transfer, and over Deep InfoMax 20.2% on IWS→DLD transfer, and over Relation 6.81% on XJTU→MFPT transfer, respectively, which demonstrates the good transferability of the proposed method.\nVisualization. To qualitatively evaluate the learned representations, we use the trained backbone to extract the features and visualize them in 2D space using t-SNE (Maaten & Hinton, 2008) to verify the semantic consistency of the learned representations. Figure 6 shows the visualization results of features from the baselines and the proposed SelfTime on UGLA dataset. It is obvious that by capturing global sample structure and local temporal structure, SelfTime learns more semantic representations and results in better clustering ability for time series data, where more semantic consistency is preserved in the learned representations by our proposed method." }, { "heading": "5 CONCLUSION", "text": "We presented a self-supervised approach for time series representation learning, which aims to extract useful feature from the unlabeled time series. By exploring the inter-sample relation and intratemporal relation, SelfTime is able to capture the underlying useful structure of time series. Our main finding is that designing appropriate pretext tasks from both the global-sample structure and local-temporal structure perspectives is crucial for time series representation learning, and this finding motivates further thinking of how to better leverage the underlying structure of time series. Our experiments on multiple real-world datasets show that our proposed method consistently outperforms the state-of-the-art self-supervised representation learning models, and establishes a new state-of-the-art in self-supervised time series classification. Future directions of research include exploring more effective intra-temporal structure (i.e. reasoning temporal relation under the time point level), and extending the SelfTime to multivariate time series by considering the causal relationship among variables." }, { "heading": "A PSEUDO-CODE OF SELFTIME", "text": "The overview of training process for SelfTime is summarized in Algorithm 2." }, { "heading": "B ARCHITECTURE DIAGRAM", "text": "SelfTime consists of a backbone encoder, a inter-sample relation reasoning head, and a intratemporal relation reasoning head. The detail architectural diagrams of SelfTime are shown in Table 4." }, { "heading": "C DATA AUGMENTATION", "text": "In this section, we list the configuration details of augmentation used in the experiment:\nAlgorithm 2: SelfTime\nRequire: Time series set T = {tn}Nn=1. fθ: Encoder backbone. rµ: Inter-sample relation reasoning head. rϕ: Intra-temporal relation reasoning head. 
Ensure: fθ: An updated encoder backbone.\n1: for tm, tn ∈ T do 2: Generate two augmentation sets A(tm) and A(tn) 3: Sample positive relation pair (t(i)m , t (j) m ) and negative\nrelation pair (t(i)m , t (j) n ) from A(tm) and A(tn)\n4: z(i)m = fθ(t (i) m ) . Sample representation 5: z(j)m = fθ(t (j) m ) . Sample representation 6: z(j)n = fθ(t (j) n ) . Sample representation 7: h(i,j)2m−1 = rµ([z (i) m , z (j) m ]) . Reasoning score of positive relation 8: h(i,j)2m = rµ([z (i) m , z (j) n ]) . Reasoning score of negative relation\n9: Sample time piece relation pair (pn,u,pn,v) by Algorithm 1 10: zn,u = fθ(pn,u) . Time piece representation 11: zn,v = fθ(pn,v) . Time piece representation 12: h(u,v)n = rϕ([zn,u, zn,v]) . Reasoning score of intra-temporal relation 13: end for 14: Linter = − ∑2N n=1 ∑K i=1 ∑K j=1(y (i,j) n · log(h(i,j)n ) +(1− y(i,j)n ) · log(1− h(i,j)n )) . Inter-sample relation reasoning loss 15: Lintra = − ∑N n=1 y (u,v) n · log exp(h (u,v) n )∑C\nc=1 exp(h (u,v) n )\n. Intra-temporal relation reasoning loss\n16: Update fθ, rµ, and rϕ by minimizing L = Linter + Lintra 17: return encoder backbone fθ, throw away rµ, and rϕ\nJittering: We add the gaussian noise to the original time series, where noise is sampled from a Gaussian distribution N (0, 0.2). Scaling: We multiply the original time series with a random scalar sampled from a Gaussian distribution N (0, 0.4). Cutout: We replace a random 10% part of the original time series with zeros and remain the other parts unchanged.\nMagnitude Warping: We multiply a warping amount determined by a cubic spline line with 4 knots on the original time series at random locations and magnitudes. The peaks or valleys of the knots are set as µ = 1 and σ = 0.3 (Um et al., 2017).\nTime Warping: We set the warping path according to a smooth cubic spline-based curve with 8 knots, where the random magnitudes is µ = 1 and a σ = 0.2 for each knot (Um et al., 2017).\nWindow Slicing: We randomly crop 80% of the original time series and interpolate the cropped time series back to the original length (Le Guennec et al., 2016).\nWindow Warping: We randomly select a time window that is 30% of the original time series length, and then warp the time dimension by 0.5 times or 2 times (Le Guennec et al., 2016)." }, { "heading": "D BASELINES", "text": "Triplet Loss5 (Franceschi et al., 2019) We download the authors’ official source code and use the same backbone as SelfTime, and set the number of negative samples as 10. We use Adam optimizer with learning rate 0.001 according to grid search and batch size 128 as same with SelfTime.\nDeep InfoMax6 (Hjelm et al., 2019) We download the authors’ official source code and use the same backbone as SelfTime, and set the parameter α = 0.5, β = 1.0, γ = 0.1 through grid search. We use Adam optimizer with learning rate 0.0001 according grid search and batch size 128 as same with SelfTime.\nForecast7 (Jawed et al., 2020) Different from the original multi-task model proposed by authors, we throw away the supervised classification branch and use only the proposed forecasting branch to learn the representation in a fully self-supervised manner. We use Adam optimizer with learning rate 0.01 according to grid search and batch size 128 as same as SelfTime.\nTransformation8 (Sarkar & Etemad, 2020) We refer to the authors’ official source code and reimplement it in PyTorch by using the same backbone and two-layer projection head as same with SelfTime. 
We use Adam optimizer with learning rate 0.001 according to grid search and batch size 128 as same with SelfTime.\nSimCLR9 (Chen et al., 2020) We download the authors’ official source code by using the same backbone and two-layer projection head as same with SelfTime. We use Adam optimizer with learning rate 0.5 according grid search and batch size 128 as same as SelfTime.\nRelation10 (Patacchiola & Storkey, 2020) We download the authors’ official source code by using the same backbone and relation module as same with SelfTime. For augmentation, we set K = 16, and use Adam optimizer with learning rate 0.5 according to grid search and batch size 128 as same with SelfTime." }, { "heading": "E PARAMETER SENSITIVITY", "text": "Figure 7 shows the impact of different augmentation numberK on all datasets. It’s obvious that more augmentations result in better performance, which demonstrates that introducing more reference\n5https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries 6https://github.com/rdevon/DIM 7https://github.com/super-shayan/semi-super-ts-clf 8https://code.engineering.queensu.ca/17ps21/SSL-ECG 9https://github.com/google-research/simclr\n10https://github.com/mpatacchiola/self-supervised-relational-reasoning\nsamples (including positive samples and negative samples) for the anchor sample raises the power of relational reasoning.\nFigure 8 shows the impact of different temporal relation class numbers and piece sizes on other five datasets: UWaveGestureLibraryAll, DodgerLoopDay, InsectWingbeatSound, MFPT, and XJTU, where the blue bar indicates class reasoning accuracy on training data (Class ACC) and the brown line indicates the linear evaluation accuracy on test data (Linear ACC). We find an interesting phenomenon is that both small values of class number C or piece size L/T , and big values C or L/T , result in worse performance. One possible reason behind this is that the increase of class number drops the Class ACC and makes the relation reasoning task too simple (with high Class ACC) or too difficult (with low Class ACC) and prevents the network from learning useful semantic representation. Therefore, an appropriate pretext task designing is crucial for the self-supervised time series representation learning. In the experiment, to select a moderately difficult pretext task for different datasets, we set {class number (C), piece size (L/T )} as {4, 0.2} for UWaveGestureLibraryAll, {5, 0.35} for DodgerLoopDay, {6, 0.4} for InsectWingbeatSound, {4, 0.2} for MFPT, and {4, 0.2} for XJTU." }, { "heading": "F ABLATION STUTY", "text": "In this section, we additionally explore the effectiveness of different relation reasoning modules including inter-sample relation reasoning, intra-temporal relation reasoning, and their combination (SelfTime) on other five datasets including UWaveGestureLibraryAll, DodgerLoopDay, InsectWingbeatSound, MFPT, and XJTU. Specifically, in the experiment, we systematically investigate the different data augmentations on the impact of linear evaluation for different modules. Here, we consider several common augmentations including magnitude domain based transformations such as jittering (Jit.), cutout (Cut.), scaling (Sca.), magnitude warping (M.W.), and time domain based transformations such as time warping (T.W.), window slicing (W.S.), window warping (W.W.). 
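To make the magnitude-domain transformations listed above concrete, the sketch below implements a few of the Appendix C configurations (jittering, scaling, cutout, and magnitude warping); time warping, window slicing, and window warping follow the same pattern by resampling along the time axis. The use of scipy's CubicSpline and the placement of spline knots at the series endpoints are implementation assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def jitter(x, sigma=0.2, rng=np.random):
    """Add Gaussian noise N(0, sigma) at every time step."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.4, rng=np.random):
    """Multiply the series by a single random scalar (distribution as stated in Appendix C)."""
    return x * rng.normal(0.0, sigma)

def cutout(x, ratio=0.1, rng=np.random):
    """Zero out a random contiguous 10% window, keep the rest unchanged."""
    x = x.copy()
    T = len(x)
    w = max(1, int(ratio * T))
    start = rng.randint(0, T - w + 1)
    x[start:start + w] = 0.0
    return x

def magnitude_warp(x, n_knots=4, sigma=0.3, rng=np.random):
    """Multiply by a smooth warping curve: cubic spline through knots drawn from N(1, sigma)."""
    T = len(x)
    knot_pos = np.linspace(0, T - 1, n_knots + 2)        # knots plus the two endpoints
    knot_val = rng.normal(1.0, sigma, size=n_knots + 2)
    warp = CubicSpline(knot_pos, knot_val)(np.arange(T))
    return x * warp

# The composition used in the experiments combines magnitude warping and time warping;
# the magnitude-domain half is shown here as an illustration.
x = np.sin(np.linspace(0, 10, 200))
x_aug = magnitude_warp(jitter(x))
```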
Figure 9 and Figure 10 show linear evaluation results on five datasets under individual and composition of transformations for inter-sample relation reasoning, intra-temporal relation reasoning, and their combination (SelfTime). As similar to the observations from CricketX, firstly, we observe that the composition of different data augmentations is crucial for learning useful representations. For example, inter-sample relation reasoning is more sensitive to the augmentations, and performs worse under Cut., Sca., and M.W. augmentations, while intra-temporal relation reasoning is less sensitive to the manner of augmentations on all datasets. Secondly, by combining both the inter-sample and intra-temporal relation reasoning, the proposed SelfTime achieves better performance, which demonstrates the effectiveness of considering different levels of relation for time series representation learning. Thirdly, overall, we find that the composition from a magnitude-\nbased transformation (e.g. scaling, magnitude warping) and a time-based transformation (e.g. time warping, window slicing) facilitates the model to learn more useful representations. Therefore, in this paper, we select the composition of magnitude warping and time warping augmentations for all datasets, although other compositions might result in better performance." } ]
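To complement Algorithm 2 and the loss definitions of Eqs. 1-3, the following sketch shows one possible joint optimization step. It is a simplification under stated assumptions: negatives for the inter-sample term are formed by shifting the batch (rather than the explicit pairing of Algorithm 2), and the helper callables `augment` and `sample_relation` are hypothetical stand-ins for the augmentation and Algorithm 1 sampling routines.

```python
import torch
import torch.nn.functional as F

def selftime_step(f_theta, r_mu, r_phi, batch, optimizer, augment, sample_relation):
    """One joint step minimizing L = L_inter + L_intra (Eq. 3).

    batch           : raw series, shape (N, 1, T).
    augment         : callable producing an augmented view of every series in the batch.
    sample_relation : callable returning (piece_u, piece_v, y) per series, y as long tensor.
    """
    # --- inter-sample relation reasoning (Eq. 1), binary targets -------------------
    view_a, view_b = augment(batch), augment(batch)
    z_a, z_b = f_theta(view_a), f_theta(view_b)
    pos = r_mu(z_a, z_b)                              # views of the same sample -> target 1
    neg = r_mu(z_a, z_b.roll(shifts=1, dims=0))       # views of different samples -> target 0
    scores = torch.cat([pos, neg]).squeeze(-1)
    targets = torch.cat([torch.ones(len(pos), device=scores.device),
                         torch.zeros(len(neg), device=scores.device)])
    loss_inter = F.binary_cross_entropy_with_logits(scores, targets)

    # --- intra-temporal relation reasoning (Eq. 2), C-way targets ------------------
    p_u, p_v, y = sample_relation(batch)              # pieces: (N, 1, L), y: (N,)
    logits = r_phi(f_theta(p_u), f_theta(p_v))
    loss_intra = F.cross_entropy(logits, y)

    loss = loss_inter + loss_intra                    # Eq. 3
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```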
2020
null
SP:9513f146a764d9e67b7d054692d0a923622ff007
[ "This paper proposes to use orthogonal weight constraints for autoencoders. The authors demonstrate that under orthogonal weights (hence invertible), more features could be extracted. The theory is conducted under linear cases while the authors claim it can be applied to more complicated scenarios such as higher dimension and with nonlinearity. The experiments demonstrate the performance of proposed model on classification tasks and generative tasks. Several baselines are compared." ]
The pressing need for pretraining algorithms has been diminished by numerous advances in terms of regularization, architectures, and optimizers. Despite this trend, we re-visit the classic idea of unsupervised autoencoder pretraining and propose a modified variant that relies on a full reverse pass trained in conjunction with a given training task. We establish links between SVD and pretraining and show how it can be leveraged for gaining insights about the learned structures. Most importantly, we demonstrate that our approach yields an improved performance for a wide variety of relevant learning and transfer tasks ranging from fully connected networks over ResNets to GANs. Our results demonstrate that unsupervised pretraining has not lost its practical relevance in today’s deep learning environment.
[ { "affiliations": [], "name": "REVIVING AUTOENCODER PRETRAINING" } ]
[ { "authors": [ "REFERENCES Michele Alberti", "Mathias Seuret", "Rolf Ingold", "Marcus Liwicki" ], "title": "A pitfall of unsupervised pretraining", "venue": "arXiv preprint arXiv:1703.04332,", "year": 2017 }, { "authors": [ "Lynton Ardizzone", "Jakob Kruse", "Sebastian Wirkert", "Daniel Rahner", "Eric W Pellegrini", "Ralf S Klessen", "Lena Maier-Hein", "Carsten Rother", "Ullrich Köthe" ], "title": "Analyzing inverse problems with invertible neural networks", "venue": "arXiv preprint arXiv:1808.04730,", "year": 2018 }, { "authors": [ "Nitin Bansal", "Xiaohan Chen", "Zhangyang Wang" ], "title": "Can we gain more from orthogonality regularizations in training deep cnns? In Advances in Neural Information Processing Systems, pp. 4266–4276", "venue": null, "year": 2018 }, { "authors": [ "Yoshua Bengio", "Pascal Lamblin", "Dan Popovici", "Hugo Larochelle" ], "title": "Greedy layer-wise training of deep networks. In Advances in neural information processing", "venue": null, "year": 2007 }, { "authors": [ "Joan Bruna", "Arthur Szlam", "Yann LeCun" ], "title": "Signal recovery from pooling representations", "venue": "arXiv preprint arXiv:1311.4025,", "year": 2013 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Hui Ding", "Shaohua Kevin Zhou", "Rama Chellappa" ], "title": "Facenet2expnet: Regularizing a deep face recognition net for expression recognition", "venue": "IEEE International Conference on Automatic Face & Gesture Recognition (FG", "year": 2017 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Mengnan Du", "Ninghao Liu", "Xia Hu" ], "title": "Techniques for interpretable machine learning", "venue": "arXiv preprint arXiv:1808.00033,", "year": 2018 }, { "authors": [ "Marie-Lena Eckert", "Kiwon Um", "Nils" ], "title": "Thuerey. 
Scalarflow: a large-scale volumetric data set of real-world scalar transport flows for computer animation and machine learning", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Dumitru Erhan", "Aaron Courville", "Yoshua Bengio", "Pascal Vincent" ], "title": "Why does unsupervised pre-training help deep learning", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Aidan N Gomez", "Mengye Ren", "Raquel Urtasun", "Roger B Grosse" ], "title": "The reversible residual network: Backpropagation without storing activations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Kasthurirangan Gopalakrishnan", "Siddhartha K Khaitan", "Alok Choudhary", "Ankit Agrawal" ], "title": "Deep convolutional neural networks with transfer learning for computer vision-based data-driven pavement distress detection", "venue": "Construction and Building Materials,", "year": 2017 }, { "authors": [ "Stephen José Hanson", "Lorien Y Pratt" ], "title": "Comparing biases for minimal network construction with back-propagation", "venue": "In Advances in Neural Information Processing Systems,", "year": 1989 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Hans Hersbach", "Bill Bell", "Paul Berrisford", "Shoji Hirahara", "András Horányi", "Joaquı́n Muñoz-Sabater", "Julien Nicolas", "Carole Peubey", "Raluca Radu", "Dinand Schepers" ], "title": "The era5 global reanalysis", "venue": "Quarterly Journal of the Royal Meteorological Society,", "year": 1999 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Lei Huang", "Xianglong Liu", "Bo Lang", "Adams Wei Yu", "Yongliang Wang", "Bo Li" ], "title": "Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural networks", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold Smeulders", "Edouard Oyallon" ], "title": "i-revnet: Deep invertible networks", "venue": "arXiv preprint arXiv:1802.07088,", "year": 2018 }, { "authors": [ "Kui Jia", "Dacheng Tao", "Shenghua Gao", "Xiangmin Xu" ], "title": "Improving training of deep neural networks via singular value bounding", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Kenji Kawaguchi", "Leslie Pack 
Kaelbling", "Yoshua Bengio" ], "title": "Generalization in deep learning", "venue": "arXiv preprint arXiv:1710.05468,", "year": 2017 }, { "authors": [ "Michael Kazhdan", "Thomas Funkhouser", "Szymon Rusinkiewicz" ], "title": "Rotation invariant spherical harmonic representation of 3 d shape descriptors", "venue": "In Symposium on geometry processing,", "year": 2003 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Visualizing deep convolutional neural networks using natural pre-images", "venue": "International Journal of Computer Vision,", "year": 2016 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mete Ozay", "Takayuki Okatani" ], "title": "Optimization on submanifolds of convolution kernels in cnns", "venue": "arXiv preprint arXiv:1610.07008,", "year": 2016 }, { "authors": [ "Antti Rasmus", "Mathias Berglund", "Mikko Honkala", "Harri Valpola", "Tapani Raiko" ], "title": "Semisupervised learning with ladder networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Stephan Rasp", "Peter D Dueben", "Sebastian Scher", "Jonathan A Weyn", "Soukayna Mouatadid", "Nils" ], "title": "Thuerey. Weatherbench: A benchmark dataset for data-driven weather forecasting", "venue": "arXiv preprint arXiv:2002.00469,", "year": 2020 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do imagenet classifiers generalize to imagenet", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "arXiv preprint arXiv:1904.09237,", "year": 2019 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Ravid Shwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": "arXiv preprint arXiv:1703.00810,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Yunfei Teng", "Anna Choromanska" ], "title": "Invertible autoencoder for domain", "venue": "adaptation. Computation,", "year": 2019 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Lisa Torrey", "Jude Shavlik" ], "title": "Transfer learning. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pp. 
242–264", "venue": "IGI Global,", "year": 2010 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol", "Léon Bottou" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Michael E Wall", "Andreas Rechtsteiner", "Luis M Rocha" ], "title": "Singular value decomposition and principal component analysis. In A practical approach to microarray data", "venue": null, "year": 2003 }, { "authors": [ "Janett Walters-Williams", "Yan Li" ], "title": "Estimation of mutual information: A survey", "venue": "In International Conference on Rough Sets and Knowledge Technology,", "year": 2009 }, { "authors": [ "Andreas S Weigend", "David E Rumelhart", "Bernardo A Huberman" ], "title": "Generalization by weightelimination with application to forecasting", "venue": "In Advances in Neural Information Processing Systems,", "year": 1991 }, { "authors": [ "You Xie", "Erik Franz", "Mengyu Chu", "Nils" ], "title": "Thuerey. tempogan: A temporally coherent, volumetric gan for super-resolution fluid flow", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Amir R Zamir", "Alexander Sax", "William Shen", "Leonidas J Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Lijing Zhang", "Yao Lu", "Ge Song", "Hanfeng Zheng" ], "title": "Rc-cnn: Reverse connected convolutional neural network for accurate player detection", "venue": "In Pacific Rim International Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yingbo Zhou", "Devansh Arpit", "Ifeoma Nwogu", "Venu Govindaraju" ], "title": "Is joint training better for deep auto-encoders", "venue": "arXiv preprint arXiv:1405.1380,", "year": 2014 }, { "authors": [ "Zhang et al", "2018a", "Teng", "Choromanska" ], "title": "2019), these modules primarily focus on transferring information between layers for a given task, and on auto-encoder structures for domain adaptation, respectively. A.2 PRETRAINING AND SINGULAR VALUE DECOMPOSITION In this section we give a more detailed derivation of our loss formulation, extending Section", "venue": null, "year": 2019 }, { "authors": [ "Hersbach" ], "title": "re-sampled to a 5.625◦ resolution, yielding 32× 64 grid points in ca. two-hour intervals. 
Data from the year of 1979 to 2015 (i.e., 162114 samples) are used for training, the year of 2016 for validation. The last two years (2017 and 2018) are used as test data. All RMSE measurements are latitude-weighted to account for area distortions from the spherical projection", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "While approaches such as greedy layer-wise autoencoder pretraining (Bengio et al., 2007; Vincent et al., 2010; Erhan et al., 2010) arguably paved the way for many fundamental concepts of today’s methodologies in deep learning, the pressing need for pretraining neural networks has been diminished in recent years. This was primarily caused by numerous advances in terms of regularization (Srivastava et al., 2014; Hanson & Pratt, 1989; Weigend et al., 1991), network architectures (Ronneberger et al., 2015; He et al., 2016; Vaswani et al., 2017), and improved optimization algorithms (Kingma & Ba, 2014; Loshchilov & Hutter, 2017; Reddi et al., 2019). Despite these advances, training deep neural networks that generalize well to a wide range of previously unseen tasks remains a fundamental challenge (Neyshabur et al., 2017; Kawaguchi et al., 2017; Frankle & Carbin, 2018).\nInspired by techniques for orthogonalization (Ozay & Okatani, 2016; Jia et al., 2017; Bansal et al., 2018), we re-visit the classic idea of unsupervised autoencoder pretraining in the context of reversible network architectures. Hence, we propose a modified variant that relies on a full reverse\npass trained in conjunction with a given training task. A key insight is that there is no need for ”greediness”, i.e., layer-wise decompositions of the network structure, and it is additionally beneficial to take into account a specific problem domain at the time of pretraining. We establish links between singular value decomposition (SVD) and pretraining, and show how our approach yields an embedding of problem-aware dominant features in the weight matrices. An SVD can then be leveraged to conveniently gain insights about learned structures. Most importantly, we demonstrate that the proposed pretraining yields an improved performance for a variety of learning and transfer tasks. Our formulation incurs only a very moderate computational cost, is very easy to integrate, and widely applicable.\nThe structure of our networks is influenced by invertible network architectures that have received significant attention in recent years (Gomez et al., 2017; Jacobsen et al., 2018; Zhang et al., 2018a). However, instead of aiming for a bijective mapping that reproduces inputs, we strive for learning a general representation by constraining the network to represent an as-reversible-as-possible process for all intermediate layer activations. Thus, even for cases where a classifier can, e.g., rely on color for inference of an object type, the model is encouraged to learn a representation that can recover the input. Hence, not only the color of the input should be retrieved, but also, e.g., its shape. In contrast to most structures for invertible networks, our approach does not impose architectural restrictions. We demonstrate the benefits of our pretraining for a variety of architectures, from fully connected layers to convolutional neural networks (CNNs), over networks with and without batch normalization, to GAN architectures. We discuss other existing approaches and relate them to the proposed method in the appendix.\nBelow, we will first give an overview of our formulation and its connection to singular values, before evaluating our model in the context of transfer learning. For a regular, i.e., a non-transfer task, the goal usually is to train a network that gives optimal performance for one specific goal. 
During a regular training run, the network naturally exploits any observed correlations between input and output distribution. An inherent difficulty in this setting is that typically no knowledge about the specifics of the new data and task domains is available when training the source model. Hence, it is common practice to target broad and difficult tasks hoping that this will result in features that are applicable in new domains (Zamir et al., 2018; Gopalakrishnan et al., 2017; Ding et al., 2017). Motivated by autoencoder pretraining, we instead leverage a pretraining approach that takes into account the data distribution of the inputs. We demonstrate the gains in accuracy for original and new tasks below for a wide range of applications, from image classification to data-driven weather forecasting." }, { "heading": "2 METHOD", "text": "With state-of-the-art methods, there is no need for breaking down the training process into single layers. Hence, we consider approaches that target whole networks, and especially orthogonalization regularizers as a starting point (Huang et al., 2018). Orthogonality constraints were shown to yield improved training performance in various settings (Bansal et al., 2018), and can be formulated as:\nLort = n∑\nm=1 ∥∥MTmMm − I∥∥2F , (1) i.e., enforcing the transpose of the weight matrix Mm ∈ Rs out m×s in m for all layers m to yield its inverse when being multiplied with the original matrix. I denotes the identity matrix with I = (e1m, ...e sinm m ), ejm denoting the jth column unit vector. Minimizing equation 1, i.e. M T mMm− I = 0 is mathematically equivalent to: MTmMme j m − ejm = 0, j = 1, 2, ..., sinm, (2)\nwith rank(MTmMm) = s in m, and e j m as eigenvectors of M T mMm with eigenvalues of 1. This formulation highlights that equation 2 does not depend on the training data, and instead only targets the content of Mm. Inspired by the classical unsupervised pretraining, we re-formulate the orthogonality constraint in a data-driven manner to take into account the set of inputs Dm for the current layer (either activation from a previous layer or the training data D1), and instead minimize\nLRR = n∑\nm=1\n(MTmMmd i m − dim)2 = n∑ m=1 ((MTmMm − I)dim)2, (3)\nwhere dim ∈ Dm ⊂ Rs in m . Due to its reversible nature, we will denote our approach with an RR subscript in the following. In contrast to classical autoencoder pretraining, we are minimizing this loss jointly for all layers of a network, and while orthogonality only focuses onMm, our formulation allows for minimizing the loss by extracting the dominant features of the input data.\nLet q denote the number of linearly independent entries in Dm, i.e. its dimension, and t the size of the training data, i.e. |Dm| = t, usually with q < t. For every single datum dim, i = 1, 2, ..., t, equation 3 results in\nMTmMmd i m − dim = 0, (4)\nand hence dim are eigenvectors of M T mMm with corresponding eigenvalues being 1. 
Thus, instead of the generic constraint MTmMm = I that is completely agnostic to the data at hand, the proposed formulation of equation 4 is aware of the training data, which improves the generality of the learned representation, as we will demonstrate in detail below.\nAs by construction, rank(Mm) = r 6 min(sinm, s out m ), the SVD of Mm yields:\nMm = UmΣmV T m , with\n{ Um = (u 1 m,u 2 m, ...,u r m,u r+1 m , ...,u soutm m ) ∈ Rs out m×s out m ,\nVm = (v 1 m,v 2 m, ...,v r m,v r+1 m , ...,v sinm m ) ∈ Rs in m×s in m ,\n(5)\nwith left and right singular vectors in Um and Vm, respectively, and Σm having square roots of the r eigenvalues of MTmMm on its diagonal. u k m and v k m(k = 1, ..., r) are the eigenvectors of MmM T m and MTmMm, respectively (Wall et al., 2003). Here, especially the right singular vectors in V T m are important, as they determine which structures of the input are processed by the transformation Mm. The original orthogonality constraint with equation 2 yields r unit vectors ejm as the eigenvectors of MTmMm. Hence, the influence of equation 2 on Vm is completely independent of training data and learning objectives.\nNext, we show that LRR facilitates learning dominant features from a given data set. For this, we consider an arbitrary basis for spanning the space of inputsDm for layerm. Let Bm : 〈 w1m, ...,w q m 〉 denote a set of q orthonormal basis vectors obtained via a Gram-Schmidt process, with t> q > r, and Dm denoting the matrix of the vectors in Bm. As we show in more detail in the appendix, our constraint from equation 4 requires eigenvectors of MTmMm to be w i m, with Vm containing r orthogonal vectors (v1m,v 2 m, ...,v r m) from Dm and (sinm − r) vectors from the null space of M .\nWe are especially interested in how Mm changes w.r.t. input in terms of Dm, i.e., we express LRR in terms of Dm. By construction, each input dim can be represented as a linear combination via a vector of coefficients cim that multiplies Dm so that d i m =Dmc i m. Since Mmdm = UmΣmV T mdm, the loss LRR of layer m can be rewritten as\nLRRm = (MTmMmdm − dm)2 = (VmΣTmΣmV Tmdm − dm)2\n= (VmΣ T mΣmV T mDmcm −Dmcm)2,\n(6)\nwhere we can assume that the coefficient vector cm is accumulated over the training data set size t via cm = ∑t i=1 c i m, since eventually every single datum inDm will contribute to LRRm . The central component of equation 6 is V TmDm. For a successful minimization, Vm needs to retain those w i m with the largest cm coefficients. As Vm is typically severely limited in terms of its representational capabilities by the number of adjustable weights in a network, it needs to focus on the most important eigenvectors in terms of cm in order to establish a small distance to Dmcm. Thus, features that appear multiple times in the input data with a corresponding factor in cm will more strongly contribute to minimizing LRRm . To summarize, Vm is driven towards containing r orthogonal vectors wim that represent the most frequent features of the input data, i.e., the dominant features. Additionally, due to the column vectors of Vm being mutually orthogonal, Mm is encouraged to extract different features from the input. By the sake of being distinct and representative for the data set, these features have the potential to be useful for new inference tasks. 
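The following small experiment illustrates this argument in isolation: a rank-limited matrix M is fitted with the data-dependent reconstruction loss on data dominated by two directions, and the right singular vectors of the result are compared with those directions. The toy data, the gradient-descent loop, and all dimensions are illustrative assumptions, not the paper's experimental setup.

```python
import torch

torch.manual_seed(0)
s_in, rank, n_samples = 16, 2, 512

# Toy data: two dominant orthogonal directions plus small isotropic noise.
w1 = torch.zeros(s_in); w1[0] = 1.0
w2 = torch.zeros(s_in); w2[1] = 1.0
coeff = torch.randn(n_samples, 2)
data = coeff @ torch.stack([w1, w2]) + 0.05 * torch.randn(n_samples, s_in)

# Rank-limited layer M, trained with the data-dependent loss ||M^T M d - d||^2.
M = torch.randn(rank, s_in, requires_grad=True)
opt = torch.optim.Adam([M], lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    recon = data @ M.t() @ M            # rows are (M^T M d)^T
    loss = ((recon - data) ** 2).mean()
    loss.backward()
    opt.step()

# The right singular vectors of the learned M should lie in span{w1, w2}.
_, _, Vh = torch.linalg.svd(M.detach(), full_matrices=False)
for i, v in enumerate(Vh):
    print(f"|<v{i+1}, w1>| = {(v @ w1).abs().item():.2f}, "
          f"|<v{i+1}, w2>| = {(v @ w2).abs().item():.2f}")
```

For each learned singular vector, the two printed overlaps should (in squares) sum to nearly one, i.e., the right singular vectors lie almost entirely in the span of the two dominant data directions, in line with the derivation above.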
The feature vectors embedded in M_m can be extracted from the network weights in practical settings, as we will demonstrate below.

Realization in Neural Networks Calculating M_m^T M_m is usually very expensive due to the dimensionality of M_m. Instead of building it explicitly, we constrain intermediate results to realize equation 3 when training. A regular training typically starts with a chosen network structure and trains the model weights for a given task via a suitable loss function. Our approach fully retains this setup and adds a second pass that reverses the initial structure while reusing all weights and biases. E.g., for a typical fully connected layer in the forward pass with d_{m+1} = M_m d_m + b_m, the reverse pass operation is given by d'_m = M_m^T (d_{m+1} - b_m), where d'_m denotes the reconstructed input.

[Figure 2 (architecture): forward-pass layers (conv, BN, activation with weights M_m, b_m) and the reverse pass (deconv with shared weights M_m and bias -b_m, BN) share weights across layers 1, ..., n. Figure 3: LPIPS comparisons of Std, Ort, Pre and RR for (a) a linear model and (b) a model with BN and ReLU activation. Figure 4: mutual information planes I(X; D_m) vs. I(D_m; Y): (a) how to read, (b) mutual information for task A, (c) after fine-tuning for A, (d) after fine-tuning for B; layers of RR models exhibit strong MI with in- and output.]

Our goal with the reverse pass is to transpose all operations of the forward pass to obtain identical intermediate activations between the layers with matching dimensionality. We can then constrain the intermediate results of each layer of the forward pass to match the results of the backward pass, as illustrated in figure 2. Unlike greedy layer-wise autoencoder pretraining, which trains each layer separately and only constrains d_1 and d'_1, we jointly train all layers and constrain all intermediate results. Due to the symmetric structure of the two passes, we can use a simple L2 difference to drive the network towards aligning the results:

L_{RR} = \sum_{m=1}^{n} \lambda_m \left\| d_m - d'_m \right\|_F^2 . (7)

Here d_m denotes the input of layer m in the forward pass and d'_m the output of layer m for the reverse pass. \lambda_m denotes a scaling factor for the loss of layer m, which, however, is typically constant in our tests across all layers.
Note that with our notation, d1 and d ′\n1 refer to the input data, and the reconstructed input, respectively.\nNext, we show how this setup realizes the regularization from equation 3. For clarity, we use a fully connected layer with bias. In a neural network with n hidden layers, the forward process for a layer m is given by dm+1 = Mmdm +bm,, with d1 and dn+1 denoting in- and output, respectively. For our pretraining, we build a reverse pass network with transposed operations starting with the final output where dn+1 = d ′ n+1, and the intermediate results d ′ m+1:\nd ′\nm = M T m(d\n′ m+1 − bm), (8) which yields ∥∥∥dm − d′m∥∥∥2 F = ∥∥MTmMmdm − dm∥∥2F . When this difference is minimized via equation 7, we obtain activated intermediate content during the reverse pass that reconstructs the values computed in the forward pass, i.e. d ′\nm+1 = dm+1 holds. As in equation 10 the reverse pass activation d ′\nm depends on dm+1 ′, this formulation yields a full reverse pass from output to input, which\nwe use for most training runs below. In this case\nd ′\nm = M T m(d\n′ m+1 − bm) = MTm(dm+1 − bm) = MTmMmdm , (9)\nwhich is consistent with equation 3, and satisfies the original constraint MTmMmdm − dm = 0. This version is preferable if a unique path from output to input exists. For architectures where the path is not unique, e.g., in the presence of additive residual connections, we use a local formulation\nd ′\nm = M T m(dm+1 − bm), (10)\nwhich employs dm+1 for jointly constraining all intermediate activations in the reverse pass.\nUp to now, the discussion focused on simplified neural networks without activation functions or extensions such as batch normalization (BN). While we leave incorporating such extensions for future work, our experiments consistently show that the inherent properties of our pretraining remain valid: even with activations and BN, our approach successfully extracts dominant structures and yields improved generalization. In the appendix, we give details on how to ensure that the latent space content for forward and reverse pass is aligned such that differences can be minimized.\nTo summarize, we realize the loss formulation of equation 7 to minimize ∑n\nm=1((M T mMm−I)dm)2\nwithout explicitly having to construct MTmMm. Following the notation above, we will refer to networks trained with the added reverse structure and the additional loss terms as RR variants. We consider two variants for the reverse pass: a local pretraining equation 10 using the datum dm+1 of a given layer, and a full version via equation 8 which uses d ′\nm+1 incoming from the next layer during the reverse pass.\nEmbedding Singular Values Below, Std denotes a regular training run (in orange color in graphs below), while RR denotes our models (in green). Pre and Ort will denote regular autoencoder pretraining and orthogonality, respectively, while a subscript will denote the task variant the model was trained for, e.g., StdT for task T. While we typically use all layers of a network in the constraints, a reduced variant that we compare to below only applies the constraint for the input data, i.e., m=1. A network trained with this variant, denoted by RR1A, is effectively trained to only reconstruct the input. It contains no constraints for the inner activations and layers of the network. 
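Before turning to these comparisons, a minimal PyTorch sketch of the construction of equations 7-10 for a plain fully connected network is given below: the forward pass stores all intermediate activations d_m, the reverse pass reuses the transposed weights as in equation 8 (the full variant), and the pretraining loss accumulates the per-layer L2 differences of equation 7. Activation functions and batch normalization are omitted here, consistent with the simplified setting discussed above; treating them is left to the appendix.

```python
import torch
import torch.nn as nn

class RRMLP(nn.Module):
    """Fully connected network with a weight-tied reverse pass (full variant, Eq. 8)."""
    def __init__(self, sizes=(12, 64, 64, 2)):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:]))

    def forward(self, d1):
        acts = [d1]                                       # d_1, ..., d_{n+1}
        for layer in self.layers:
            acts.append(layer(acts[-1]))                  # d_{m+1} = M_m d_m + b_m
        return acts

    def reverse(self, acts):
        d_rev = acts[-1]                                  # d'_{n+1} = d_{n+1}
        rev_acts = [d_rev]
        for layer in reversed(self.layers):
            d_rev = (d_rev - layer.bias) @ layer.weight   # d'_m = M_m^T (d'_{m+1} - b_m)
            rev_acts.append(d_rev)
        return list(reversed(rev_acts))                   # aligned with acts: index m -> d'_m

def rr_loss(acts, rev_acts, lam=1.0):
    """L_RR of Eq. 7: sum_m lambda_m || d_m - d'_m ||^2 (constant lambda_m here)."""
    return sum(lam * ((a - r) ** 2).sum() for a, r in zip(acts[:-1], rev_acts[:-1]))

# Usage: add the (scaled) rr_loss to the task loss during training or pretraining.
model = RRMLP()
x = torch.randn(32, 12)
acts = model(x)
loss = rr_loss(acts, model.reverse(acts))
```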
For the Ort models, we use the Spectral Restricted Isometry Property algorithm (Bansal et al., 2018).\nWe verify that the column vectors of Vm of models from RR training contain the dominant features of the input with the help of a classification test, employing a single fully connected layer, i.e. d2 = M1d1, with batch normalization and activation. To quantify this similarity, we compute an LPIPS distance (Zhang et al., 2018b) between vim and the training data (lower values being better).\n𝒗\n𝒗 0.43\n0.49\n0.42\n0.45 0.21\n0.22\n𝐋𝐏𝐈𝐏𝐒𝐒𝐭𝐝 𝐋𝐏𝐈𝐏𝐒𝐎𝐫𝐭 𝐋𝐏𝐈𝐏𝐒𝑹𝑹\n𝑎 𝐿𝑖𝑛𝑒𝑎𝑟 𝑚𝑜𝑑𝑒𝑙\n7\n2\n𝒅 𝒅\n𝐋𝐏𝐈𝐏𝐒𝐏𝐫𝐞\n0.14\n0.31\n𝒗\n𝒗\n1,0 𝑎𝑛𝑑 (0,1)\n𝐋𝐏𝐈𝐏𝐒𝐒𝐭𝐝 𝐋𝐏𝐈𝐏𝐒𝐎𝐫𝐭 𝐋𝐏𝐈𝐏𝐒𝑹𝑹\n0.50\n0.50\n0.50\n0.49\n0.23\n0.20\n𝑆𝑡𝑑 𝑂𝑟𝑡 𝑅𝑅\n…… ……\n𝑃𝑟𝑒 𝐋𝐏𝐈𝐏𝐒𝐏𝐫𝐞\n0.24\n0.40\n+\nWe employ a training data set constructed from two dominant classes (a peak in the top left, and bottom right quadrant, respectively), augmented with noise in the form of random scribbles. Based on the analysis above, we expect the RR training to extract the two dominant peaks during training. The LPIPS measurements confirm our SVD argumentation above, with average scores of 0.217±0.022 for RR, 0.319±0.114 for Pre, 0.495± 0.006 for Ort, and 0.500± 0.002 for Std. I.e., the RR model fares significantly better than the others. At the same time, the peaks are clearly visible for RR models, an example is shown in figure 3(b), while the other models fail to extract structures that resemble the input. Thus, by training with the full network and the original training objective, our pretraining yields structures that are interpretable and be inspected by humans.\nThe results above experimentally confirm our formulation of the RR loss and its ability to extract dominant and generalizing structures from the training data. Next, we will focus on quantified metrics and turn to measurements in terms of mutual information to illustrate the behavior of our pretraining for deeper networks." }, { "heading": "3 EVALUATION IN TERMS OF MUTUAL INFORMATION", "text": "As our approach hinges on the introduction of the reverse pass, we will show that it succeeds in terms of establishing mutual information (MI) between the input and the constrained intermediates inside a network. More formally, MI I(X;Y ) of random variablesX and Y measures how different the joint distribution ofX and Y is w.r.t. the product of their marginal distributions, i.e., the Kullback-Leibler divergence I(X;Y ) = DKL[P(X,Y )||PXPY ]. (Tishby & Zaslavsky, 2015) proposed MI plane to analyze trained models, which show the MI between the input X and activations of a layer Dm, i.e., I(X;Dm) and I(Dm;Y ), i.e., MI of layer Dm with output Y . These two quantities indicate how much information about the in- and output distributions are retained at each layer, and we use them to show to which extent our pretraining succeeds at incorporating information about the inputs throughout training.\nThe following tests employ networks with six fully connected layers with the objective to learn the mapping from 12 binary inputs to 2 binary output digits (Shwartz-Ziv & Tishby, 2017), with results accumulated over five runs. We compare the versions StdA, PreA, OrtA, RRA, and a variant of the latter: RR1A, i.e. a version where only the input d1 is constrained to be reconstructed. 
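The MI-plane quantities I(X; D_m) and I(D_m; Y) used in the following can be estimated, for small discrete settings such as the 12-bit inputs above, by discretizing the activations and counting joint occurrences. The binning-based estimator sketched below is one common choice and is not necessarily the estimator used for the reported results; bin count and the hashing of binned activation vectors are assumptions.

```python
import numpy as np

def discrete_mi(a, b):
    """I(A;B) in bits for two equally long sequences of discrete labels."""
    n = len(a)
    joint = {}
    for pair in zip(a, b):
        joint[pair] = joint.get(pair, 0) + 1
    pa = {k: c / n for k, c in zip(*np.unique(a, return_counts=True))}
    pb = {k: c / n for k, c in zip(*np.unique(b, return_counts=True))}
    mi = 0.0
    for (x, y), c in joint.items():
        p = c / n
        mi += p * np.log2(p / (pa[x] * pb[y]))
    return mi

def mi_plane_point(x_ids, y_ids, activations, n_bins=30):
    """Return (I(X;D_m), I(D_m;Y)) for one layer.

    x_ids, y_ids : discrete identifiers of input pattern and label per sample.
    activations  : (n_samples, layer_width) array of layer outputs D_m.
    """
    # Discretize each activation vector into a single bin identifier.
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    binned = np.digitize(activations, edges).astype(np.int64)
    d_ids = [hash(row.tobytes()) for row in binned]
    return discrete_mi(x_ids, d_ids), discrete_mi(d_ids, y_ids)
```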
While figure 4a) visually summarizes the content of the MI planes, the graph in (b) highlights that training with the RR loss correlates input and output distributions across all layers: the cluster of green points in the center of the graph shows that all layers contain balanced MI between in- as well as output and the activations of each layer. RR1A fares slightly worse, while StdA and OrtA almost exclusively focus on the output with I(Dm;Y ) being close to one. PreA instead only focuses on reconstructing inputs. Thus, the early layers cluster in the right-top corner, while the last layer I(D7;Y ) fails to\nalign with the outputs. Once we continue fine-tuning these models without regularization, the MI naturally shifts towards the output, as shown in figure 4 (c). Here, RRAA outperforms the other models in terms of final performance. Likewise, RRAB performs best for a transfer task B with switched output digits, as shown in graph (d). The final performance for both tasks across all runs is summarized in figure 5. These graphs visualize that the proposed pretraining succeeds in robustly establishing mutual information between inputs and targets across a full network, in addition to extracting reusable features.\nMI has received attention recently as a learning objective, e.g., in the form of the InfoGAN approach (Chen et al., 2016) for learning disentangled and interpretable latent representations. While MI is typically challenging to assess and estimate (WaltersWilliams & Li, 2009), the results above show that our approach provides a straightforward and robust way for including it as a learning objective. In this way, we can, e.g., reproduce the disentangling results from (Chen et al., 2016), which are shown in figure 1(c). A generative model with our pretraining extracts intuitive latent dimensions for the different digits, line thickness, and orientation without any additional modifications of the loss function. The joint training of the full network with the proposed reverse structure, including non-linearities and normalization, yields a natural and intuitive decomposition." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We now turn to a broad range of network structures, i.e., CNNs, Autoencoders, and GANs, with a variety of data sets and tasks to show our approach succeeds in improving inference accuracy and generality for modern day applications and architectures.\nTransfer-learning Benchmarks We first evaluate our approach with two state-of-the-art benchmarks for transfer learning. The first one uses the texture-shape data set from (Geirhos et al., 2018), which contains challenging images of various shapes combined with patterns and textures to be classified. The results below are given for 10 runs each. For the stylized data shown in figure 6 (a), the accuracy of PreTS is low with 20.8%. This result is in line with observations in previous work and confirms the detrimental effect of classical pretraining. StdTS yields a performance of 44.2%, and OrtTS improves the performance to 47.0%, while RRTS yields a performance of 54.7% (see figure 6b). Thus, the accuracy of RRTS is 162.98% higher than PreTS, 23.76% higher than StdTS, and 16.38% higher than OrtTS. 
To assess generality, we also apply the models to new data without re-training, i.e. an edge and a filled data set, also shown in figure 6 (a). For the edge data set, RRTS outperforms PreTS, StdTS and OrtTS by 178.82%, 50% and 16.75%, respectively.

[Figure 6: (a) Examples from the texture-shape data set (stylized, edge, and filled data). (b, c, d) Texture-shape test accuracy comparisons of PreTS / StdTS / OrtTS / RRTS: stylized 0.208 / 0.442 / 0.470 / 0.547, edge 0.085 / 0.158 / 0.203 / 0.237, filled 0.159 / 0.331 / 0.372 / 0.408.]

It is worth pointing out that the additional constraints of our training approach lead to moderately increased requirements for memory and computations, e.g., 41.86% more time per epoch than regular training for the texture-shape test. On the other hand, it allows us to train smaller models: we can reduce the weight count by 32% for the texture-shape case while still being on-par with OrtTS in terms of classification performance. By comparison, regular layer-wise pretraining requires a significant overhead and fundamental changes to the training process. Our pretraining fully integrates with existing training methodologies and can easily be deactivated via λm = 0.

As a second test case, we use a CIFAR-based task transfer (Recht et al., 2019) that measures how well models trained on the original CIFAR 10 generalize to a new data set (CIFAR 10.1) collected according to the same principles as the original one. Here we use a Resnet110 with 110 layers and 1.7 million parameters. Due to the consistently low performance of the Pre models (Alberti et al., 2017), we focus on Std, Ort and RR for this test case. In terms of accuracy across 5 runs, OrtC10 outperforms StdC10 by 0.39%, while RRC10 outperforms OrtC10 by another 0.28% in terms of absolute test accuracy (figure 7). This increase for RR training matches the gains reported for orthogonality in previous work (Bansal et al., 2018), thus showing that our approach yields substantial practical improvements over the latter. It is especially interesting how well performance for CIFAR 10 translates into transfer performance for CIFAR 10.1. Here, RRC10 still outperforms OrtC10 and StdC10 by 0.22% and 0.95%, respectively. Hence, the models from our pretraining very successfully translate gains in performance from the original task to the new one, indicating that the models have successfully learned a set of more general features. To summarize, both benchmark cases confirm that the proposed pretraining benefits generalization.

Generative Adversarial Models In this section, we employ our pretraining in the context of generative models for transferring from synthetic to real-world data from the ScalarFlow data set (Eckert et al., 2019). As super-resolution task A, we first use a fully-convolutional generator network, adversarially trained with a discriminator network on the synthetic flow data. While regular pretraining is more amenable to generative tasks than orthogonal regularization, it cannot be directly combined with adversarial training. Hence, we pretrain a model Pre for a reconstruction task at high resolution without a discriminator instead.
Figure 8 (a) demonstrates that our method works well in conjunction with the GAN training: as shown in the bottom row, the trained generator succeeds in recovering the input via the reverse pass without modifications. A regular model StdA only yields a black image in this case. For PreA, the layer-wise nature of the pretraining severely limits its capabilities to learn the correct data distribution (Zhou et al., 2014), leading to a low performance.
We now mirror the generator model from the previous task to evaluate an autoencoder structure that we apply to two different data sets: the synthetic smoke data used for the GAN training (task B1), and a real-world RGB data set of smoke clouds (task B2). Thus, both variants represent transfer tasks, the second one being more difficult due to the changed data distribution. The resulting losses, summarized in figure 8 (b), show that RR training performs best for both autoencoder tasks: the L2 loss of RRAB1 is 68.88% lower than that of StdAB1, while it is 13.3% lower for task B2. The proposed pretraining also clearly outperforms the Pre variants. Within this series of tests, the RR performance for task B2 is especially encouraging, as this task represents a synthetic-to-real transfer.
Weather Forecasting Pretraining is particularly attractive in situations where the amount of data for training is severely limited. Weather forecasting is such a case, as systematic and accurate data for many relevant quantities are only available for approximately 50 years. We target three-day forecasts of pressure, ground temperature, and mid-level atmospheric temperature based on a public benchmark dataset (Rasp et al., 2020). This dataset contains worldwide observations from ERA5 (Hersbach et al., 2020) in six-hour intervals with a 5.625◦ resolution. For the joint inference of atmospheric pressure (500 hPa geopotential, Z500), ground temperature (T2M), and atmospheric temperature (at 850 hPa, T850), we use a convolutional ResNet architecture with 19 residual blocks. As regular pretraining is not compatible with residual connections, we omit it here.
We train a regular model (about 6.36M trainable parameters) with data from 1979 to 2015, and compare its inference accuracy across all datapoints from the years 2017 and 2018 to a similar model that employs our pretraining. While the regular model was trained for 25 epochs, the RR model was pretrained for 10 epochs and fine-tuned for another 15 epochs. Across all three physical quantities, the RR model clearly outperforms the regular model, as summarized in figure 1 (d) and figure 9 (details are given in the appendix). Especially for the latitude-weighted RMSE of Z500, it yields improvements of 5.5%. These improvements point to an improved generalization of the RR model via the pretraining and highlight its importance for domains where data is scarce." }, { "heading": "5 CONCLUSIONS", "text": "We have proposed a novel pretraining approach inspired by classic methods for unsupervised autoencoder pretraining and orthogonality constraints. In contrast to the classical methods, we employ a constrained reverse pass for the full non-linear network structure and include the original learning objective. We have shown for a wide range of scenarios, from mutual information over transfer-learning benchmarks to weather forecasting, that the proposed pretraining yields networks with better generalizing capabilities.
Our training approach is general, easy to integrate, and imposes no requirements regarding network structure or training methods. Most importantly, our results show that unsupervised pretraining has not lost its relevance in today’s deep learning environment.
As future work, we believe it will be exciting to evaluate our approach in additional contexts, e.g., for temporal predictions (Hochreiter & Schmidhuber, 1997; Cho et al., 2014), and for training explainable and interpretable models (Zeiler & Fergus, 2014; Chen et al., 2016; Du et al., 2018)." }, { "heading": "A APPENDIX", "text": "To ensure reproducibility, source code and data for all tests will be published. Runtimes were measured on a machine with Nvidia GeForce GTX 1080 Ti GPUs and an Intel Core i7-6850K CPU." }, { "heading": "A.1 DISCUSSION OF RELATED WORK", "text": "Greedy layer-wise pretraining was first proposed by (Bengio et al., 2007), and influenced a large number of follow-up works, providing a crucial method for enabling stable training runs of deeper networks. A detailed evaluation was performed by (Erhan et al., 2010), also highlighting cases where it can be detrimental. These problems were also detailed later in other works, e.g., by (Alberti et al., 2017). The transferability of learned features was likewise a topic of interest for transfer learning applications (Yosinski et al., 2014). Sharing similarities with our approach, (Rasmus et al., 2015) combined supervised and unsupervised learning objectives, but focused on denoising autoencoders and a layer-wise approach without weight sharing. We demonstrate the importance of leveraging state-of-the-art methods for training deep networks, i.e. without decomposing or modifying the network structure. This not only improves performance, but also very significantly simplifies the adoption of the pretraining pass in new application settings.
Extending the classic viewpoint of unsupervised autoencoder pretraining, several prior methods employed ”hard orthogonal constraints” to improve weight orthogonality via singular value decomposition (SVD) at training time (Huang et al., 2018; Jia et al., 2017; Ozay & Okatani, 2016). Bansal et al. (Bansal et al., 2018) additionally investigated efficient formulations of the orthogonality constraints. In practice, these constraints are difficult to satisfy, and correspondingly only weakly imposed. In addition, these methods focus on improving performance for a known, given task. This means the training process only extracts features that the network considers useful for improving the performance of the current task, not necessarily improving generalization or transfer performance (Torrey & Shavlik, 2010). While our approach shares similarities with SVD-based constraints, it can be realized with a very efficient L2-based formulation, and takes the full input distribution into account.
Recovering all input information from hidden representations of a network is generally very difficult (Dinh et al., 2016; Mahendran & Vedaldi, 2016), due to the loss of information throughout the layer transformations. In this context, (Tishby & Zaslavsky, 2015) proposed the information bottleneck principle, which states that for an optimal representation, information unrelated to the current task is omitted.
This highlights the common specialization of conventional training approaches.
Reversed network architectures were proposed in previous work (Ardizzone et al., 2018; Jacobsen et al., 2018; Gomez et al., 2017), but mainly focus on how to make a network fully invertible via augmenting the network with special structures. As a consequence, the path from input to output is different from the reverse path that translates output to input. Besides, the augmented structures of these approaches can be challenging to apply to general network architectures. In contrast, our approach fully preserves an existing architecture for the backward path, and does not require any operations that were not part of the source network. As such, it can easily be applied in new settings, e.g., adversarial training (Goodfellow et al., 2014). While methods using reverse connections were previously proposed (Zhang et al., 2018a; Teng & Choromanska, 2019), these modules primarily focus on transferring information between layers for a given task, and on auto-encoder structures for domain adaptation, respectively." }, { "heading": "A.2 PRETRAINING AND SINGULAR VALUE DECOMPOSITION", "text": "In this section we give a more detailed derivation of our loss formulation, extending Section 3 of the main paper. As explained there, our loss formulation aims for minimizing
L_{RR} = \sum_{m=1}^{n} (M_m^T M_m d_m^i - d_m^i)^2, (11)
where M_m \in R^{s_m^{out} \times s_m^{in}} denotes the weight matrix of layer m, and data from the input data set D_m is denoted by d_m^i \subset R^{s_m^{in}}, i = 1, 2, ..., t. Here t denotes the number of samples in the input data set.
Minimizing equation 11 is mathematically equivalent to
M_m^T M_m d_m^i - d_m^i = 0 (12)
for all d_m^i. Hence, perfectly fulfilling equation 11 would require all d_m^i to be eigenvectors of M_m^T M_m with corresponding eigenvalues being 1. As in Sec. 3 of the main paper, we make use of an auxiliary orthonormal basis B_m: \langle w_m^1, ..., w_m^q \rangle, for which q (with q \le t) denotes the number of linearly independent entries in D_m. While B_m never has to be explicitly constructed for our method, it can, e.g., be obtained via Gram-Schmidt. The matrix consisting of the vectors in B_m is denoted by D_m.
Since the w_m^h (h = 1, 2, ..., q) necessarily can be expressed as linear combinations of the d_m^i, equation 11 similarly requires the w_m^h to be eigenvectors of M_m^T M_m with corresponding eigenvalues being 1, i.e.:
M_m^T M_m w_m^h - w_m^h = 0 (13)
We denote the vector of coefficients to express d_m^i via D_m with c_m^i, i.e. d_m^i = D_m c_m^i. Then equation 12 can be rewritten as:
M_m^T M_m D_m c_m^i - D_m c_m^i = 0 (14)
Via an SVD of the matrix M_m in equation 14 we obtain
M_m^T M_m D_m c_m - D_m c_m = \sum_{h=1}^{q} ( M_m^T M_m w_m^h c_{m_h} - w_m^h c_{m_h} ) = \sum_{h=1}^{q} ( V_m \Sigma_m^T \Sigma_m V_m^T w_m^h c_{m_h} - w_m^h c_{m_h} ), (15)
where the coefficient vector c_m is accumulated over the training data set size t via c_m = \sum_{i=1}^{t} c_m^i. Here we assume that over the course of a typical training run eventually every single datum in D_m will contribute to L_{RR_m}. This form of the loss highlights that minimizing L_{RR} requires an alignment of V_m \Sigma_m^T \Sigma_m V_m^T w_m^h c_{m_h} and w_m^h c_{m_h}.
By construction, \Sigma_m contains the square roots of the eigenvalues of M_m^T M_m as its diagonal entries. The matrix has rank r = rank(M_m^T M_m), and since all eigenvalues are required to be 1 by equation 13, the multiplication with \Sigma_m in equation 15 effectively performs a selection of r column vectors from V_m.
Hence, we can focus on the interaction between the basis vectors w_m and the r active column vectors of V_m:
V_m \Sigma_m^T \Sigma_m V_m^T w_m^h c_{m_h} - w_m^h c_{m_h} = c_{m_h} (V_m \Sigma_m^T \Sigma_m V_m^T w_m^h - w_m^h) = c_{m_h} ( \sum_{f=1}^{r} (v_m^f)^T w_m^h v_m^f - w_m^h ). (16)
As V_m is obtained via an SVD, it contains r orthogonal eigenvectors of M_m^T M_m. Equation 13 requires w_m^1, ..., w_m^q to be eigenvectors of M_m^T M_m, but since typically the dimension of the input data set is much larger than the dimension of the weight matrix, i.e. r \le q, in practice only r vectors from B_m can fulfill equation 13. This means the vectors v_m^1, ..., v_m^r in V_m are a subset of the orthonormal basis vectors B_m: \langle w_m^1, ..., w_m^q \rangle with (w_m^h)^2 = 1. Then for any w_m^h we have
(v_m^f)^T w_m^h = 1, if v_m^f = w_m^h; (v_m^f)^T w_m^h = 0, otherwise. (17)
Thus if V_m contains w_m^h, we have
\sum_{f=1}^{r} (v_m^f)^T w_m^h v_m^f = w_m^h, (18)
and we trivially fulfill the constraint
c_{m_h} ( \sum_{f=1}^{r} (v_m^f)^T w_m^h v_m^f - w_m^h ) = 0. (19)
However, due to r being smaller than q in practice, V_m typically cannot include all vectors from B_m. Thus, if V_m does not contain w_m^h, we have (v_m^f)^T w_m^h = 0 for every vector v_m^f in V_m, which means
\sum_{f=1}^{r} (v_m^f)^T w_m^h v_m^f = 0. (20)
As a consequence, the constraint of equation 12 is only partially fulfilled:
c_{m_h} ( \sum_{f=1}^{r} (v_m^f)^T w_m^h v_m^f - w_m^h ) = -c_{m_h} w_m^h. (21)
As the w_m^h have unit length, the factors c_m determine the contribution of a datum to the overall loss. A feature w_m^h that appears multiple times in the input data will have a correspondingly larger factor in c_m and hence will contribute more strongly to L_{RR}. The L2 formulation of equation 11 leads to the largest contributors being minimized most strongly, and hence the repeating features of the data, i.e., dominant features, need to be represented in V_m to minimize the loss. Interestingly, this argumentation holds when additional loss terms are present, e.g., a loss term for classification. In such a case, the factors c_m will be skewed towards those components that fulfill the additional loss terms, i.e. favor basis vectors w_m^h that contain information about the loss terms. This, e.g., leads to clear digit structures being embedded in the weight matrices for the MNIST example below.
In summary, to minimize L_{RR}, V_m is driven towards containing r orthogonal vectors w_m^h which represent the most frequent features of the input data, i.e. the dominant features. It is worth emphasizing that B_m above is only an auxiliary basis, i.e., the derivation does not depend on any particular choice of B_m." }, { "heading": "A.3 EXAMPLES OF NETWORK ARCHITECTURES WITH PRETRAINING", "text": "While the proposed pretraining is significantly easier to integrate into training pipelines than classic autoencoder pretraining, there are subtleties w.r.t. the order of the operations in the reverse pass that we clarify with examples in the following sections. To specify NN architectures, we use the following notation: C(k, l, q) and D(k, l, q) denote convolutional and deconvolutional operations, respectively, while fully connected layers are denoted with F(l), where k, l, q denote kernel size, output channels and stride size, respectively. The bias of a CNN layer is denoted with b. I/O(z) denote input/output, whose dimensionality is given by z. Ir denotes the input of the reverse pass network. tanh, relu, lrelu denote hyperbolic tangent, ReLU, and leaky ReLU activation functions (AF), where we typically use a slope of 0.2 for the negative half-space.
UP, MP and BN denote 2× nearest-neighbor up-sampling, max pooling with 2 × 2 filters and stride 2, and batch normalization, respectively.
Below we provide additional examples of how to realize the pretraining loss L_rr in a neural network architecture. As explained in the main document, the constraint of equation 11 is formulated via
L_rr = \sum_{m=1}^{n} \lambda_m ‖d_m − d′_m‖_F^2, (22)
with d_m and λ_m denoting the vector of activated intermediate data in layer m from the forward pass, and a scaling factor, respectively. d′_m denotes the activations of layer m from the reverse pass. E.g., let L_m() denote the operations of a layer m in the forward pass, and L′_m() the corresponding operations for the reverse pass. Then d_{m+1} = L_m(d_m), and d′_m = L′_m(d′_{m+1}).
When equation 22 is minimized, we obtain activated intermediate content during the reverse pass that reconstructs the values computed in the forward pass, i.e. d′_{m+1} = d_{m+1} holds. Then d′_m can be reconstructed from the incoming activations from the reverse pass, i.e., d′_{m+1}, or from the output of layer m, i.e., d_{m+1}. Using d′_{m+1} results in a global coupling of input and output throughout all layers, i.e., the full loss variant. On the other hand, d_{m+1} yields a variant that ensures local reversibility of each layer, and yields a very similar performance, as we will demonstrate below. We employ this local loss for networks without a unique, i.e., bijective, connection between two layers. Intuitively, this is the case when inputs cannot be reliably reconstructed from outputs.
(Figure: mutual information planes — (a) Mutual Information Plane, How to Read, (b) Mutual Information for Task A, (c) After fine-tuning for A, (d) After fine-tuning for B; layers of RR models exhibit strong MI with in- & output.)
Full Network Pretraining: An illustration of a CNN structure with AF and BN and a full loss is shown in figure 10. While the construction of the reverse pass is straight-forward for all standard operations, i.e., fully connected layers, convolutions, pooling, etc., slight adjustments are necessary for AF and BN. It is crucial for our formulation that d_m and d′_m contain the same latent space content in terms of range and dimensionality, such that they can be compared in the loss. Hence, we use the BN parameters and the AF of layer m − 1 from the forward pass for layer m in the reverse pass. An example is shown in figure 14.
To illustrate this setup, we consider an example network employing convolutions with mixed AFs, BN, and MP. Let the network receive a field of 32^2 scalar values as input. From this input, 20, 40, and 60 feature maps are extracted in the first three layers. In addition, the kernel sizes are decreased from 5 × 5 to 3 × 3.
To clarify the structure, we use ReLU activation for the first convolution, while the second one uses a hyperbolic tangent, and the third one a sigmoid function. With the notation outlined above, the first three layers of the network are\nI(32, 32, 1) = d1 → C1(5, 20, 1) + b1 → BN1 → relu → d2 →MP → C2(4, 40, 1) + b2 → BN2 → tanh → d3 →MP → C3(3, 60, 1) + b3 → BN3 → sigm → d4 → ...\n(23)\nThe reverse pass for evaluating the loss re-uses all weights of the forward pass and ensures that all intermediate vectors of activations, dm and d ′\nm, have the same size and content in terms of normalization and non-linearity. We always consider states after activation for Lrr. Thus, dm denotes activations before pooling in the forward pass and d ′\nm contains data after up-sampling in the reverse pass, in order to ensure matching dimensionality. Thus, the last three layers of the reverse network for computing Lrr take the form:\n...→ d ′\n4 → −b3 → D3(3, 40, 1)→ BN2 → tanh→ UP\n→ d ′\n3 → −b2 → D2(4, 20, 1)→ BN1 → relu→ UP\n→ d ′\n2 → −b1 → D1(5, 3, 1)\n→ d ′\n1 = O(32, 32, 1).\n(24)\nHere, the de-convolutions Dx in the reverse network share weights with Cx in the forward network. I.e., the 4× 4× 20× 40 weight matrix of C2 is reused in its transposed form as a 4× 4× 40× 20 matrix in D2. Additionally, it becomes apparent that AF and BN of layer 3 from the forward pass do not appear in the listing of the three last layers of the reverse pass. This is caused by the fact that both are required to establish the latent space of the fourth layer. Instead, d3 in our example represents the activations after the second layer (with BN2 and tanh), and hence the reverse pass for d ′ 3 reuses both functions. This ensures that dm and d ′\nm contain the same latent space content in terms of range and dimensionality, and can be compared in equation 22.\nFor the reverse pass, we additionally found it beneficial to employ an AF for the very last layer if the output space has suitable content. E.g., for inputs in the form of RGB data we employ an additional activation with a ReLU function for the output to ensure the network generates only positive values.\nLocalized Pretraining: In the example above, we use a full pretraining with d ′\nm+1 to reconstruct the activations d ′\nm. The full structure establishes a slightly stronger relationship among the loss terms of different layers, and allows earlier layers to decrease the accumulated loss of later layers. However, if the architecture of the original network makes use of operations between layers that are not bijective, we instead use the local loss. E.g., this happens for residual connections with an addition or non-invertible pooling operations such as max-pooling. In the former, we cannot uniquely determine the b, c in a = b + c given only a. And unless special care is taken (Bruna et al., 2013), the source neuron of an output is not known for regular max-pooling operations. Note that our loss formulation has no problems with irreversible operations within a layer, e.g., most convolutional or fully-connected layers typically are not fully invertible. In all these cases the loss will drive the network towards a state that is as-invertible-as-possible for the given input data set. However, this requires a reliable vector of target activations in order to apply the constraints. 
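To make the two loss variants concrete, a minimal PyTorch-style sketch of equation 22 is given below. It is illustrative only and not the original implementation: it uses fully connected layers without BN for brevity, assumes ReLU activations, and the λ_m values are placeholders; the convolutional case of equations 23 and 24 follows the same pattern with transposed convolutions.

import torch
import torch.nn.functional as F

# Shared weights of a small fully connected network: 32 -> 20 -> 10.
weights = [torch.randn(20, 32, requires_grad=True),
           torch.randn(10, 20, requires_grad=True)]
biases = [torch.zeros(20, requires_grad=True),
          torch.zeros(10, requires_grad=True)]
lambdas = [1e-5, 1e-5]  # placeholder scaling factors lambda_m

def rr_loss(x, full=True):
    # Forward pass, storing the activated intermediate states d_1, ..., d_{n+1}.
    d = [x]
    for W, b in zip(weights, biases):
        d.append(torch.relu(F.linear(d[-1], W, b)))
    # Reverse pass: re-use the transposed weights and the activation of layer m-1.
    loss = torch.zeros(())
    d_rev = d[-1]  # d'_{n+1} = d_{n+1}
    for m in reversed(range(len(weights))):
        inp = d_rev if full else d[m + 1]  # full loss vs. local loss variant
        out = F.linear(inp - biases[m], weights[m].t())
        if m > 0:
            out = torch.relu(out)  # activation function of the preceding layer
        loss = loss + lambdas[m] * ((d[m] - out) ** 2).sum()
        d_rev = out
    return loss

x = torch.randn(4, 32)  # a small batch of inputs
print(rr_loss(x, full=True), rr_loss(x, full=False))

In practice this term is simply added to the original training objective, and setting all λ_m to zero recovers regular training, matching the deactivation mentioned above.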
If the connection between layers is not bijective, we cannot reconstruct this target for the constraints, as in the examples given above.
In such cases, we regard every layer as an individual unit to which we apply the constraints by building a localized reverse pass. For example, given a simple convolutional architecture with
d1 → C1(5, 20, 1) + b1 = d2 (25)
in the forward pass, we calculate d′1 with
(d2 − b1) → D1(5, 3, 1) = d′1. (26)
We, e.g., use this local loss in the Resnet110 network below. It is important to note that despite being closer to regular autoencoder pretraining, this formulation still incorporates all non-linearities of the original network structure, and jointly trains full networks while taking into account the original learning objective." }, { "heading": "A.4 MNIST AND PEAK TESTS", "text": "(Figure: LPIPS comparisons of Std, Ort, Pre and RR for (a) one hidden layer without BN or AF, (b) one hidden layer with BN and ReLU AF, (c) two hidden layers with BN and ReLU AF.)
Below we give details for the peak tests from Sec. 3 of the main paper and show additional tests with the MNIST data set.
Peak Test: For the peak test we generated a data set of 110 images shown in figure 12. 55 images contain a peak located in the upper left corner of the image. The other 55 contain a peak located in the bottom right corner. We added random scribbles to the images to complicate the task. All 110 images were labeled with a one-hot encoding of the two possible positions of the peak. We use 100 images as training data set, and the remaining 10 for testing. All peak models are trained for 5000 epochs with a learning rate of 0.0001, with λ = 1e−6 for RRA. To draw reliable conclusions, we show results for five repeated runs here. The neural network in this case contains one fully connected layer, with BN and ReLU activation.
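Since the peak data set is fully specified by the counts and labels above, it can be approximated in a few lines; the sketch below is illustrative only, and the image resolution, peak shape and scribble style are assumptions not stated in the text.

import numpy as np

rng = np.random.default_rng(0)
N, H, W = 110, 32, 32  # resolution is an assumption
images = np.zeros((N, H, W), dtype=np.float32)
labels = np.zeros((N, 2), dtype=np.float32)  # one-hot over the two peak positions
yy, xx = np.mgrid[0:H, 0:W]
for i in range(N):
    top_left = i < N // 2  # 55 upper-left peaks, 55 bottom-right peaks
    cy, cx = (H // 4, W // 4) if top_left else (3 * H // 4, 3 * W // 4)
    images[i] = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)  # the peak
    for _ in range(3):  # random scribbles to complicate the task
        r0, r1 = np.sort(rng.integers(0, H, 2))
        images[i, r0:r1 + 1, rng.integers(0, W)] += 0.3
    labels[i, 0 if top_left else 1] = 1.0
perm = rng.permutation(N)  # 100 images for training, 10 for testing
train_x, train_y = images[perm[:100]], labels[perm[:100]]
test_x, test_y = images[perm[100:]], labels[perm[100:]]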
The results are shown in figure 13, with both peak modes being consistently embedded into the weight matrix of RRA, while regular and orthogonal training show primarily random singular vectors.
We also use different network architectures in figure 14 to verify that the dominant features are successfully extracted when using more complex network structures. Even for two layers with BN and ReLU activations, our pretraining clearly extracts the two modes of the training data. The visual resemblance is slightly reduced in this case, as the network has the freedom to embed the features in both layers. Across all three cases (for which we performed 5 runs each), our pretraining clearly outperforms regular training and the orthogonality constraint in terms of extracting and embedding the dominant structures of the training data set in the weight matrix.
Figure 12: Data set used for the peak tests (training data and test data).
Figure 13: Five repeated tests with the peak data shown in Sec. 3 of the main paper. RRA robustly extracts dominant features from the data set. The two singular vectors strongly resemble the two peak modes of the training data. This is confirmed by the LPIPS measurements.
MNIST Test: We additionally verify that the column vectors of Vm of models from RR training contain the dominant features of the input with MNIST tests, which employ a single fully connected layer, i.e. d2 = M1d1. In the first MNIST test, the training data consists of only 2 different images. All MNIST models are trained for 1000 epochs with a learning rate of 0.0001, and λ = 1e−5 for RRA. After training, we compute the SVD for M1. SVDs of the weight matrices of trained models can be seen in figure 11.
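The inspection step itself is straightforward; the following sketch (illustrative only — the file name is a placeholder and the shapes assume 28 × 28 MNIST inputs flattened to 784) visualizes the leading singular vectors of M1:

import numpy as np
import matplotlib.pyplot as plt

# Load the trained weight matrix of the single fully connected layer d2 = M1 d1.
M1 = np.load("m1_weights.npy")  # shape (out_dim, 784); placeholder path
U, S, Vt = np.linalg.svd(M1, full_matrices=False)
# The right singular vectors live in input space and can be viewed as images;
# after RR training they should resemble the dominant training digits.
for k in range(2):
    plt.subplot(1, 2, k + 1)
    plt.imshow(Vt[k].reshape(28, 28), cmap="gray")
    plt.title(f"singular value {S[k]:.2f}")
plt.show()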
The LPIPS scores show that features embedded in the weights of RR are consistently closer to the training data set than all other methods, i.e., regular training Std, classic autoencoder pretraining Pre, and regularization via orthogonalization Ort. While the vectors of Std and Ort contain no recognizable structures.\nOverall, our experiments confirm the motivation of our pretraining formulation. They additionally show that employing an SVD of the network weights after our pretraining yields a simple and convenient method to give humans intuition about the features learned by a network." }, { "heading": "B MUTUAL INFORMATION", "text": "This section gives details of the mutual information and disentangled representation tests from Sec. 4 of the main paper." }, { "heading": "B.1 MUTUAL INFORMATION TEST", "text": "Mutual information (MI) measures the dependence of two random variables, i.e., higher MI means that there is more shared information between two parameters. More formally, the mutual information I(X;Y ) of random variables X and Y measures how different the joint distribution of X and Y is w.r.t. the product of their marginal distributions, i.e., the Kullback-Leibler divergence I(X;Y ) = KL[P(X,Y )||PXPY ], where KL denotes the Kullback-Leibler divergence. Let I(X;Dm) denote the mutual information between the activations of a layer Dm and input X. Similarly I(Dm;Y ) denotes the MI between layer m and the output Y . We use MI planes in the main paper, which show I(X;Dm) and I(Dm;Y ) in a 2D graph for the activations of each layer Dm of a network after training. This visualizes how much information about input and output distribution is retained at each layer, and how these relationships change within the network. For regular training, the information bottleneck principle (Tishby & Zaslavsky, 2015) states that early layers contain more information about the input, i.e., show high values for I(X;Dm) and I(Dm;Y ). Hence in the MI plane visualizations, these layers are often visible at the top-right corner. Later layers typically share a large amount of information with the output after training, i.e. show large I(Dm;Y ) values, and correlate less with the input (low I(X;Dm)). Thus, they typically show up in the top-left corner of the MI plane graphs.\nTraining Details: We use the same numerical studies as in (Shwartz-Ziv & Tishby, 2017) as task A, i.e. a regular feed-forward neural network with 6 fully-connected layers. The input variable X contains 12 binary digits that represent 12 uniformly distributed points on a 2D sphere. The learning objective is to discover binary decision rules which are invariant under O(3) rotations of the sphere. X has 4096 different patterns, which are divided into 64 disjoint orbits of the rotation group, forming a minimal sufficient partition for spherically symmetric rules (Kazhdan et al., 2003). To generate the input-output distribution P (X,Y ), We apply the stochastic rule p(y = 1|x) = Ψ(f(x) − θ), (x ∈ X, y ∈ Y ), where Ψ is a standard sigmoidal function Ψ(u) = 1/(1 + exp(−γu)), following (Shwartz-Ziv & Tishby, 2017). We then use a spherically symmetric real valued function of the pattern f(x), evaluated through its spherical harmonics power spectrum (Kazhdan et al., 2003), and compare with a threshold θ, which was selected to make p(y = 1) = ∑ x p(y = 1|x)p(x) ≈ 0.5, with uniform p(x). 
γ is high enough to keep the mutual information I(X;Y ) ≈ 0.99 bits.
For the transfer learning task B, we reverse output labels to check whether the model learned specific or generalizing features. E.g., if the output is [0,1] in the original data set, we swap the entries to [1,0]. 80% of the data (3277 data pairs) are used for training, and the rest (819 data pairs) is used for testing.
For the MI comparison in Fig. 4 of the main paper, we discuss models before and after fine-tuning separately, in order to illustrate the effects of regularization. We include a model with greedy layer-wise pretraining Pre, a regular model StdA, one with orthogonality constraints OrtA, and our regular model RRA, all before fine-tuning. For the model RRA all layers are constrained to be recovered in the backward pass. We additionally include the version RR1A, i.e. a model trained with only one loss term λ_1 |d_1 − d′_1|^2, which means that only the input is constrained to be recovered. Thus, RR1A represents a simplified version of our approach which receives no constraints that intermediate results of the forward and backward pass should match. For OrtA, we used the Spectral Restricted Isometry Property (SRIP) regularization (Bansal et al., 2018),
L_SRIP = β σ(W^T W − I), (27)
where W is the kernel, I denotes an identity matrix, and β represents the regularization coefficient. σ(W) = sup_{z ∈ R^n, z ≠ 0} ‖Wz‖ / ‖z‖ denotes the spectral norm of W.
As explained in the main text, all layers of the first stage, i.e. from RRA, RR1A, OrtA, PreA and StdA, are reused for training the fine-tuned models without regularization, i.e. RRAA, RR1AA, OrtAA, PreAA and StdAA. Likewise, all layers of the transfer task models RRAB, RR1AB, OrtAB, PreAB and StdAB are initialized from the models of the first training stage.
Analysis of Results: We first compare the version only constraining input and output reconstruction (RR1A) and the full loss version RRA. Fig. 4(b) of the main paper shows that all points of RRA are located in a central region of the MI plane, which means that all layers successfully encode information about the inputs as well as the outputs. This also indicates that every layer contains a similar amount of information about X and Y, and that the path from input to output is similar to the path from output to input. The points of RR1A, on the other hand, form a diagonal line. I.e., this network has different amounts of mutual information across its layers, and potentially a very different path for each direction. This difference in behavior is caused by the different constraints in these two versions: RR1A is only constrained to be able to regenerate its input, while the full loss for RRA ensures that the network learns features which are beneficial for both directions. This test highlights the importance of the constraints throughout the depth of a network in our formulation. In contrast, the I(X;D) values of later layers for StdA and OrtA are small (points near the left side), while I(D;Y ) is high throughout. This indicates that the outputs were successfully encoded and that increasing amounts of information about the inputs are discarded. Hence, more specific features about the given output data set are learned by StdA and OrtA. This shows that both models are highly specialized for the given task, and potentially perform worse when applied to new tasks.
PreA only focuses on decreasing the reconstruction loss, which results in high I(X;D) values for early layers, and low I(D;Y ) values for later layers. During the fine-tuning phase for task A (i.e. regularizers being disabled), all models focus on the output and maximize I(D;Y ). There are differences in the distributions of the points along the y-axis, i.e., how much MI with the output is retained, as shown in Fig. 4(c) of the main paper. For model RRAA, the I(D;Y ) value is higher than for StdAA, OrtAA, PreAA and RR1AA, which means outputs of RRAA are more closely related to the outputs, i.e., the ground truth labels for task A. Thus, RRAA outperforms the other variants for the original task.\nIn the fine-tuning phase for task B, StdAB stands out with very low accuracy in Fig. 5 of the main paper. This model from a regular training run has large difficulties to adapt to the new task. PreA aims at extracting features from inputs and reconstructed them. PreAB outperforms StdAB, which means features helpful for task B are extracted by PreA, however, it’s hard to guide the feature extracting process. Model OrtAB also performs worse than StdB. RRAB shows the best performance in this setting, demonstrating that our loss formulation yielded more generic features, improving the performance for related tasks such as the inverted outputs for B.\nWe also analyze the two variants of our pretraining: the local variant lRRA and the full version RRA in terms of mutual information. figure 15 shows the MI planes for these two models, also\nshowing RR1A for comparison. Despite the local nature of lRRA it manages to establish MI for the majority of the layers, as indicated by the cluster of layers in the center of the MI plane. Only the first layer moves towards the top right corner, and the second layer is affected slightly. I.e., these layers exhibit a stronger relationship with the distribution of the outputs. Despite this, the overall performance when fine-tuning or for the task transfer remains largely unaffected, e.g., the lRRA still clearly outperforms RR1A. This confirms our choice to use the full pretraining when network connectivity permits, and employ the local version in all other cases." }, { "heading": "B.2 DISENTANGLED REPRESENTATIONS", "text": "The InfoGAN approach (Chen et al., 2016) demonstrated the possibility to control the output of generative models via maximizing mutual information between outputs and structured latent variables. However, mutual information is very hard to estimate in practice (Walters-Williams & Li, 2009). The previous section and Fig. 4(b) of the main paper demonstrated that models from our pretraining (both RR1A and RRA) can increase the mutual information between network inputs and outputs. Intuitively, the pretraining explicitly constrains the model to recover an input given an output, which directly translates into an increase of mutual information between input and output distributions compared to regular training runs. For highlighting how our pretraining can yield disentangled representations (as discussed in the later paragraphs of Sec. 4 of the main text), we follow the experimental setup of InfoGAN (Chen et al., 2016): the input dimension of our network is 74, containing 1 ten-dimensional category code c1, 2 continuous latent codes c2, c3 ∼ U(−1, 1) and 62 noise variables. 
Here, U denotes a uniform distribution.
Training Details: As InfoGAN focuses on structuring latent variables and thus only increases the mutual information between latent variables and the output, we also focus the pretraining on the corresponding latent variables. I.e., the goal is to maximize their mutual information with the output of the generative model. Hence, we train a model RR1 for which only the latent dimensions c1, c2, c3 of the input layer are involved in the loss. We still employ a full reverse pass structure in the neural network architecture. c1 is a ten-dimensional category code, which is used for controlling the output digit category, while c2 and c3 are continuous latent codes, intended to represent (previously unknown) key properties of the digits, such as orientation or thickness. Building a relationship between c1 and the outputs is more difficult than for c2 or c3, since the 10 different digit outputs need to be encoded in a single continuous variable c1. Thus, for the corresponding loss term for c1 we use a slightly larger λ factor (by 33%) than for c2 and c3. Details of our results are shown in figure 16. Models are trained using a GAN loss (Goodfellow et al., 2014) as the loss function for the outputs.
Analysis of Results: In figure 16 we show additional results for the disentangling test case. It is visible that our pretraining of the RR1 model yields distinct and meaningful latent space dimensions for c1,2,3. While c1 controls the digit, c2,3 control the style and orientation of the digits. For comparison, a regular training run with model Std does not result in meaningful or visible changes when adjusting the latent space dimensions. This illustrates how strongly the pretraining can shape the latent space and, in addition to an intuitive embedding of dominant features, yield a disentangled representation.
(Figure 16 panels: varying the latent codes c versus varying the input noise.)" }, { "heading": "C DETAILS OF EXPERIMENTAL RESULTS", "text": "C.1 TEXTURE-SHAPE BENCHMARK
Training Details: All training data of the texture-shape tests were obtained from (Geirhos et al., 2018). The stylized data set contains 1280 images, of which 1120 images are used as training data, and 160 as test data. Both edge and filled data sets contain 160 images each, all of which are used for testing only. All three sets (stylized, edge, and filled) contain data for 16 different classes.
Analysis of Results: For a detailed comparison, we list the per-class accuracy of stylized data training runs for OrtTS, StdTS, PreTS and RRTS in figure 17. RRTS outperforms the other three models for most of the classes. RRTS requires an additional 41.86% of training time compared to StdTS, but yields a 23.76% higher performance. (Training times for these models are given in the supplementary document.) All models saturated, i.e. training StdTS or OrtTS longer does not increase classification accuracy any further. We also investigated how much we can reduce the model size when using our pretraining in comparison to the baselines. A reduced model only uses 67.94% of the parameters, while still outperforming OrtTS.
}, { "heading": "C.2 GENERATIVE ADVERSARIAL MODELS", "text": "Training Details: The data set of smoke simulation was generated with a Navier-Stokes solver from an open-source library (Thuerey & Pfaff, 2018). We generated 20 randomized simulations with 120 frames each, with 10% of the data being used for training. The low-resolution data were down-sampled from the high-resolution data by a factor of 4. Data augmentation, such as flipping and rotation was used in addition. As outlined in the main text, we consider building an autoencoder model for the synthetic data as task B1, and a generating samples from a real-world smoke data set as task B2. The smoke capture data set for B2 contains 2500 smoke images from the ScalarFlow data set (Eckert et al., 2019), and we again used 10% of these images as training data set.\nTask A: We use a fully convolutional CNN-based architecture for generator and discriminator networks. Note that the inputs of the discriminator contain high resolution data (64, 64, 1), as well as low resolution (16, 16, 1), which is up-sampled to (64, 64, 1) and concatenated with the high resolution data. In line with previous work (Xie et al., 2018), RRA and StdA are trained with a\nnon-saturating GAN loss, feature space loss and L2 loss as base loss function. All generator layers are involved in the pretraining loss. As greedy layer-wise autoencoder pretraining is not compatible with adversarial training, we pretrain PreA for reconstructing the high resolution data instead.\nTask B1: All encoder layers are initialized from RRA and StdA when training RRAB1 and StdAB1 . It is worth noting that the reverse pass of the generator is also constrained when training PreA and RRA. So both encoder and decoder are initialized with parameters from PreA and RRA when training PreAB1 and RRAB1 , respectively. This is not possible for a regular network like StdAB1 , as the weights obtained with a normal training run are not suitable to be transposed. Hence, the deconvolutions of StdAB1 are initialized randomly.\nTask B2: As the data set for the task B2 is substantially different and contains RBG images (instead of single channel gray-scale images), we choose the following setups for the RRA, PreA and StdA models: parameters from all six layers of StdA and RRA are reused for initializing decoder part of StdAB2 and RRAB2 , parameters from all six layers of PreA are reused for initializing the encoder part of PreAB2 . Specially, when initializing the last layer of PreAB2 , StdAB2 and RRAB2 , we copy and stack the parameters from the last layer of PreA, StdA and RRA, respectively, into three channels to match the dimenions of the outputs for taskB2. Here, the encoder part of RRAB2 and the decoder of PreAB2 are not initialized with RRA and PreA, due to the significant gap between training data sets of taskB1 and taskB2. Our experiments show that only initializing the decoder part of RRAB2 (avg. loss:1.56e7, std. dev.:3.81e5) outperforms initializing both encoder and decoder (avg. loss:1.82e7± 2.07e6), and only initializing the encoder part of PreAB2 (avg. loss:4.41e7 ± 6.36e6) outperforms initializing both encoder and decoder (avg. loss:9.42e7 ± 6.11e7). We believe the reason is that initializing both encoder and decoder part makes it more difficult to adjust the parameters for new data set that is very different from the data set of the source task.\nAnalysis of Results: Example outputs of PreAB1 , StdAB1 and RRAB1 are shown in figure 18. 
It is clearly visible that RRAB1 gives the best performance among these models. We similarly illustrate the behavior of the transfer learning task B2 for images of real-world fluids. This example likewise uses an autoencoder structure. Visual comparisons are provided in figure 19, where RRAB2 generates results that are closer to the reference. Overall, these results demonstrate the benefits of our pretraining for GANs, and indicate its potential to obtain more generic features from synthetic data sets that can be used for tasks involving real-world data." }, { "heading": "C.3 WEATHER FORECASTING", "text": "Training Details: The weather forecasting scenario discussed in the main text follows the methodology of the WeatherBench benchmark (Rasp et al., 2020). This benchmark contains 40 years of data from the ERA5 reanalysis project (Hersbach et al., 2020), which was re-sampled to a 5.625◦ resolution, yielding 32 × 64 grid points in ca. two-hour intervals. Data from 1979 to 2015 (i.e., 162114 samples) are used for training, and the year 2016 for validation. The last two years (2017 and 2018) are used as test data. All RMSE measurements are latitude-weighted to account for area distortions from the spherical projection.
The neural networks for the forecasting tasks employ a ResNet architecture with 19 layers, all of which contain 128 features with 3 × 3 kernels (apart from 7 × 7 in the first layer). All layers use batch normalization, leaky ReLU activation (slope 0.3), and dropout with strength 0.1. As inputs, the model receives feature-wise concatenated data from the WeatherBench data for 3 consecutive time steps, i.e., t, t − 6h, and t − 12h, yielding 117 channels in total. The last convolution jointly generates all three output fields, i.e., pressure at 500 hPa (Z500), temperature at 850 hPa (T850), and the 2-meter temperature (T2M).
Analysis of Results: In addition to the quantitative results for both years of test data given in the main text, figures 20 and 21 contain additional example visualizations from the test data set. A visualization of the spatial error distribution w.r.t. the ground truth is also shown in figure 21. It becomes apparent that our pretraining achieves reduced errors across the whole range of samples. Both temperature targets contain a larger number of smaller-scale features than the pressure fields. While the gains from our pretraining approach are not huge (on the order of 3% in both cases), they represent important steps forward. The learning objective is highly non-trivial, and the improvements were achieved with the same limited set of training data. Being very easy to integrate into existing training pipelines, these results indicate that the proposed pretraining methodology has the potential to yield improved learning results for a wide range of problem settings." } ]
2020
null
SP:70fc08b1b6161c770b5019272c2eaa0d2e3c39ee
[ "This paper raises and studies concerns about the generalization of 3D human motion prediction approaches across unseen motion categories. The authors address this problem by augmenting existing architectures with a VAE framework. More precisely, an encoder network that is responsible for summarizing the seed sequence is shared by two decoders for the reconstruction of the seed motion and prediction of the future motion. Hence, the encoder is trained by using both the ELBO of a VAE and the objective of the original motion prediction task. " ]
The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts as far as out-of-distribution (OoD). Here we formulate a new OoD benchmark based on the Human3.6M and CMU motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures to OoD failure by augmenting them with a generative model. When applied to current state-of-theart discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in-distribution performance, and can theoretically facilitate model interpretability. We suggest human motion predictors ought to be constructed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures to extreme distributional shift.
[]
[ { "authors": [ "Emre Aksan", "Manuel Kaufmann", "Otmar Hilliges" ], "title": "Structured prediction helps 3d human motion modelling", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alexandre Alahi", "Kratarth Goel", "Vignesh Ramanathan", "Alexandre Robicquet", "Li Fei-Fei", "Silvio Savarese" ], "title": "Social lstm: Human trajectory prediction in crowded spaces", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in ai safety", "venue": "arXiv preprint arXiv:1606.06565,", "year": 2016 }, { "authors": [ "Apratim Bhattacharyya", "Mario Fritz", "Bernt Schiele" ], "title": "Long-term on-board prediction of people in traffic scenes under uncertainty", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Anthony Bourached", "Parashkev Nachev" ], "title": "Unsupervised videographic analysis of rodent behaviour", "venue": "arXiv preprint arXiv:1910.11065,", "year": 2019 }, { "authors": [ "Judith Butepage", "Michael J Black", "Danica Kragic", "Hedvig Kjellstrom" ], "title": "Deep representation learning for human motion prediction and classification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yujun Cai", "Lin Huang", "Yiwei Wang", "Tat-Jen Cham", "Jianfei Cai", "Junsong Yuan", "Jun Liu", "Xu Yang", "Yiheng Zhu", "Xiaohui Shen" ], "title": "Learning progressive joint propagation for human motion prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Zhe Cao", "Gines Hidalgo", "Tomas Simon", "Shih-En Wei", "Yaser Sheikh" ], "title": "Openpose: realtime multiperson 2d pose estimation using part affinity fields", "venue": "arXiv preprint arXiv:1812.08008,", "year": 2018 }, { "authors": [ "Chien-Yen Chang", "Belinda Lange", "Mi Zhang", "Sebastian Koenig", "Phil Requejo", "Noom Somboon", "Alexander A Sawchuk", "Albert A Rizzo" ], "title": "Towards pervasive physical rehabilitation using microsoft kinect", "venue": null, "year": 2012 }, { "authors": [ "Nutan Chen", "Justin Bayer", "Sebastian Urban", "Patrick Van Der Smagt" ], "title": "Efficient movement representation by embedding dynamic movement primitives in deep autoencoders", "venue": "In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids),", "year": 2015 }, { "authors": [ "Erik Daxberger", "José Miguel Hernández-Lobato" ], "title": "Bayesian variational autoencoders for unsupervised out-of-distribution detection", "venue": "arXiv preprint arXiv:1912.05651,", "year": 2019 }, { "authors": [ "Katerina Fragkiadaki", "Sergey Levine", "Panna Felsen", "Jitendra Malik" ], "title": "Recurrent network models for human dynamics", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Evelien E Geertsema", "Roland D Thijs", "Therese Gutter", "Ben Vledder", "Johan B Arends", "Frans S Leijten", "Gerhard H Visser", "Stiliyan N Kalitzin" ], "title": "Automated video-based detection of nocturnal convulsive seizures in a residential care", "venue": "setting. 
Epilepsia,", "year": 2018 }, { "authors": [ "Anand Gopalakrishnan", "Ankur Mali", "Dan Kifer", "Lee Giles", "Alexander G Ororbia" ], "title": "A neural temporal model for human motion prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "F Sebastian Grassia" ], "title": "Practical parameterization of rotations using the exponential map", "venue": "Journal of graphics tools,", "year": 1998 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": null, "year": 1912 }, { "authors": [ "Liang-Yan Gui", "Yu-Xiong Wang", "Xiaodan Liang", "José MF Moura" ], "title": "Adversarial geometryaware human motion prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Liang-Yan Gui", "Kevin Zhang", "Yu-Xiong Wang", "Xiaodan Liang", "José MF Moura", "Manuela Veloso" ], "title": "Teaching robots to predict human motion", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2018 }, { "authors": [ "Xiao Guo", "Jongmoo Choi" ], "title": "Human motion prediction via learning local structure representations and temporal dependencies", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "arXiv preprint arXiv:1610.02136,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "arXiv preprint arXiv:1812.04606,", "year": 2018 }, { "authors": [ "Catalin Ionescu", "Fuxin Li", "Cristian Sminchisescu" ], "title": "Latent structured models for human pose estimation", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Catalin Ionescu", "Dragos Papava", "Vlad Olaru", "Cristian Sminchisescu" ], "title": "Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Ashesh Jain", "Amir R Zamir", "Silvio Savarese", "Ashutosh Saxena" ], "title": "Structural-rnn: Deep learning on spatio-temporal graphs", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Manish Kakar", "Håkan Nyström", "Lasse Rye Aarup", "Trine Jakobi Nøttrup", "Dag Rune Olsen" ], "title": "Respiratory motion prediction by using the adaptive neuro fuzzy inference system (anfis)", "venue": "Physics in Medicine & Biology,", "year": 2005 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Daehee Kim", "J Paik" ], "title": "Gait recognition using active shape model and motion prediction", "venue": "IET Computer Vision,", "year": 2010 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" 
], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "arXiv preprint arXiv:1611.07308,", "year": 2016 }, { "authors": [ "Daphne Koller", "Nir Friedman" ], "title": "Probabilistic graphical models: principles and techniques", "venue": "MIT press,", "year": 2009 }, { "authors": [ "Hema Koppula", "Ashutosh Saxena" ], "title": "Learning spatio-temporal structure from rgb-d videos for human activity detection and anticipation", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Hema Swetha Koppula", "Ashutosh Saxena" ], "title": "Anticipating human activities for reactive robotic response", "venue": "In IROS, pp. 2071", "year": 2013 }, { "authors": [ "Rynson WH Lau", "Addison Chan" ], "title": "Motion prediction for online gaming", "venue": "In International Workshop on Motion in Games,", "year": 2008 }, { "authors": [ "Andreas M Lehrmann", "Peter V Gehler", "Sebastian Nowozin" ], "title": "Efficient nonlinear markov models for human motion", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Chen Li", "Zhen Zhang", "Wee Sun Lee", "Gim Hee Lee" ], "title": "Convolutional sequence to sequence model for human dynamics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Dongxu Li", "Cristian Rodriguez", "Xin Yu", "Hongdong Li" ], "title": "Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison", "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), March 2020a", "year": 2020 }, { "authors": [ "Maosen Li", "Siheng Chen", "Yangheng Zhao", "Ya Zhang", "Yanfeng Wang", "Qi Tian" ], "title": "Dynamic multiscale graph neural networks for 3d skeleton based human motion prediction", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "Rayadurgam Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "arXiv preprint arXiv:1706.02690,", "year": 2017 }, { "authors": [ "Zhuo Ma", "Xinglong Wang", "Ruijie Ma", "Zhuzhu Wang", "Jianfeng Ma" ], "title": "Integrating gaze tracking and head-motion prediction for mobile device authentication: A proof of concept", "venue": null, "year": 2018 }, { "authors": [ "Wei Mao", "Miaomiao Liu", "Mathieu Salzmann", "Hongdong Li" ], "title": "Learning trajectory dependencies for human motion prediction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Julieta Martinez", "Michael J Black", "Javier Romero" ], "title": "On human motion prediction using recurrent neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Leland McInnes", "John Healy", "James Melville" ], "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "venue": "arXiv preprint arXiv:1802.03426,", "year": 2018 }, { "authors": [ "Yuichiro Motegi", "Yuma Hijioka", "Makoto Murakami" ], "title": "Human motion generative model using variational autoencoder", "venue": "International Journal of Modeling and Optimization,", "year": 2018 
}, { "authors": [ "Andriy Myronenko" ], "title": "3d mri brain tumor segmentation using autoencoder regularization", "venue": "In International MICCAI Brainlesion Workshop,", "year": 2018 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "arXiv preprint arXiv:1810.09136,", "year": 2018 }, { "authors": [ "Brian Paden", "Michal Čáp", "Sze Zheng Yong", "Dmitry Yershov", "Emilio Frazzoli" ], "title": "A survey of motion planning and control techniques for self-driving urban vehicles", "venue": "IEEE Transactions on intelligent vehicles,", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Dario Pavllo", "David Grangier", "Michael Auli" ], "title": "Quaternet: A quaternion-based recurrent model for human motion", "venue": "arXiv preprint arXiv:1805.06485,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Ahmadreza Reza Rofougaran", "Maryam Rofougaran", "Nambirajan Seshadri", "Brima B Ibrahim", "John Walley", "Jeyhan Karaoguz" ], "title": "Game console and gaming object with motion prediction modeling and methods for use therewith", "venue": "US Patent 9,943,760", "year": 2018 }, { "authors": [ "Akihiko Shirai", "Erik Geslin", "Simon Richir" ], "title": "Wiimedia: motion analysis methods and applications using a consumer video game controller", "venue": "In Proceedings of the 2007 ACM SIGGRAPH symposium on Video games,", "year": 2007 }, { "authors": [ "Ilya Sutskever", "Geoffrey E Hinton", "Graham W Taylor" ], "title": "The recurrent temporal restricted boltzmann machine", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Graham W Taylor", "Geoffrey E Hinton", "Sam T Roweis" ], "title": "Modeling human motion using binary", "venue": null, "year": 2014 }, { "authors": [ "Mao Wei", "Liu Miaomiao", "Salzemann Mathieu" ], "title": "History repeats itself: Human motion prediction", "venue": "rehabilitation. Journal of neuroengineering and rehabilitation,", "year": 2014 }, { "authors": [ "A. say" ], "title": "It may be useful to know that the patient’s deviation from a classical case of A, is in the direction of condition, say, B. We trained the augmented GCN model discussed in the main text with all actions, for both datasets. We use Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) to project the latent space of the trained GCN models onto 2 dimensions for all samples in the dataset", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Human motion is naturally intelligible as a time-varying graph of connected joints constrained by locomotor anatomy and physiology. Its prediction allows the anticipation of actions with applications across healthcare (Geertsema et al., 2018; Kakar et al., 2005), physical rehabilitation and training (Chang et al., 2012; Webster & Celik, 2014), robotics (Koppula & Saxena, 2013b;a; Gui et al., 2018b), navigation (Paden et al., 2016; Alahi et al., 2016; Bhattacharyya et al., 2018; Wang et al., 2019), manufacture (Švec et al., 2014), entertainment (Shirai et al., 2007; Rofougaran et al., 2018; Lau & Chan, 2008), and security (Kim & Paik, 2010; Ma et al., 2018).\nThe favoured approach to predicting movements over time has been purely inductive, relying on the history of a specific class of movement to predict its future. For example, state space models (Koller & Friedman, 2009) enjoyed early success for simple, common or cyclic motions (Taylor et al., 2007; Sutskever et al., 2009; Lehrmann et al., 2014). The range, diversity and complexity of human motion has encouraged a shift to more expressive, deep neural network architectures (Fragkiadaki et al., 2015; Butepage et al., 2017; Martinez et al., 2017; Li et al., 2018; Aksan et al., 2019; Mao et al., 2019; Li et al., 2020b; Cai et al., 2020), but still within a simple inductive framework.\nThis approach would be adequate were actions both sharply distinct and highly stereotyped. But their complex, compositional nature means that within one category of action the kinematics may vary substantially, while between two categories they may barely differ. Moreover, few real-world tasks restrict the plausible repertoire to a small number of classes–distinct or otherwise–that could be explicitly learnt. Rather, any action may be drawn from a great diversity of possibilities–both kinematic and teleological–that shape the characteristics of the underlying movements. This has two crucial implications. First, any modelling approach that lacks awareness of the full space of motion possibilities will be vulnerable to poor generalisation and brittle performance in the face of kinematic anomalies. Second, the very notion of In-Distribution (ID) testing becomes moot, for the relations between different actions and their kinematic signatures are plausibly determinable only across the entire domain of action. A test here arguably needs to be Out-of-Distribution (OoD) if it is to be considered a robust test at all.\nThese considerations are amplified by the nature of real-world applications of kinematic modelling, such as anticipating arbitrary deviations from expected motor behaviour early enough for an automatic intervention to mitigate them. Most urgent in the domain of autonomous driving (Bhattacharyya et al., 2018; Wang et al., 2019), such safety concerns are of the highest importance, and\nare best addressed within the fundamental modelling framework. Indeed, Amodei et al. (2016) cites the ability to recognize our own ignorance as a safety mechanism that must be a core component in safe AI. Nonetheless, to our knowledge, current predictive models of human kinematics neither quantify OoD performance nor are designed with it in mind. There is therefore a need for two frameworks, applicable across the domain of action modelling: one for hardening a predictive model to anomalous cases, and another for quantifying OoD performance with established benchmark datasets. 
General frameworks are here desirable in preference to new models, for the field is evolving so rapidly greater impact can be achieved by introducing mechanisms that can be applied to a breadth of candidate architectures, even if they are demonstrated in only a subset. Our approach here is founded on combining a latent variable generative model with a standard predictive model, illustrated with the current state-of-the-art discriminative architecture (Mao et al., 2019; Wei et al., 2020), a strategy that has produced state-of-the-art in the medical imaging domain Myronenko (2018). Our aim is to achieve robust performance within a realistic, low-volume, high-heterogeneity data regime by providing a general mechanism for enhancing a discriminative architecture with a generative model.\nIn short, our contributions to the problem of achieving robustness to distributional shift in human motion prediction are as follows:\n1. We provide a framework to benchmark OoD performance on the most widely used opensource motion capture datasets: Human3.6M (Ionescu et al., 2013), and CMU-Mocap1, and evaluate state-of-the-art models on it.\n2. We present a framework for hardening deep feed-forward models to OoD samples. We show that the hardened models are fast to train, and exhibit substantially improved OoD performance with minimal impact on ID performance.\nWe begin section 2 with a brief review of human motion prediction with deep neural networks, and of OoD generalisation using generative models. In section 3, we define a framework for benchmarking OoD performance using open-source multi-action datasets. We introduce in section 4 the discriminative models that we harden using a generative branch to achieve a state-of-the-art (SOTA) OoD benchmark. We then turn in section 5 to the architecture of the generative model and the overall objective function. Section 6 presents our experiments and results. We conclude in section 7 with a summary of our results, current limitations, and caveats, and future directions for developing robust and reliable OoD performance and a quantifiable awareness of unfamiliar behaviour." }, { "heading": "2 RELATED WORK", "text": "Deep-network based human motion prediction. Historically, sequence-to-sequence prediction using Recurrent Neural Networks (RNNs) have been the de facto standard for human motion prediction (Fragkiadaki et al., 2015; Jain et al., 2016; Martinez et al., 2017; Pavllo et al., 2018; Gui et al., 2018a; Guo & Choi, 2019; Gopalakrishnan et al., 2019; Li et al., 2020b). Currently, the SOTA is dominated by feed forward models (Butepage et al., 2017; Li et al., 2018; Mao et al., 2019; Wei et al., 2020). These are inherently faster and easier to train than RNNs. The jury is still out, however, on the optimal way to handle temporality for human motion prediction. Meanwhile, recent trends have overwhelmingly shown that graph-based approaches are an effective means to encode the spatial dependencies between joints (Mao et al., 2019; Wei et al., 2020), or sets of joints (Li et al., 2020b). In this study, we consider the SOTA models that have graph-based approaches with a feed forward mechanism as presented by (Mao et al., 2019), and the subsequent extension which leverages motion attention, Wei et al. (2020). We show that these may be augmented to improve robustness to OoD samples.\nGenerative models for Out-of-Distribution prediction and detection. 
Despite the power of deep neural networks for prediction in complex domains (LeCun et al., 2015), they face several challenges that limits their suitability for safety-critical applications. Amodei et al. (2016) list robustness to distributional shift as one of the five major challenges to AI safety. Deep generative models, have been used extensively for detection of OoD inputs and have been shown to generalise\n1t http://mocap.cs.cmu.edu/\nwell in such scenarios (Hendrycks & Gimpel, 2016; Liang et al., 2017; Hendrycks et al., 2018). While recent work has showed some failures in simple OoD detection using density estimates from deep generative models (Nalisnick et al., 2018; Daxberger & Hernández-Lobato, 2019), they remain a prime candidate for anomaly detection (Kendall & Gal, 2017; Grathwohl et al., 2019; Daxberger & Hernández-Lobato, 2019).\nMyronenko (2018) use a Variational Autoencoder (VAE) (Kingma & Welling, 2013) to regularise an encoder-decoder architecture with the specific aim of better generalisation. By simultaneously using the encoder as the recognition model of the VAE, the model is encouraged to base its segmentations on a complete picture of the data, rather than on a reductive representation that is more likely to be fitted to the training data. Furthermore, the original loss and the VAE’s loss are combined as a weighted sum such that the discriminator’s objective still dominates. Further work may also reveal useful interpretability of behaviour (via visualisation of the latent space as in Bourached & Nachev (2019)), generation of novel motion (Motegi et al., 2018), or reconstruction of missing joints as in Chen et al. (2015)." }, { "heading": "3 QUANTIFYING OUT-OF-DISTRIBUTION PERFORMANCE OF HUMAN MOTION PREDICTORS", "text": "Even a very compact representation of the human body such as OpenPose’s 17 joint parameterisation Cao et al. (2018) explodes to unmanageable complexity when a temporal dimension is introduced of the scale and granularity necessary to distinguish between different kinds of action: typically many seconds, sampled at hundredths of a second. Moreover, though there are anatomical and physiological constraints on the space of licit joint configurations, and their trajectories, the repertoire of possibility remains vast, and the kinematic demarcations of teleologically different actions remain indistinct. Thus, no practically obtainable dataset may realistically represent the possible distance between instances. To simulate OoD data we first need ID data that can be varied in its quantity and heterogeneity, closely replicating cases where a particular kinematic morphology may be rare, and therefore undersampled, and cases where kinematic morphologies are both highly variable within a defined class and similar across classes. Such replication needs to accentuate the challenging aspects of each scenario.\nWe therefore propose to evaluate OoD performance where only a single action, drawn from a single action distribution, is available for training and hyperparameter search, and testing is carried out on the remaining classes. In appendix A, to show that the action categories we have chosen can be distinguished at the time scales on which our trajectories are encoded, we train a simple classifier and show it separates the selected ID action from the others with high accuracy (100% precision and recall for the CMU dataset). Performance over the remaining set of actions may thus be considered OoD." 
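To make the protocol concrete, the following is a minimal sketch of the leave-one-action-in split, assuming pose sequences are stored in a dictionary keyed by (action, subject); the data layout, the choice of validation subject, and the function name are illustrative assumptions rather than the authors' actual data loader.

ID_ACTION = "walking"   # the single in-distribution action
TEST_SUBJECT = 5        # held-out actor used for all evaluation (H3.6M convention)
VAL_SUBJECT = 11        # assumption: one training actor reserved for cross-validation

def make_splits(sequences):
    """sequences: dict mapping (action, subject) -> list of pose sequences."""
    train, val, test_id, test_ood = [], [], [], {}
    for (action, subject), seqs in sequences.items():
        if subject == TEST_SUBJECT:
            if action == ID_ACTION:
                test_id.extend(seqs)                            # ID test: walking only
            else:
                test_ood.setdefault(action, []).extend(seqs)    # OoD test: all remaining actions
        elif action == ID_ACTION:
            (val if subject == VAL_SUBJECT else train).extend(seqs)
    return train, val, test_id, test_ood

All model selection is then carried out on the ID validation split only, so that no OoD information leaks into training or hyperparameter choices.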
}, { "heading": "4 BACKGROUND", "text": "Here we describe the current SOTA model proposed by Mao et al. (2019) (GCN). We then describe the extension by Wei et al. (2020) (attention-GCN) which antecedes the GCN prediction model with motion attention." }, { "heading": "4.1 PROBLEM FORMULATION", "text": "We are given a motion sequence X1:N = (x1,x2,x3, · · · ,xN ) consisting of N consecutive human poses, where xi ∈ RK , with K the number of parameters describing each pose. The goal is to predict the poses XN+1:N+T for the subsequent T time steps." }, { "heading": "4.2 DCT-BASED TEMPORAL ENCODING", "text": "The input is transformed using Discrete Cosine Transformations (DCT). In this way each resulting coefficient encodes information of the entire sequence at a particular temporal frequency. Furthermore, the option to remove high or low frequencies is provided. Given a joint, k, the position of k over N time steps is given by the trajectory vector: xk = [xk,1, . . . , xk,N ] where we convert to a\nDCT vector of the form: Ck = [Ck,1, . . . , Ck,N ] where Ck,l represents the lth DCT coefficient. For δl1 ∈ RN = [1, 0, · · · , 0], these coefficients may be computed as\nCk,l =\n√ 2\nN N∑ n=1 xk,n 1√ 1 + δl1 cos ( π 2N (2n− 1)(l − 1) ) . (1)\nIf no frequencies are cropped, the DCT is invertible via the Inverse Discrete Cosine Transform (IDCT):\nxk,l =\n√ 2\nN N∑ l=1 Ck,l 1√ 1 + δl1 cos ( π 2N (2n− 1)(l − 1) ) . (2)\nMao et al. use the DCT transform with a graph convolutional network architecture to predict the output sequence. This is achieved by having an equal length input-output sequence, where the input is the DCT transformation of xk = [xk,1, . . . , xk,N , xk,N+1, . . . , xk,N+T ], here [xk,1, . . . , xk,N ] is the observed sequence and [xk,N+1, . . . , xk,N+T ] are replicas of xk,N (ie xk,n = xk,N for n ≥ N ). The target is now simply the ground truth xk." }, { "heading": "4.3 GRAPH CONVOLUTIONAL NETWORK", "text": "Suppose C ∈ RK×(N+T ) is defined on a graph with k nodes and N +T dimensions, then we define a graph convolutional network to respect this structure. First we define a Graph Convolutional Layer (GCL) that, as input, takes the activation of the previous layer (A[l−1]), where l is the current layer.\nGCL(A[l−1]) = SA[l−1]W + b (3)\nwhere A[0] = C ∈ RK×(N+T ), and S ∈ RK×K is a layer-specific learnable normalised graph laplacian that represents connections between joints, W ∈ Rn[l−1]×n[l] are the learnable inter-layer weightings and b ∈ Rn[l] are the learnable biases where n[l] are the number of hidden units in layer l." }, { "heading": "4.4 NETWORK STRUCTURE AND LOSS", "text": "The network consists of 12 Graph Convolutional Blocks (GCBs), each containing 2 GCLs with skip (or residual) connections, see figure 7. Additionally, there is one GCL at the beginning of the network, and one at the end. n[l] = 256, for each layer, l. There is one final skip connection from the DCT inputs to the DCT outputs, which greatly reduces train time. The model has around 2.6M parameters. Hyperbolic tangent functions are used as the activation function. Batch normalisation is applied before each activation.\nThe outputs are converted back to their original coordinate system using the IDCT (equation 2) to be compared to the ground truth. The loss used for joint angles is the average l1 distance between the ground-truth joint angles, and the predicted ones. 
Thus, the joint angle loss is:\n`a = 1\nK(N + T ) N+T∑ n=1 K∑ k=1 |x̂k,n − xk,n| (4)\nwhere x̂k,n is the predicted kth joint at timestep n and xk,n is the corresponding ground truth.\nThis is separately trained on 3D joint coordinate prediction making use of the Mean Per Joint Position Error (MPJPE), as proposed in Ionescu et al. (2013) and used in Mao et al. (2019); Wei et al. (2020). This is defined, for each training example, as\n`m = 1\nJ(N + T ) N+T∑ n=1 J∑ j=1 ‖p̂j,n − pj,n‖2 (5)\nwhere p̂j,n ∈ R3 denotes the predicted jth joint position in frame n. And pj,n is the corresponding ground truth, while J is the number of joints in the skeleton." }, { "heading": "4.5 MOTION ATTENTION EXTENSION", "text": "Wei et al. (2020) extend this model by summing multiple DCT transformations from different sections of the motion history with weightings learned via an attention mechanism. For this extension, the above model (the GCN) along with the anteceding motion attention is trained end-to-end. We refer to this as the attention-GCN." }, { "heading": "5 OUR APPROACH", "text": "Myronenko (2018) augment an encoder-decoder discriminative model by using the encoder as a recognition model for a Variational Autoencoder (VAE), (Kingma & Welling, 2013; Rezende et al., 2014). Myronenko (2018) show this to be a very effective regulariser. Here, we also use a VAE, but for conjugacy with the discriminator, we use graph convolutional layers in the decoder. This can be compared to the Variational Graph Autoencoder (VGAE), proposed by Kipf & Welling (2016). However, Kipf & Welling’s application is a link prediction task in citation networks and thus it is desired to model only connectivity in the latent space. Here we model connectivity, position, and temporal frequency. To reflect this distinction, the layers immediately before, and after, the latent space are fully connected creating a homogenous latent space.\nThe generative model sets a precedence for information that can be modelled causally, while leaving elements of the discriminative machinery, such as skip connections, to capture correlations that remain useful for prediction but are not necessarily persuant to the objective of the generative model. In addition to performing the role of regularisation in general, we show that we gain robustness to distributional shift across similar, but different, actions that are likely to share generative properties. The architecture may be considered with the visual aid in figure 1." }, { "heading": "5.1 VARIATIONAL AUTOENCODER (VAE) BRANCH AND LOSS", "text": "Here we define the first 6 GCB blocks as our VAE recognition model, with a latent variable z ∈ RK×nz = N(µz, σz), where µz ∈ RK×nz , σz ∈ RK×nz . nz = 8, or 32 depending on training stability.\nThe KL divergence between the latent space distribution and a spherical Gaussian N(0, I) is given by:\n`l = KL(q(Z|C)||q(Z)) = 1\n2 nz∑ 1 ( µz 2 + σz 2 − 1− log((σz)2) ) . (6)\nThe decoder part of the VAE has the same structure as the discriminative branch; 6 GCBs. We parametrise the output neurons as µ ∈ RK×(N+T ), and log(σ2) ∈ RK×(N+T ). We can now model the reconstruction of inputs as samples of a maximum likelihood of a Gaussian distribution which constitutes the second term of the negative Variational Lower Bound (VLB) of the VAE:\n`G = log(p(C|Z)) = − 1\n2 N+T∑ n=1 K∑ l=1 ( log(σ2k,l) + log(2π) + |Ck,l − µk,l|2 elog(σ 2 k,l) ) , (7)\nwhere Ck,l are the DCT coefficients of the ground truth." 
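As a reference for equations (6) and (7), the following is a minimal PyTorch sketch of the two terms of the negative variational lower bound, assuming the recognition model outputs (mu_z, log_var_z) and the decoder outputs (mu, log_var) over the DCT coefficients; the function names and the summed (rather than batch-averaged) reduction are assumptions for illustration.

import math
import torch

def kl_term(mu_z, log_var_z):
    # Eq. (6): KL divergence between N(mu_z, sigma_z^2) and the spherical
    # Gaussian N(0, I), summed over latent dimensions.
    return 0.5 * torch.sum(mu_z ** 2 + log_var_z.exp() - 1.0 - log_var_z)

def gaussian_log_likelihood(C, mu, log_var):
    # Eq. (7): log-likelihood of the ground-truth DCT coefficients C under the
    # decoder's Gaussian N(mu, sigma^2); log_var is clamped for numerical
    # stability, matching the clamping described in the model configuration.
    log_var = log_var.clamp(-20.0, 3.0)
    return -0.5 * torch.sum(log_var + math.log(2.0 * math.pi)
                            + (C - mu) ** 2 / log_var.exp())

# The negative VLB regulariser that enters the total training loss is then
# -(gaussian_log_likelihood(C, mu, log_var) - kl_term(mu_z, log_var_z)).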
}, { "heading": "5.2 TRAINING", "text": "We train the entire network together with the additional of the negative VLB:\n` = 1\n(N + T )K N+T∑ n=1 K∑ k=1\n|x̂k,n − xk,n|︸ ︷︷ ︸ Discriminitive loss −λ (`G − `l)︸ ︷︷ ︸ VLB . (8)\nHere λ is a hyperparameter of the model. The overall network is ≈ 3.4M parameters. The number of parameters varies slightly as per the number of joints, K, since this is reflected in the size of the graph in each layer (k = 48 for H3.6M, K = 64 for CMU joint angles, and K = J = 75 for CMU Cartesian coordinates). Furthermore, once trained, the generative model is not required for prediction and hence for this purpose is as compact as the original models." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 DATASETS AND EXPERIMENTAL SETUP", "text": "Human3.6M (H3.6M) The H3.6M dataset (Ionescu et al., 2011; 2013), so called as it contains a selection of 3.6 million 3D human poses and corresponding images, consists of seven actors each performing 15 actions, such as walking, eating, discussion, sitting, and talking on the phone. Martinez et al. (2017); Mao et al. (2019); Li et al. (2020b) all follow the same training and evaluation procedure: training their motion prediction model on 6 (5 for train and 1 for cross-validation) of the actors, for each action, and evaluate metrics on the final actor, subject 5. For easy comparison to these ID baselines, we maintain the same train; cross-validation; and test splits. However, we use the single, most well-defined action (see appendix A), walking, for train and cross-validation, and we report test error on all the remaining actions from subject 5. In this way we conduct all parameter selection based on ID performance.\nCMU motion capture (CMU-mocap) The CMU dataset consists of 5 general classes of actions. Similarly to (Li et al., 2018; 2020a; Mao et al., 2019) we use 8 detailed actions from these classes: ’basketball’, ’basketball signal’, ’directing traffic’ ’jumping, ’running’, ’soccer’, ’walking’, and ’window washing’. We use two representations, a 64-dimensional vector that gives an exponential map representation (Grassia, 1998) of the joint angle, and a 75-dimensional vector that gives the 3D Cartesian coordinates of 25 joints. We do not tune any hyperparameters on this dataset and use only a train and test set with the same split as is common in the literature (Martinez et al., 2017; Mao et al., 2019).\nModel configuration We implemented the model in PyTorch (Paszke et al., 2017) using the ADAM optimiser (Kingma & Ba, 2014). The learning rate was set to 0.0005 for all experiments where, unlike Mao et al. (2019); Wei et al. (2020), we did not decay the learning rate as it was hypothesised that the dynamic relationship between the discriminative and generative loss would make this redundant. The batch size was 16. For numerical stability, gradients were clipped to a maximum `2-norm of 1 and log(σ̂2) and values were clamped between -20 and 3. Code for all experiments is available at the following anonymized link: https://anonymous.4open.science/r/11a7a2b5-da1343f8-80de-51e526913dd2/\nBaseline comparison Both Mao et al. (2019) (GCN), and Wei et al. (2020) (attention-GCN) use this same Graph Convolutional Network (GCN) architecture with DCT inputs. In particular, Wei et al. (2020) increase the amount of history accounted for by the GCN by adding a motion attention mechanism to weight the DCT coefficients from different sections of the history prior to being inputted to the GCN. 
We compare against both of these baselines on OoD actions. For attentionGCN we leave the attention mechanism preceding the GCN unchanged such that the generative branch of the model is reconstructing the weighted DCT inputs to the GCN, and the whole network is end-to-end differentiable.\nHyperparameter search Since a new term has been introduced to the loss function, it was necessary to determine a sensible weighting between the discriminative and generative models. In Myronenko (2018), this weighting was arbitrarily set to 0.1. It is natural that the optimum value here will relate to the other regularisation parameters in the model. Thus, we conducted random hyperparameter search for pdrop and λ in the ranges pdrop = [0, 0.5] on a linear scale, and λ = [10, 0.00001] on a logarithmic scale. For fair comparison we also conducted hyperparameter search on GCN, for values of the dropout probability (pdrop) between 0.1 and 0.9. For each model, 25 experiments were run and the optimum values were selected on the lowest ID validation error. The hyperparameter search was conducted only for the GCN model on short-term predictions for the H3.6M dataset and used for all future experiments hence demonstrating generalisability of the architecture." }, { "heading": "6.2 RESULTS", "text": "Consistent with the literature we report short-term (< 500ms) and long-term (> 500ms) predictions. In comparison to GCN, we take short term history into account (10 frames, 400ms) for both\ndatasets to predict both short- and long-term motion. In comparison to attention-GCN, we take long term history (50 frames, 2 seconds) to predict the next 10 frames, and predict futher into the future by recursively applying the predictions as input to the model as in Wei et al. (2020). In this way a single short term prediction model may produce long term predictions.\nWe use Euclidean distance between the predicted and ground-truth joint angles for the Euler angle representation. For 3D joint coordinate representation we use the MPJPE as used for training (equation 5). Table 1 reports the joint angle error for the short term predictions on the H3.6M dataset. Here we found the optimum hyperparameters to be pdrop = 0.5 for GCN, and λ = 0.003, with pdrop = 0.3 for our augmentation of GCN. The latter of which was used for all future experiments, where for our augmentation of attention-GCN we removed dropout altogether. On average, our model performs convincingly better both ID and OoD. Here the generative branch works well as both a regulariser for small datasets and by creating robustness to distributional shifts. We see similar and consistent results for long-term predictions in table 2.\nFrom tables 3 and 4, we can see that the superior OoD performance generalises to the CMU dataset with the same hyperparameter settings with a similar trend of the difference being larger for longer predictions for both joint angles and 3D joint coordinates. For each of these experiments nz = 8.\nTable 5, shows that the effectiveness of the generative branch generalises to the very recent motion attention architecture. For attention-GCN we used nz = 32. Here, interestingly short term predictions are poor but long term predictions are consistently better. 
This supports our assertion that information relevant to generative mechanisms are more intrinsic to the causal model and thus, here, when the predicted output is recursively used, more useful information is available for the future predictions.\nWalking (ID) Eating (OoD) Smoking (OoD) Average (of 14 for OoD) milliseconds 560 720 880 1000 560 720 880 1000 560 720 880 1000 560 720 880 1000 att-GCN (OoD) 55.4 60.5 65.2 68.7 87.6 103.6 113.2 120.3 81.7 93.7 102.9 108.7 112.1 129.6 140.3 147.8 ours (OoD) 58.7 60.6 65.5 69.1 81.7 94.4 102.7 109.3 80.6 89.9 99.2 104.1 113.1 127.7 137.9 145.3\nTable 5: Long-term prediction of 3D joint positions on H3.6M. Here, ours is also trained with the attention-GCN model. Full table in appendix, table 9." }, { "heading": "7 CONCLUSION", "text": "We draw attention to the need for robustness to distributional shifts in predicting human motion, and propose a framework for its evaluation based on major open source datasets. We demonstrate that state-of-the-art discriminative architectures can be hardened to extreme distributional shifts by augmentation with a generative model, combining low in-distribution predictive error with maximal generalisability. The introduction of a surveyable latent space further provides a mechanism for model perspicuity and interpretability, and explicit estimates of uncertainty facilitate the detection of anomalies: both characteristics are of substantial value in emerging applications of motion prediction, such as autonomous driving, where safety is paramount. Our investigation argues for wider use of generative models in behavioural modelling, and shows it can be done with minimal or no performance penalty, within hybrid architectures of potentially diverse constitution." }, { "heading": "B FULL RESULTS", "text": "" }, { "heading": "C LATENT SPACE OF THE VAE", "text": "One of the advantages of having a generative model involved is that we have a latent variable which represents a distribution over deterministic encodings of the data. We considered the question of whether or not the VAE was learning anything interpretable with its latent variable as was the case in Kipf & Welling (2016).\nThe purpose of this investigation was two-fold. First to determine if the generative model was learning a comprehensive internal state, or just a non-linear average state as is common to see in the training of VAE like architectures. The result of this should suggest a key direction of future work. Second, an interpretable latent space may be of paramount usefulness for future applications of human motion prediction. Namely, if dimensionality reduction of the latent space to an inspectable number of dimensions yields actions, or behaviour that are close together if kinematically or teleolgically similar, as in Bourached & Nachev (2019), then human experts may find unbounded potential application for a interpretation that is both quantifiable and qualitatively comparable to all other classes within their domain of interest. For example, a medical doctor may consider a patient to have unusual symptoms for condition, say, A. It may be useful to know that the patient’s deviation from a classical case of A, is in the direction of condition, say, B.\nWe trained the augmented GCN model discussed in the main text with all actions, for both datasets. 
We use Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) to project the latent space of the trained GCN models onto 2 dimensions for all samples in each dataset independently. From figure 6 we can see that for both models the 2D projection relatively closely resembles a spherical Gaussian. Further, we can see from figure 6b that the action walking does not occupy a discernible domain of the latent space. This result is further verified by using the same classifier as in appendix A, which achieved no better than chance when using the latent variables as input rather than the raw data.

This result implies that the benefit observed in the main text from using the generative model is significant even when the generative model itself performs poorly. In this case we can be sure that the reconstructions are at least not good enough to distinguish between actions. It is hence natural for future work to investigate whether the improvement in OoD performance is greater when training in such a way as to ensure that the generative model performs well. There are multiple avenues through which such an objective might be achieved, pre-training the generative model being one of the salient candidates.

D ARCHITECTURE DIAGRAMS

[Architecture diagrams of the GCL and GCB blocks; only the block labels are recoverable from the extracted text.]" } ]
2020
null
SP:8f1c7fabe235bdf095007948007509102dd0c126
[ "The authors address the problem of discrete keypoint matching. For an input pair of images, the task is to match the unannotated (but given as part of the input) keypoints. The main contribution is identifying the bottleneck of the current SOTA algorithm: a fixed connectivity construction given by Delauney triangulation. By replacing this with an end-to-end learnable algorithm, they outperform SOTA with a decent margin." ]
Graph matching (GM) has traditionally been modeled as a deterministic optimization problem characterized by an affinity matrix under a pre-defined graph topology. Though there have been several attempts at learning more effective node-level affinities/representations for matching, they still rely heavily on the initial graph structure/topology, which is typically obtained through heuristics (e.g. Delaunay triangulation or k-nearest neighbors) and is not adjusted during the learning process to adapt to problem-specific patterns. We argue that such a mechanism of learning on a fixed topology may restrict the potential of a GM solver for specific tasks, and propose to learn latent graph topology in place of the fixed topology taken as input. To this end, we devise two types of latent graph generation procedures, in a deterministic and a generative fashion, respectively. In particular, the generative procedure emphasizes across-graph consistency and can thus be viewed as a matching-guided generative model. Our methods show superior performance over previous state-of-the-art methods on public benchmarks.
[]
[ { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Aleksandar Bojchevski", "Oleksandr Shchur", "Daniel Zügner", "Stephan Günnemann" ], "title": "Netgan: Generating graphs via random walks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Lubomir Bourdev", "Jitendra Malik" ], "title": "Poselets: Body part detectors trained using 3d human pose annotations", "venue": "In ICCV,", "year": 2009 }, { "authors": [ "T. Caetano", "J. McAuley", "L. Cheng", "Q. Le", "A.J. Smola" ], "title": "Learning graph matching", "venue": null, "year": 2009 }, { "authors": [ "Vı́ctor Campos", "Brendan Jou", "Xavier Giró-i Nieto", "Jordi Torres", "Shih-Fu Chang" ], "title": "Skip rnn: Learning to skip state updates in recurrent neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "M. Cho", "K.M. Lee" ], "title": "Progressive graph matching: Making a move of graphs via probabilistic voting", "venue": "In CVPR,", "year": 2012 }, { "authors": [ "M. Cho", "J. Lee", "K.M. Lee" ], "title": "Reweighted random walks for graph matching", "venue": "In ECCV,", "year": 2010 }, { "authors": [ "M. Cho", "K. Alahari", "J. Ponce" ], "title": "Learning graphs to match", "venue": "In ICCV,", "year": 2013 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "Molgan: An implicit generative model for small molecular graphs", "venue": "arXiv preprint arXiv:1805.11973,", "year": 2018 }, { "authors": [ "Xingbo Du", "Junchi Yan", "Hongyuan Zha" ], "title": "Joint link prediction and network alignment via cross-graph embedding", "venue": null, "year": 2019 }, { "authors": [ "Xingbo Du", "Junchi Yan", "Rui Zhang", "Hongyuan Zha" ], "title": "Cross-network skip-gram embedding for joint network alignment and link prediction", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2020 }, { "authors": [ "A. Egozi", "Y. Keller", "H. Guterman" ], "title": "A probabilistic approach to spectral graph matching", "venue": null, "year": 2013 }, { "authors": [ "Paul Erdos", "Alfred Renyi" ], "title": "On random graphs i", "venue": "In Publicationes Mathematicae Debrecen", "year": 1959 }, { "authors": [ "Mark Everingham", "Luc Gool", "Christopher K. Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": "Int. J. Comput. 
Vision,", "year": 2010 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen", "Frank Weichert", "Heinrich Müller" ], "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "venue": null, "year": 2018 }, { "authors": [ "Matthias Fey", "Jan E Lenssen", "Christopher Morris", "Jonathan Masci", "Nils M Kriege" ], "title": "Deep graph matching consensus", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Mark Heimann", "Haoming Shen", "Tara Safavi", "Danai Koutra" ], "title": "Regal: Representation learning-based graph alignment", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,", "year": 2018 }, { "authors": [ "Jiayi Huang", "Mostofa Patwary", "Gregory Diamos" ], "title": "Coloring big graphs with alphagozero", "venue": null, "year": 2019 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Bo Jiang", "Jin Tang", "Chris Ding", "Yihong Gong", "Bin Luo" ], "title": "Graph matching via multiplicative update algorithm", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Daniel D Johnson" ], "title": "Learning graphical state transitions", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "arXiv preprint arXiv:1611.07308,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Woute Kool", "Max Welling" ], "title": "Attention solves your tsp", "venue": null, "year": 2018 }, { "authors": [ "E.M. Loiola", "N.M. de Abreu", "P.O. Boaventura-Netto", "P. Hahn", "T. Querido" ], "title": "A survey for the quadratic assignment problem", "venue": "EJOR, pp", "year": 2007 }, { "authors": [ "Juhong Min", "Jongmin Lee", "Jean Ponce", "Minsu Cho" ], "title": "Spair-71k: A large-scale benchmark for semantic correspondence", "venue": "arXiv preprint arXiv:1908.10543,", "year": 2019 }, { "authors": [ "Radford M Neal", "Geoffrey E Hinton" ], "title": "A view of the em algorithm that justifies incremental, sparse, and other variants", "venue": "In Learning in graphical models,", "year": 1998 }, { "authors": [ "A. Nowak", "S. Villar", "A. Bandeira", "J. 
Bruna" ], "title": "Revised note on learning quadratic assignment with graph neural networks", "venue": "DSW,", "year": 2018 }, { "authors": [ "Shirui Pan", "Ruiqi Hu", "Guodong Long", "Jing Jiang", "Lina Yao", "Chengqi Zhang" ], "title": "Adversarially regularized graph autoencoder for graph embedding", "venue": null, "year": 2018 }, { "authors": [ "Les Piegl", "Wayne Tiller" ], "title": "The NURBS book", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Marin Vlastelica Pogancic", "Anselm Paulus", "Vı́t Musil", "Georg Martius", "Michal Rolı́nek" ], "title": "Differentiation of black-box combinatorial solvers", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Michal Rolı́nek", "Paul Swoboda", "Dominik Zietlow", "Anselm Paulus", "Vı́t Musil", "Georg Martius" ], "title": "Deep graph matching via blackbox differentiation of combinatorial solvers", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Christian Schellewald", "Christoph Schnörr" ], "title": "Probabilistic subgraph matching based on convex relaxation", "venue": "In EMMCVPR,", "year": 2005 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Paul Swoboda", "Carsten Rother", "Hassan Abu Alhaija", "Dagmar Kainmuller", "Bogdan Savchynskyy" ], "title": "A study of lagrangean decompositions and dual ascent solvers for graph matching", "venue": null, "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph Attention Networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Hongwei Wang", "Jia Wang", "Jialin Wang", "Miao Zhao", "Weinan Zhang", "Fuzheng Zhang", "Xing Xie", "Minyi Guo" ], "title": "Graphgan: Graph representation learning with generative adversarial nets", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "R. Wang", "J. Yan", "X. Yang" ], "title": "Learning combinatorial embedding networks for deep graph matching", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Runzhong Wang", "Junchi Yan", "Xiaokang Yang" ], "title": "Neural graph matching network: Learning lawler’s quadratic assignment problem with extension to hypergraph and multiple-graph matching", "venue": "arXiv preprint arXiv:1911.11308,", "year": 2019 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Hao Xiong", "Junchi Yan" ], "title": "Btwalk: Branching tree random walk for multi-order structured network embedding", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2020 }, { "authors": [ "Tianshu Yu", "Junchi Yan", "Yilin Wang", "Wei Liu" ], "title": "Generalizing graph matching beyond quadratic assignment model", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Tianshu Yu", "Runzhong Wang", "Junchi Yan", "Baoxin Li" ], "title": "Learning deep graph matching with channel-independent embedding and hungarian attention", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "A. Zanfir", "C. 
Sminchisescu" ], "title": "Deep learning of graph matching", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Si Zhang", "Hanghang Tong" ], "title": "Final: Fast attributed network alignment", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Zhen Zhang", "Wee Sun Lee" ], "title": "Deep graphical feature learning for the feature matching problem", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "F. Zhou", "F. Torre" ], "title": "Factorized graph matching", "venue": "IEEE PAMI,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Being a long standing NP-hard problem (Loiola et al., 2007), graph matching (GM) has received persistent attention from the machine learning and optimization communities for many years. Concretely, for two graphs with n nodes for each, graph matching seeks to solve1:\nmax z\nz>Mz s.t. Z ∈ {0, 1}n×n, Hz = 1 (1)\nwhere the affinity matrix M ∈ Rn 2×n2\n+ encodes node (diagonal elements) and edge (off-diagonal) affinities/similarities and z is the column-wise vectorization form of the permutation matrix Z. H is a selection matrix ensuring each row and column of Z summing to 1. 1 is a column vector filled with 1. Eq. (1) is the so-called quadratic assignment problem (QAP) (Cho et al., 2010). Maximizing Eq. (1) amounts to maximizing the sum of the similarity induced by matching vector Z. While Eq. (1) does not encode the topology of graphs, Zhou & Torre (2016) further propose to factorize M to explicitly incorporate topology matrix, where a connectivity matrix A ∈ {0, 1}n×n is used to indicate the topology of a single graph (Aij = 1 if there exists an edge between nodes i and j; Aij = 0 otherwise). To ease the computation, Eq. (1) is typically relaxed by letting z ∈ [0, 1]n 2 and keeping other parts of Eq. (1) intact. Traditional solvers to such relaxed problem generally fall into the categories of iterative update (Cho et al., 2010; Jiang et al., 2017) or numerical continuation (Zhou & Torre, 2016; Yu et al., 2018), where the solvers are developed under two key assumptions: 1) Affinity M is pre-computed with some non-negative metrics, e.g. Gaussian kernel, L2-distance or Manhattan distance; 2) Graph topology is pre-defined as input either in dense (Schellewald & Schnörr, 2005) or sparse (Zhou & Torre, 2016) fashion. There have been several successful attempts towards adjusting the first assumption by leveraging the power of deep networks to learn more effective graph representation for GM (Wang et al., 2019a; Yu et al., 2020; Fey et al., 2020). However, to our best knowledge, there is little previous work questioning and addressing the problem regarding the second assumption in the context of learning-based graph matching2. For example, existing\n1Without loss of generality, we discuss graph matching under the setting of equal number of nodes without outliers. The unequal case can be readily handled by introducing extra constraints or dummy nodes. Bipartite matching and graph isomorphism are subsets of this quadratic formulation (Loiola et al., 2007).\n2There are some loosely related works (Du et al., 2019; 2020) on network alignment and link prediction without learning, which will be discussed in detail in the related works.\nstandard pipeline of keypoint matching in computer vision will construct initial topology by Delaunay triangulation or k-nearest neighbors. Then this topology will be freezed throughout the subsequent learning and matching procedures. In this sense, the construction of graph topology is peeled from matching task as a pre-processing stage. More examples can be found beyond the vision communities such as in social network alignment (Zhang & Tong, 2016; Heimann et al., 2018; Xiong & Yan, 2020) assuming fixed network structure for individual node matching in two networks.\nWe argue that freezing graph topology for matching can hinder the capacity of graph matching solvers. 
For a pre-defined graph topology, the linked nodes sometimes result in less meaningful interaction, especially under the message-passing mechanism in graph neural networks (Kipf & Welling, 2017). We give a schematic demonstration in Fig. 1. Though some earlier attempts (Cho & Lee, 2012; Cho et al., 2013) seek to adjust the graph topology under traditional non-deep learning setting, such procedures cannot be readily integrated into end-to-end deep learning frameworks due to undifferentiable nature. Building upon the hypothesis that there exists some latent topology better than heuristically created one for GM, our aim is to learn it (or its distribution) for GM. Indeed, jointly solving matching and graph topology learning can be intimidating due to the combinatorial nature, which calls for more advanced approaches.\nIn this paper, we propose an end-to-end framework to jointly learn the latent graph topology and perform GM, termed as deep latent graph matching (DLGM). We leverage the power of graph generative model to automatically produce graph topology from given features and their geometric relations, under specific locality prior. Different from generative learning on singleton graphs (Kipf & Welling, 2016; Bojchevski et al., 2018), our graph generative learning is performed in a pairwise fashion, leading to a novel matching-guided generative paradigm. The source code will be made publicly available.\nContributions: 1) We explore a new direction for more flexible GM by actively learning latent topology, in contrast to previous works using fixed topology as input; 2) Under this setting, we propose a deterministic optimization approach to learn graph topology for matching; 3) We further present a generative way to produce latent topology under a probabilistic interpretation by ExpectationMaximization. This framework can also adapt to other problems where graph topology is the latent structure to infer; 4) Our method achieves state-ofthe-art performance on public benchmarks." }, { "heading": "2 RELATED WORKS", "text": "In this section, we first discuss existing works for graph topology and matching updating whose motivation is a bit similar to ours while the technique is largely different. Then we discuss relevant works in learning graph matching and generative graph models from the technical perspective.\nTopology updating and matching. There are a few works for joint graph topology updating and matching, in the context of network alignment. Specifically, given two initial networks for matching, Du et al. (2019) show how to alternatively perform link prediction within each network and node matching across networks based on the observation that these two tasks can benefit to each other. In their extension (Du et al., 2020), a skip-gram embedding framework is further established under the same problem setting. In fact, these works involve a random-walk based node embedding updating and classification based link prediction modules and the whole algorithm runs in a one-shot optimization fashion. There is neither explicit training dataset nor trained matching model (except\nfor the link classifier), which bears less flavor of machine learning. In contrast, our method involves training an explicit model for topology recovery and matching solving. Specifically, our deterministic technique (see Sec. 3.4.1) solves graph topology and matching in one-shot, while the proposed generative method alternatively estimates the topology and matching (see Sec. 3.4.2). 
Our approach allows to fully leverage multiple training samples in many applications like computer vision to boost the performance on test set. Moreover, the combinatorial nature of the matching problem is not addressed in (Du et al., 2019; 2020), and they adopt a greedy selection strategy instead. While we develop a principled combinatorial learning approach to this challenge. Also their methods rely on a considerable amount of seed matchings, yet this paper directly learns the latent topology from scratch which is more challenging and seldom studied.\nLearning of graph matching. Early non-deep learning-based methods seek to learn effective metric (e.g. weighted Euclid distance) for node and edge features or affinity kernel (e.g. Gaussian kernel) in a parametric fashion (Caetano et al., 2009; Cho et al., 2013). Recent deep graph matching methods have shown how to extracte more dedicated feature representation. The work (Zanfir & Sminchisescu, 2018) adopts VGG16 (Simonyan & Zisserman, 2014) as the backbone for feature extraction on images. Other efforts have been witnessed in developing more advanced pipelines, where graph embedding (Wang et al., 2019a; Yu et al., 2020; Fey et al., 2020) and geometric learning (Zhang & Lee, 2019; Fey et al., 2020) are involved. Rolı́nek et al. (2020) study the way of incorporating traditional non-differentiable combinatorial solvers, by introducing a differentiatiable blackbox GM solver (Pogancic et al., 2020). Recent works in tackling combinatorial problem with deep learning (Huang et al., 2019; Kool & Welling, 2018) also inspire developing combinatorial deep solvers, for GM problems formulated by both Koopmans-Beckmann’s QAP (Nowak et al., 2018; Wang et al., 2019a) and Lawler’s QAP (Wang et al., 2019b). Specifically, Wang et al. (2019a) devise a permutation loss for supervised learning, with an improvement in Yu et al. (2020) via Hungarian attention. Wang et al. (2019b) solve the most general Lawler’s QAP with graph embedding technique.\nGenerative graph model. Early generative models for graph can date back to (Erdos & Renyi, 1959), in which edges are generated with fixed probability. Recently, Kipf & Welling (2016) present a graph generative model by re-parameterizing the edge probability from Gaussian noise. Johnson (2017) propose to generate graph in an incremental fashion, and in each iteration a portion of the graph is produced. Gómez-Bombarelli et al. (2018) utilized recurrent neural network to generate graph from a sequence of molecule representation. Adversarial graph generation is considered in (Pan et al., 2018; Wang et al., 2018; Bojchevski et al., 2018). Specifically, Wang et al. (2018); Bojchevski et al. (2018) seek to unify graph generative model and generative adversarial networks. In parallel, reinforcement learning has been adopted to generate discrete graphs (De Cao & Kipf, 2018)." }, { "heading": "3 LEARNING LATENT TOPOLOGY FOR GM", "text": "In this section, we describe details of the proposed framework with two specific algorithms derived from deterministic and generative perspectives, respectively. Both algorithms are motivated by the hypothesis that there exists some latent topology more suitable for matching rather than a fixed one. 
Note the proposed deterministic algorithm performs a standard forward-backward pass to jointly learn the topology and the matching, while our generative algorithm consists of an alternating optimization procedure between estimating the latent topology and learning the matching, under an Expectation-Maximization (EM) interpretation. In general, the generative algorithm assumes that a latent topology is sampled from a latent distribution under which the expected matching accuracy is maximized; we therefore seek to learn a topology generator realizing such a distribution. We reformulate GM in a Bayesian fashion for consistent discussion in Sec. 3.1, detail the deterministic/generative latent modules in Sec. 3.2, and discuss the loss functions from a probabilistic perspective in Sec. 3.3. We finally elaborate on the holistic framework and the optimization procedure for both algorithms (deterministic and generative) in Sec. 3.4." }, { "heading": "3.1 PROBLEM DEFINITION AND BACKGROUND", "text": "The GM problem can be viewed as a Bayesian variant of Eq. (1). In general, let G^(s) and G^(t) represent the initial source and target graphs for matching, respectively. We represent a graph as G := {X, E, A}, where X ∈ R^{n×d_1} is the representation of the n nodes with dimension d_1, E ∈ R^{m×d_2} are the features of the m edges, and A ∈ {0, 1}^{n×n} is the initial connectivity (i.e. topology) matrix obtained by heuristics, e.g. Delaunay triangulation. For notational brevity, we assume d_1 and d_2 remain unchanged after updating the features across the convolutional layers of the GNN (i.e., the feature dimensions of both nodes and edges do not change after each layer's update). Denote by Z ∈ {0, 1}^{n×n} the matching between the two graphs, where Z_{ij} = 1 indicates a correspondence between node i in G^(s) and node j in G^(t), and Z_{ij} = 0 otherwise. Given training samples {Z_k, G^(s)_k, G^(t)_k} with k = 1, 2, ..., N, the objective of learning-based GM is to maximize the likelihood:

\max_{\theta} \prod_{k} P_{\theta}\left(Z_k \mid G^{(s)}_k, G^{(t)}_k\right), (2)

where θ denotes the model parameters. P_θ(·) measures the probability of the matching Z_k given the k-th pair, and is instantiated via a network parameterized by θ.

Being a generic module for producing latent topology, our method can be flexibly integrated into existing deep GM frameworks. We build our method on the state-of-the-art (Rolínek et al., 2020), which utilizes SplineCNN (Fey et al., 2018) for node/edge representation learning. SplineCNN is a specific graph neural network which updates a node representation via a weighted summation over its neighbors. The update rule at node i of a standard SplineCNN reads:

(x * g)(i) = \frac{1}{|\mathcal{N}(i)|} \sum_{l=1}^{d_1} \sum_{j \in \mathcal{N}(i)} x_l(j) \cdot g_l(e(i, j)), (3)

where x_l(j) performs the convolution on node j and outputs a d_1-dimensional feature, g_l(·) delivers the message weight given the edge feature e(i, j), and N(i) refers to the neighboring nodes of i. The summation over neighbors follows the topology A. Since our algorithm learns to generate the topology, we need to explicitly express Eq. (3) in a way that is differentiable w.r.t. A. To this end, we rewrite Eq. (3) as:

(x * g | A) = (\hat{A} \circ G)\hat{X}, (4)

where \hat{A} is the normalized connectivity, with each row normalized by the degree |N(i)| (see Eq. (3)) of the corresponding node i; G and \hat{X} correspond to the outputs of the g_l(·) and x_l(·) operators, respectively, and (· ◦ ·) is the Hadamard product. With Eq. (4), we thus can perform back-propagation on the connectivity/topology A. See more details in Appendix A.2."
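As an illustration of Eq. (4), the following is a minimal PyTorch sketch of the aggregation step made differentiable w.r.t. the topology; for simplicity it treats the kernel output g_l(e(i, j)) as a single scalar weight per edge (in SplineCNN the weight is per feature channel), and the function name and tensor shapes are assumptions.

import torch

def masked_aggregation(A, G, X_hat, eps=1e-8):
    # A:     (n, n) connectivity, possibly relaxed or sampled, kept in the
    #        computation graph so gradients flow back into the topology
    # G:     (n, n) per-edge message weights from the spline kernels g_l(e(i, j))
    # X_hat: (n, d) node features after the per-node transform x_l(.)
    deg = A.sum(dim=1, keepdim=True).clamp(min=eps)   # |N(i)| as in Eq. (3)
    A_hat = A / deg                                   # row-normalised connectivity
    return (A_hat * G) @ X_hat                        # (A_hat ∘ G) X_hat, Eq. (4)

Because A enters only through an elementwise product and a row normalisation, replacing the fixed Delaunay adjacency with a generated (relaxed or straight-through discretised) adjacency leaves the rest of the SplineCNN pipeline unchanged.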
}, { "heading": "3.2 LATENT TOPOLOGY LEARNING", "text": "Existing learning-based graph matching algorithms consider A to be fixed throughout the computation without questioning if the input topology is optimal or not. This can be problematic since input graph construction is heuristic, and it never takes into account how suitable it is for the subsequent GM task. In our framework, instead of utilizing a fixed pre-defined topology, we consider to produce latent topology under two settings: 1) a deterministic and 2) a generative way. The former is often more efficient while the latter method can be more accurate at the cost of exploring more latent topology. Note both methods produce discrete topology to verify our hypothesis about the existence of more suitable discrete latent topology for GM problem. The followings describe two deep structures.\nDeterministic learning: Given input features X and initial topology A, the deterministic way of generating latent topology A ∈ {0, 1}n×n is3:\nAij = Rounding(sigmoid(y > i Wyj)) with Y = GCN(X,A) (5)\nwhere GCN(·) is the graph convolutional networks (GCN) (Kipf & Welling, 2017) and yi corresponds to the feature of node i in feature map Y. W is the learnable parameter matrix. Note function Rounding(·) is undifferentiable, and will be discussed in Sec. 3.4.1. Generative learning: We reparameterize the representation as:\nP (yi|X,A) = N (yi|µi,diag(σ2)) (6) 3We consider the case when only node feature E and topology A are necessary. Edge feature E can be\nreadily integrated as another input.\nwith µ = GCNµ(X,A) and σ = GCNσ(X,A) are two GCNs producing mean and covariance. It is equivalent to sampling a random vector from i.i.d. uniform distribution s ∼ U(0,1), then applying y = µ+ s · σ, where (·) is element-wise product. Similar as Eq. (5) by introducing learnable parameter W, the generative latent topology is sampled following i.i.d. distribution over each edge (i, j):\nP (A|Y) = ∏ i ∏ j P (Aij |yi,yj) with P (Aij = 1|yi,yj) = sigmoid(y>i Wyj) (7)\nSince sigmoid(·) maps any input into (0, 1), Eq. (7) can be interpreted as the probability of sampling edge (i, j). As the sampling procedure is undifferentiable, we apply Gumbel-softmax trick (Jang et al., 2017) as another reparameterization procedure. As such, a latent graph topology A can be sampled fully from distribution P (A) and the procedure becomes differentiable." }, { "heading": "3.3 LOSS FUNCTIONS", "text": "In this section, we explain three loss functions and the behind motivation: matching loss, locality loss and consistency loss. The corresponding probabilistic interpretation of each loss function can be found in Sec. 3.4.2. These functions are selectively activated in DLGM-D and DLGM-G (see Sec. 3.4). In DLGM-G, different loss functions are activated in inference and learning steps.\ni) Matching loss. This common term measures how the predicted matching Ẑ diverges from groundtruth Z. Following Rolı́nek et al. (2020), we adopt Hamming distance on node-wise matching:\nLM = Hamming(Ẑ,Z) (8)\nii) Locality loss. This loss is devised to account for the general prior that the produced/learnt graph topology should advocate local connection rather than distant one, since two nodes may have less meaningful interaction once they are too distant from each other. In this sense, locality loss serves as a prior or regularizer in GM. As shown in multiple GM methods (Yu et al., 2018; Wang et al., 2019a; Fey et al., 2020), Delaunay triangulation is an effective way to deliver good locality. 
Therefore in our method, the locality loss is the Hamming distance between the initial topology A (obtained from Delaunay) and predicted topology A for both source graph and target graph:\nLL = Hamming(A(s),A(s)) + Hamming(A(t),A(t)) (9)\nWe emphasize that locality loss serves as a prior for latent graph. It focuses on advocating locality, but not reconstructing the initial Delaunay triangulation (as in Graph VAE (Kipf & Welling, 2016)).\niii) Consistency loss. One can imagine that a GM solver is likely to deliver better performance if two graphs in a training pair are similar. In particular, we anticipate the latent topology A(s) and A(t) to be isomorphic under a specific matching, since isomorphic topological structures tend to be easier to match. Driven by this consideration, we devise the consistency loss which measures the level of isomorphism between latent topology A(s) and A(t):\nLC(·|Z) = |Z>A(s)Z−A(t)|+ |ZA(t)Z> −A(s)| (10)\nNote Z does not necessarily refer to the ground-truth, but can be any predicted matching. In this sense, latent topology A(s) and A(t) can be generated jointly given the matching Z as guidance information. This term can also serve as a consistency prior or regularization. We given a schematic example showing the merit of introducing consistency loss in Fig. 2(b)." }, { "heading": "3.4 FRAMEWORK", "text": "A schematic diagram of our framework is given in Fig. 2(a) which consists of a singleton pipeline for processing a single image. It consists of three essential modules: a feature backbone (NB), a latent topology module (NG) and a feature refinement module (NR). Specifically, module NG corresponds to Sec. 3.2 with deterministic or generative implementations. Note the geometric relation of keypoints provide some prior for generating topology A. We employ VGG16 (Simonyan & Zisserman, 2014)\nas NB and feed the produced node feature X and edge feature E to NG. NB also produces a global feature for each image. After generating the latent topology A, we pass over X and E together with A to NR (SplineCNN (Fey et al., 2018)). The holistic pipeline handling pairwise graph inputs can be found in Fig. 4 in Appendix A.1 which consists of two copies of singleton pipeline processing source and target data (in a Siamese fashion), respectively. Then the outputs of two singleton pipelines are formulated into affinity matrix, followed by a differentiable Blackbox GM solver (Pogancic et al., 2020) with message-passing mechanism (Swoboda et al., 2017). Note once without NG, the holistic pipeline with only NB +NR is identical to the method in (Rolı́nek et al., 2020). Readers are referred to this strong baseline (Rolı́nek et al., 2020) for more mutual algorithmic details.\n3.4.1 OPTIMIZATION WITH DETERMINISTIC LATENT GRAPH\nWe show how to optimize with deterministic latent graph module, where the topology A is produced by Eq. (5). The objective of matching conditioned on the produced latent topology A becomes:\nmax ∏ k P ( Zk|A(s)k ,A (t) k ,G (s) k ,G (t) k ) (11)\nEq. (11) can be optimized with standard back-propagation with three loss terms activated, except for the Rounding function (see Eq. (5)), which makes the procedure undifferentiable. 
To address this, we use straight-through operator (Bengio et al., 2013) which performs a standard rounding during the forward pass but approximates it with the gradient of identity during the backward pass on [0, 1]:\n∂Rounding(x)/∂x = 1 (12)\nThough there exist some unbiased gradient estimators (e.g., REINFORCE (Williams, 1992)), the biased straight-through estimator proved to be more efficient and has been successfully applied in several applications (Chung et al., 2017; Campos et al., 2018). All the network modules (NG +NB + NR) are simultaneously learned during the training. All three losses are activated in the learning procedure (see Sec. 3.3), which are applied on the predicted matching Ẑ, the latent topology A(s) and A(t). We term the algorithm under this setting DLGM-D.\n3.4.2 OPTIMIZATION WITH GENERATIVE LATENT GRAPH\nSee more details in Appendix A.3. In this setting, the source and target latent topology A(s) and A(t) are sampled according to Eq. (6) and (7). The objective becomes:\nmax ∏ k ∫ A (s) k ,A (t) k Pθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) (13)\nUnfortunately, directly optimizing Eq. (13) is difficult due to the integration over A which is intractable. Instead, we maximize the evidence lower bound (ELBO) (Bishop, 2006) as follows:\nlogPθ(Z|G(s),G(t)) ≥ EQφ(A(s),A(t)|G(s),G(t)) [ logPθ(Z,A (s),A(t)|G(s),G(t))− logQφ(A(s),A(t)|G(s),G(t)) ] (14)\nwhere Qφ(A(s),A(t)|G(s),G(t)) can be any joint distribution of A(s) and A(t) given the input graphs G(s) and G(t). Equality of Eq. (14) holds when Qφ(A(s),A(t)|G(s),G(t)) = Pθ(A\n(s),A(t)|Z,G(s),G(t)). For tractability, we rationally introduce the independence by assuming that we can use an identical latent topology module Qφ (corresponding to NG in Fig. 2(a)) to separately handle each input graph:\nQφ(A (s),A(t)|G(s),G(t)) = Qφ(A(s)|G(s))Qφ(A(t)|G(t)) (15)\nwhich can greatly simplify the model complexity. Then we can utilize a neural network to model Qφ (similar to modeling Pθ). The optimization of Eq. (14) is studied in (Neal & Hinton, 1998), known as the Expectation-Maximization (EM) algorithm. Optimization of Eq. (14) alternates between E-step and M-step. During E-step (inference), Pθ is fixed and the algorithm seeks to find an optimal Qφ to approximate the true posterior distribution (see Appendix A.3 for explanation):\nPθ(A (s),A(t)|Z,G(s),G(t)) (16)\nDuring M-step (learning), Qφ is instead fixed and algorithm alters to maximize the likelihood: EQφ(A(s)|G(s)),Qφ(A(t)|G(t)) [ logPθ(Z,A (s),A(t)|G(s),G(t)) ] ∝ −LM (17)\nWe give more details on the inference and learning steps as follows.\nInference. This step focuses on deriving posterior distribution Pθ(A(s),A(t)|Z,G(s),G(t)) using its approximation Qφ. To this end, we fix the parameters θ in modules NB and NR, and only update the parameters φ in module NG corresponding to Qφ. As stated in Sec. 3.2, we employ the Gumbel-softmax trick for sampling discrete A (Jang et al., 2017). To this end, we can formulate a 2D vector aij = [P (Aij = 1), 1− P (Aij = 1)]>. Then the sampling becomes:\nsoftmax (log(aij) + hij ; τ) (18) where hij is a random 2D vector from Gumbel distribution, and τ is a small temperature parameter. We further impose prior on latent topology A given A through locality loss:\nlog ∏ i,j P (Aij |Aij) ∝ −LL(A,A) (19)\nwhich is to preserve the locality in initial topology A. It should also be noted that Z is the predicted matching from current Pθ, as Qφ is an approximation. 
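The two discretization tricks just described, straight-through rounding (Eq. (12)) and Gumbel-softmax edge sampling (Eq. (18)), can be written as two small helpers. The snippet below is an illustrative sketch with our own function names, not the authors' implementation.

```python
import torch


def straight_through_round(p: torch.Tensor) -> torch.Tensor:
    """Forward: hard {0,1} rounding. Backward: identity gradient w.r.t. p (Eq. 12)."""
    hard = (p > 0.5).float()
    return hard + p - p.detach()


def gumbel_edge_sample(p: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Sample a discrete topology A with P(A_ij = 1) = p_ij, differentiably (Eq. 18)."""
    p = p.clamp(1e-6, 1 - 1e-6)
    logits = torch.stack([p.log(), (1.0 - p).log()], dim=-1)      # a_ij = [p, 1 - p]
    onehot = torch.nn.functional.gumbel_softmax(logits, tau=tau, hard=True)
    return onehot[..., 0]                                         # keep the "edge present" slot


p = torch.rand(6, 6, requires_grad=True)
a_det = straight_through_round(p)     # deterministic topology (DLGM-D style)
a_gen = gumbel_edge_sample(p)         # sampled topology (DLGM-G style)
(a_det.sum() + a_gen.sum()).backward()
print(p.grad is not None)             # True: gradients flow through both paths
```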
Besides, we also anticipate two generated topology A(s) and A(t) from a graph pair should be similar (isomorphic) given matching Z:\nlogP ( A(s),A(t)|Z ) ∝ −LC ( A(s),A(t)|Z ) (20)\nIn summary, we activate locality loss and consistency loss during the inference step, where the latter loss is conditioned with the predicted matching rather than the ground-truth. Note that the inference step involves twice re-parameterization tricks corresponding to Eq. (6) and (18), respectively. While the first generates the continuous topology distribution under edge independence assumption, the second performs discrete sampling sufficing the generated topology distribution.\nLearning. This step optimizes Pθ by fixing Qφ. We sample discrete graph topologies As completely from the probability of edge P (Aij = 1). Once latent topology As are sampled, we feed them to module NR together with the node-level features from NB . Only NB and NR are updated in this step, and only matching loss LM is activated. Remark. Note for each pair of graphs in training, we use an identical random vector s for generating both graphs’ topology (see Eq. (6)). We pretrain the network Pθ before alternativly training Pθ and Qφ. During pretraining, we activate NB + NR modules and LM loss during pretraining, and feed the network the topology obtained from Delaunay as the latent topology. After pretraining, the optimization will switch between inference and learning steps until convergence. We term the setting of generative latent graph matching as DLGM-G and summarize it in Alg. 1.\nmethod aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv Ave BBGM-max 35.5 68.6 46.7 36.1 85.4 58.1 25.6 51.7 27.3 51.0 46.0 46.7 48.9 58.9 29.6 93.6 42.6 35.3 70.7 79.5 51.9\nBBGM 42.7 70.9 57.5 46.6 85.8 64.1 51.0 63.8 42.4 63.7 47.9 61.5 63.4 69.0 46.1 94.2 57.4 39.0 78.0 82.7 61.4 DLGM-D (ours) 42.5 71.8 57.8 46.8 86.9 70.3 53.4 66.7 53.8 67.6 64.7 64.6 65.2 70.1 47.9 95.5 59.6 47.7 77.7 82.6 63.9 DLGM-G (ours) 43.8 72.9 58.5 47.4 86.4 71.2 53.1 66.9 54.6 67.8 64.9 65.7 66.9 70.8 47.4 96.5 61.4 48.4 77.5 83.9 64.8" }, { "heading": "4 EXPERIMENT", "text": "We conduct experiments on datasets including Pascal VOC with Berkeley annotation (Everingham et al., 2010; Bourdev & Malik, 2009), Willow ObjectClass (Cho et al., 2013) and SPair-71K (Min et al., 2019). We report the per-category and average performance. The objective of all experiments is to maximize the average matching accuracy. Both our DLGM-D and DLGM-G are tested.\nPeer methods. We conduct comparison experiments against the following algorithms: 1) GMN (Zanfir & Sminchisescu, 2018), which is a seminal work incorporating graph matching into deep learning framework equipped with a spectral solver (Egozi et al., 2013); 2) PCA (Wang et al., 2019a). This method treats graph matching as feature matching problem and employs GCN (Kipf & Welling, 2017) to learn better features; 3) CIE1/GAT-H (Yu et al., 2020). This paper develops a novel embedding and attention mechanism, where GAT-H is the version by replacing the basic embedding block with Graph Attention Networks (Veličković et al., 2018); 4) DGMC (Fey et al., 2020). This method devises a post-processing step by emphasizing the neighborhood similarity; 5) BBGM (Rolı́nek et al., 2020). It integrates a differentiable linear combinatorial solver (Pogancic et al., 2020) into a deep learning framework and achieves state-of-the-art performance.\nResults on Pascal VOC. 
The dataset (Everingham et al., 2010; Bourdev & Malik, 2009) consists of 7,020 training images and 1,682 testing images with 20 classes in total, together with the object bounding boxing for each. Following the data preparation in (Wang et al., 2019a), each object within the bounding box is cropped and resized to 256× 256. The number of nodes per graph ranges from 6 to 23. We further follow (Rolı́nek et al., 2020) under two evaluating metrics: 1) Accuracy: this is the standard metric evaluated on the keypoints by filtering out the outliers; 2) F1-score: this metric is evaluated without keypoint filtering, being the harmonic mean of precision and recall.\nExperimental results on the two setting are shown in Tab. 1 and Tab. 2. The proposed method under either settings of DLGM-D and DLGM-G outperforms counterparts by accuracy and f1-score. DLGM-G generally outperforms DLGM-D. Discussion can be found in Appendix A.5.\nQuality of generated topology. We further show the consistency/locality curve vs epoch in Fig. 3, since both consistency and locality losses can somewhat reflect the quality of topology generation. It shows that both locality and consistency losses descend during the training. Note that the consistency loss with Delaunay triangulation (green dashed line) is far more larger than our generated ones (blue/red dashed line). This clearly supports the claim that our method generates similar (more isomorphic) typologies, as well as preserving locality.\nResults on Willow Object. The benchmark (Cho et al., 2013) consists of 256 images in 5 categories, where two categories (car and motorbike) are subsets selected from Pascal VOC. Following the preparation protocol in Wang et al. (2019a), we crop the image within the object bounding box and resize it to 256 × 256. Since the dataset is relatively small, we conduct the experiment to verify the transfer ability of different methods under two settings: 1) trained on Pascal VOC and directly applied to Willow (Pt); 2) trained on Pascal VOC then finetuned on Willow (Wt). Results under the two settings are shown in Tab. 3. Since this dataset is relatively small, further improvement is difficult. It is shown both DLGM-D and DLGM-G have good transfer ability.\nResults on SPair-71K. The dataset (Min et al., 2019) is much larger than Pascal VOC and WillowObject since it consists of 70,958 image pairs collected from Pascal VOC 2012 and Pascal 3D+ (53,340 for training, 5,384 for validation and 12,234 for testing). It improves Pascal VOC by removing ambiguous categories sofa and dining table. This dataset is considered to contain more difficult matching instances and higher annotation quality. Results are summarized in Tab. 4. Our method consistently improves the matching performance, agreeing with the results in Pascal VOC and Willow." }, { "heading": "5 CONCLUSION", "text": "Graph matching involves two essential factors: the affinity model and topology. By incorporating learning paradigm for affinity/feature, the performance of matching on public datasets has significantly been improved. However, there has been little previous work exploring more effective topology for matching. In this paper, we argue that learning a more effective graph topology can significantly improve the matching, thus being essential. To this end, we propose to incorporate a latent topology module under an end-to-end deep network framework that learns to produce better graph topology. 
We also present the interpretation and optimization of topology module in both deterministic and generative perspectives, respectively. Experimental results show that, by learning the latent graph, the matching performance can be consistently and significantly enhanced on several public datasets." }, { "heading": "A APPENDIX", "text": "A.1 HOLISTIC PIPELINE\nWe show the holistic pipeline of our framework in Fig. 4 consisting of two “singleton pipelines” (see introduction part of Sec. 3 for more details). In general, the holistic pipeline follows the convention in a series of deep graph matching methods by utilizing an identical singleton pipeline to extract features, then exploits the produced features to perform matching (Yu et al., 2020; Wang et al., 2019a; Fey et al., 2020; Rolı́nek et al., 2020). Except for the topology module NG, all others parts of our network are the same as those in Rolı́nek et al. (2020).\nA.2 SPLINECNN\nSplineCNN is a method to perform graph-based representation learning via convolution operators defined based on B-splines (Fey et al., 2018). The initial input to SplineCNN is G = {X,E,A}, where X ∈ Gn×d1 and A ∈ {0, 1}n×n indicate node features and topology, respectively (same as in Sec. 3.1). E ∈ [0, 1]n×n×d2 is so-called pseudo-coordinates and can be viewed as n2 × d2dimensional edge features for a fully connected graph (in case m = n2, see Sec. 3.1). Let normalized edge feature e(i, j) = Ei,j,: ∈ [0, 1]d2 if a directed edge (i, j) exists (Ai,j = 1), and 0 otherwise (Ai,j = 0). Note topology A fully carries the information of N (i) which defines the neighborhood\nof node i. During the learning, X and E will be updated while topology A will not. Therefore SplineCNN is a geometric graph embedding method without adjusting the latent graph topology.\nB-spline is employed as basic kernel in SplineCNN, where a basis function has only support on a specific real-valued interval (Piegl & Tiller, 2012). Let ((Nq1,i)1≤i≤k1 , ..., (N q d,i)1≤i≤kd2 ) be d2 B-spline bases with degree q. The kernel size is defined in k = (k1, ..., kd2). In SplineCNN, the continuous kernel function gl : [a1, b1]× ...× [ad2 , bd2 ]→ G is defined as:\ngl(e) = ∑ p∈P wp,l ·Bp(e) (21)\nwhere P = (Nq1,i)i × ... × (N q d,i)i is the B-spline bases (Piegl & Tiller, 2012) and wp,l is the trainable parameter corresponding to the lth node feature in X, with Bp being the product of the basis functions in P:\nBp = d∏ i=1 Nqi,pi(ei) (22)\nwhere e is the pseudo-coordinate in E. Then, given the kernel function g = (g1, ..., gd1) and the node feature X ∈ Gn×d1 , one layer of the convolution at node i in SplineCNN reads (same as Eq. (3)):\n(x ∗ g)(i) = 1 |N (i)| d1∑ l=1 ∑ j∈N (i) xl(j) · gl(e(i, j)) (23)\nwhere xl(j) indicates the convolved node feature value of node j at lth dimension. This formulation can be tensorized into Eq. (4) with explicit topology matrix A. In this sense, we can back-propagate the gradient of A. Reader are referred to Fey et al. (2018) for more comprehensive understanding of this method.\nA.3 DERIVATION OF DLGM-D\nWe give more details of the optimization on DLGM-D in this section. This part also interprets some basic formulation conversion (e.g. from Eq. (2) to its Bayesian form). First, we assume there is no latent topology A(s) and A(s) at the current stage. In this case, the objective of GM is simply:\nmax ∏ k Pθ ( Zk|G(s)k ,G (t) k ) (24)\nwhere Pθ measures the probability of a matching Zk given graph pair G(s)k and G (t) k . 
If we impose the latent topology A(s) and A(t), as well as some distribution over them, then Eq. (24) can be equivalently expressed as:\nmax ∏ k Pθ ( Zk|G(s)k ,G (t) k ) = max ∏ k ∫ A (s) k ,A (t) k Pθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) (25)\nwhere Pθ ( Zk|G(s)k ,G (t) k ) is the marginal distribution of Pθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) with respect\nto Zk, since A (s) k and A (t) k are integrated over some distribution. Herein we can impose another distribution of the topology Qφ(A (s) k ,A (t) k |G (s) k ,G (t) k ) characterized by parameter φ, then we have:\nlog ∫ A\n(s) k ,A (t) k\nPθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) = log\n∫ A\n(s) k ,A (t) k\nPθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) Qφ(A(s)k ,A(t)k |G(s)k ,G(t)k ) Qφ(A (s) k ,A (t) k |G (s) k ,G (t) k )\n= log E Qφ(A (s) k ,A (t) k |G (s) k ,G (t) k ) Pθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) Qφ(A (s) k ,A (t) k |G (s) k ,G (t) k ) ≥E\nQφ(A (s) k ,A (t) k |G (s) k ,G (t) k )\n[ logPθ(Z,A (s),A(t)|G(s),G(t))− logQφ(A(s),A(t)|G(s),G(t)) ]\n(26)\nwhere the final step is derived from Jensen’s inequality. Since optimizating Eq. 25 is difficult, we can alter to maximize the right hand side of inequality of Eq. (26) instead, which is the Evidence Lower Bound (ELBO) (Bishop, 2006). Since two input graphs are handled separately by two identical subroutines (see Fig. 2a), we can then impose the independence of topology A(s)k and A (t) k : Qφ(A (s),A(t)|G(s),G(t)) = Qφ(A(s)|G(s))Qφ(A(t)|G(t)). In this sense, we can utilize the same parameter φ to characterize two identical neural networks (generators) for modeling Qφ.\nAssuming θ is fixed, ELBO is determined by Qφ. According to Jensen’s inequality, equality of Eq. (26) holds when:\nPθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) Qφ ( A (s) k ,A (t) k |G (s) k ,G (t) k\n) = c (27) where c 6= 0 is a constant. We then have:∫\nA (s) k ,A (t) k\nPθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) = c ∫ A\n(s) k ,A (t) k\nQφ\n( A\n(s) k ,A (t) k |G (s) k ,G (t) k\n) (28)\nAs Qφ is a distribution, we have:∫ A\n(s) k ,A (t) k\nQφ\n( A\n(s) k ,A (t) k |G (s) k ,G (t) k\n) = 1 (29)\nTherefore, we have: ∫ A\n(s) k ,A (t) k\nPθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) = c (30)\nWe now have:\nQφ\n( A\n(s) k ,A (t) k |G (s) k ,G (t) k\n) = Pθ\n( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) c\n= Pθ\n( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) ∫ A\n(s) k ,A (t) k\nPθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k ) = Pθ ( Zk,A (s) k ,A (t) k |G (s) k ,G (t) k\n) Pθ ( Zk|G(s)k ,G (t) k\n) =Pθ ( A (s) k ,A (t) k |Zk,G (s) k ,G (t) k )\n(31)\nEq. (31) shows that, once θ is fixed, maximizing ELBO amounts to finding a distribution Qφ approximating the posterior probability Pθ ( A (s) k ,A (t) k |Zk,G (s) k ,G (t) k ) . This can be done by training the generatorQφ to produce latent topology A given graph pair and the matching Z. This corresponds to the Inference part in Sec. 3.4.2.\nA.4 ABLATION STUDY\nIn this part, we evaluate the performance of DLGM-D and DLGM-G by selectively deactivating different loss functions (refer Sec. 3.3 for more details of the functions). We also conduct the test on DLGM-G using different sample size of the generator. This ablation test is conducted on Pascal VOC dataset and average accuracy is reported in Tab. 5.\nWe first test the performance of both settings of DLGM by selectively activate the designated loss functions. Experimental results are summarized in Tab. 5a. As matching loss LM is essential for GM task, we constantly activate this loss for all settings. 
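For reference in this ablation, the three loss terms L_M, L_L and L_C from Sec. 3.3 can be written compactly as below. Element-wise absolute differences serve as a (soft) Hamming distance so the terms stay differentiable for relaxed inputs; the function names are illustrative, not the paper's code.

```python
import torch


def hamming(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return (a - b).abs().sum()          # equals the Hamming distance for {0,1} inputs


def matching_loss(z_pred, z_gt):                              # L_M, Eq. (8)
    return hamming(z_pred, z_gt)


def locality_loss(a_src, a_src0, a_tgt, a_tgt0):              # L_L, Eq. (9)
    return hamming(a_src, a_src0) + hamming(a_tgt, a_tgt0)


def consistency_loss(a_src, a_tgt, z):                        # L_C, Eq. (10)
    return (z.t() @ a_src @ z - a_tgt).abs().sum() + \
           (z @ a_tgt @ z.t() - a_src).abs().sum()
```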
We see that the proposed novel losses LC and LL can consistently enhance the matching performance. Besides, DLGM-G indeed delivers better performance than DLGM-D under fair comparison.\nWe then test the impact of sample size from the generator Qφ under DLGM-G. Experimental results are summarized in Tab. 5b. We see that along with the increasing sample size, the average accuracy ascends. The performance becomes stable when the sample size reaches over 16.\nTable 5: Ablation test on Pascal VOC dataset. (a) Selectively deactivating loss functions on Pascal VOC. LM , LC and LL are selectively activated in DLGM-D and DLGM-G. “full” indicates all loss functions are activated. Average accuracy (%) is reported. (b) Average matching accuracy under different sampling sizes from the generator Qφ with “full” DLGM-G setting.\n(a) On losses\nmethod Ave DLGM-D (LM + LC) 79.8 DLGM-D (LM + LL) 79.5 DLGM-G (LM + LC) 80.9 DLGM-G (LM + LL) 80.4\nDLGM-D (full) 82.9 DLGM-G (full) 83.8\n(b) On sample size\n#Sample Ave 1 82.5 2 83.2 4 83.2 8 83.5\n16 83.8 32 83.7\nClass\naero & bike\nbird & boat\nbottle & bus\ncar & cat\nchair & cow\ntable & dog\nhorse & mbike\nperson & plant\nsheep & sofa\ntrain & tv\nTable 6: Matching examples of DLGM-G on 20 classes of Pascal VOC. The coloring of graphs and matchings follows the principle of Fig. 1 in the manuscript. Zoom in for better view.\nA.5 MORE VISUAL EXAMPLES AND ANALYSIS\nWe show more visual examples of matchings and generated topology using DLGM-G on Pascal VOC in Tab. 6 and Tab. 7, respectively. Each table follows distinct coloring regulation which will be detailed as follows:\n• Tab. 6. For each class, the left and right images corresponds to Delaunay triangulation. The image in the middle refers to the predicted matching and generated graph topology. Cyan solid and dashed lines correspond to correct and wrong matchings, respectively. Green dashed lines are the ground-truth matchings that are missed by our model. • Tab. 7. In this table, the leftmost and the rightmost columns correspond to original topology\nconstructed using Delaunay triangulation. The two columns in the middle are the generated topology using our method given Delaunay triangulation as prior. Blue edges are the edges that Delaunay and generated ones have in common. Green edges corresponds to the ones that are in Delaunay but not in generated topology, while red edges are the ones that are generated but not in Delaunay.\nWe give some analysis for the following questions." }, { "heading": "In what case a different graph is generated?", "text": "Since there are some generated graphs are identical to Delaunay, this question may naturally arise. We observe that, DLGM tends to produce an identical graph to Delaunay when objects are rarely with distortion and graphs are simple (e.g. tv, bottle and plant in Tab. 6 and last two rows in Tab. 7). However, when Delaunay is not sufficient to reveal the complex geometric relation or objects are with large distortion and feature diversity (e.g. cow and cat in Tab. 6 and person in Tab. 7), DLGM will resort to generating new topology with richer and stronger hint for graph matching. In other words, DLGM somewhat finds a way to identify if current instance pair is difficult or easy to match, and learns an adaptive strategy to handle these two cases.\nWhy DLGM-G delivers better performance than DLGM-D?\nIn general, DLGM-D is a deterministic gradient-based method. 
That is, the solution trajectory of DLGM-D almost follows the gradient direction at each iteration (with some variance from minibatch). Though it is assured to reach a local optima, only following gradient is too greedy since generated graph is coupled with predicted matching. Besides, as the topology is discrete, the optimal continuous solution will have a large objective score gap to its nearest discrete sampled solution once the landspace of the neural network is too sharp. On the other hand, DLGM-G performs discrete sampling under feasible graph distribution at each iteration, which generally but not fully follows the gradient direction. This procedure can thus find better discrete direction with probability, hence better exploring the searching space. This behavior is similar to Reinforcement Learning, but with much higher efficiency. Additionally, EM framework can guarantee the convergence (Bishop, 2006)." } ]
2020
null
SP:879ce870f09e422aced7d008abc42fe5a8db29bc
[ "The paper proposes a method for stabilizing the training of GAN as well as overcoming the problem of mode collapse by optimizing several auxiliary models. The first step is to learn a latent space using an autoencoder. Then, this latent space is \"intervened\" by a predefined set of $K$ transformations to generate a set of distributions $p_k$. A classifier is then taught to distinguish between $p_k$. Eventually, the weights of the classifier are shared with those of the discriminator network to produce the desired stabilization/diversification effect. In other words, the authors propose to stabilize GANs by intervening with the discriminator. This is done by sharing its weights with a classifier that trains on a perturbed latent distribution that is somehow related to the original problem via the prior assumption imposed." ]
In this paper we propose a novel approach for stabilizing the training process of Generative Adversarial Networks as well as alleviating the mode collapse problem. The main idea is to incorporate a regularization term that we call intervention into the objective. We refer to the resulting generative model as Intervention Generative Adversarial Networks (IVGAN). By perturbing the latent representations of real images obtained from an auxiliary encoder network with Gaussian-invariant interventions and penalizing the dissimilarity of the distributions of the resulting generated images, the intervention term provides more informative gradients for the generator, significantly improving training stability and encouraging mode-covering behaviour. We demonstrate the performance of our approach via solid theoretical analysis and thorough evaluation on standard real-world datasets as well as the stacked MNIST dataset.
[]
[ { "authors": [ "M Arjovsky", "L Bottou" ], "title": "Towards principled methods for training generative adversarial networks. arxiv 2017", "venue": "arXiv preprint arXiv:1701.04862", "year": 2017 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Tong Che", "Yanran Li", "Athul Jacob", "Yoshua Bengio", "Wenjie Li" ], "title": "Mode regularized generative adversarial networks. 2016", "venue": null, "year": 2016 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "On stabilizing generative adversarial training with noise, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "arXiv preprint arXiv:1802.05983,", "year": 2018 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Naveen Kodali", "James Hays", "Jacob Abernethy", "Zsolt Kira" ], "title": "On convergence and stability of gans", "venue": null, "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Anders Boesen Lindbo Larsen", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Autoencoding beyond pixels using a learned similarity", "venue": "metric. CoRR,", "year": 2015 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Zinan Lin", "Ashish Khetan", "Giulia Fanti", "Sewoong Oh" ], "title": "Pacgan: The power of two samples in generative adversarial networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond Y.K. 
Lau", "Zhen Wang" ], "title": "Multi-class generative adversarial networks with the L2 loss function", "venue": "CoRR, abs/1611.04076,", "year": 2016 }, { "authors": [ "Luke Metz", "Ben Poole", "David Pfau", "Jascha Sohl-Dickstein" ], "title": "Unrolled generative adversarial networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Akash Srivastava", "Lazar Valkov", "Chris Russell", "Michael U Gutmann", "Charles Sutton" ], "title": "Veegan: Reducing mode collapse in gans using implicit variational learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ngoc-Trung Tran", "Tuan-Anh Bui", "Ngai-Man Cheung" ], "title": "Dist-gan: An improved gan using distance constraints", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Fisher Yu", "Yinda Zhang", "Shuran Song", "Ari Seff", "Jianxiong Xiao" ], "title": "Lsun: Construction of a largescale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Zhiming Zhou", "Jiadong Liang", "Yuxuan Song", "Lantao Yu", "Hongwei Wang", "Weinan Zhang", "Yong Yu", "Zhihua Zhang" ], "title": "Lipschitz generative adversarial nets", "venue": null, "year": 1902 } ]
[ { "heading": "1 INTRODUCTION", "text": "As one of the most important advances in generative models in recent years, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been attracting great attention in the machine learning community. GANs aim to train a generator network that transforms simple vectors of noise to produce “realistic” samples from the data distribution. In the basic training process of GANs, a discriminator and a target generator are trained in an adversarial manner. The discriminator tries to distinguish the generated fake samples from the real ones, and the generator tries to fool the discriminator into believing the generated samples to be real.\nThough successful, there are two major challenges in training GANs: the instability of the training process and the mode collapse problem. To deal with these problems, one class of approaches focus on designing more informative objective functions (Salimans et al., 2016; Mao et al., 2016; Kodali et al., 2018; Arjovsky & Bottou; Arjovsky et al., 2017; Gulrajani et al., 2017; Zhou et al., 2019). For example, Mao et al. (2016) proposed Least Squares GAN (LSGAN) that uses the least squares loss to penalize the outlier point more harshly. Arjovsky & Bottou discussed the role of the Jensen-Shannon divergence in training GANs and proposed WGAN (Arjovsky et al., 2017) and WGAN-GP (Gulrajani et al., 2017) that use the more informative Wasserstein distance instead. Other approaches enforce proper constraints on latent space representations to better capture the data distribution (Makhzani et al., 2015; Larsen et al., 2015; Che et al., 2016; Tran et al., 2018). A representative work is the Adversarial Autoencoders (AAE) (Makhzani et al., 2015) which uses the discriminator to distinguish the latent representations generated by encoder from Gaussian noise. Larsen et al. (2015) employed image representation in the discriminator as the reconstruction basis of a VAE. Their method turns pixel-wise loss to feature-wise, which can capture the real distribution more simply when some form of invariance is induced. Different from VAE-GAN, Che et al. (2016) regarded the encoder as an auxiliary network, which can promote GANs to pay much attention on missing mode and derive an objective function similar to VAE-GAN. A more detailed discussion of related works can be found in Appendix C.\nIn this paper we propose a novel technique for GANs that improve both the training stability and the quality of generated images. The core of our approach is a regularization term based on the latent representations of real images provided by an encoder network. More specifically, we apply auxiliary intervention operations that preserve the standard Gaussian (e.g., the noise distribution) to these latent representations. The perturbed latent representations are then fed into the generator to produce intervened samples. We then introduce a classifier network to identify the right intervention operations that would have led to these intervened samples. The resulting negative cross-entropy loss\nis added as a regularizer to the objective when training the generator. We call this regularization term the intervention loss and our approach InterVention Generative Adversarial Nets (IVGAN).\nWe show that the intervention loss is equivalent with the JS-divergence among multiple intervened distributions. 
Furthermore, these intervened distributions interpolate between the original generative distribution of GAN and the data distribution, providing useful information for the generator that is previously unavailable in common GAN models (see a thorough analysis on a toy example in Example 1). We show empirically that our model can be trained efficiently by utilizing the parameter sharing strategy between the discriminator and the classifier. The models trained on the MNIST, CIFAR-10, LSUN and STL-10 datasets successfully generate diverse, visually appealing objects, outperforming state-of-the-art baseline methods such as WGAN-GP and MRGAN in terms of the Frèchet Inception Distance (FID) (proposed in (Heusel et al., 2017)). We also perform a series of experiments on the stacked MNIST dataset and the results show that our proposed method can also effectively alleviate the mode collapse problem. Moreover, an ablation study is conducted, which validates the effectiveness of the proposed intervention loss.\nIn summary, our work offers three major contributions as follows. (i) We propose a novel method that can improve GAN’s training as well as generating performance. (ii) We theoretically analyze our proposed model and give insights on how it makes the gradient of generator more informative and thus stabilizes GAN’s training. (iii) We evaluate the performance of our method on both standard real-world datasets and the stacked MNIST dataset by carefully designed expriments, showing that our approach is able to stabilize GAN’s training and improve the quality and diversity of generated samples as well." }, { "heading": "2 PRELIMINARIES", "text": "Generative adversarial nets The basic idea of GANs is to utilize a discriminator to continuously push a generator to map Gaussian noise to samples drawn according to an implicit data distribution. The objective function of the vanilla GAN takes the following form:\nmin G max D\n{ V (D,G) , Ex∼pdata log(D(x)) + Ez∼pz log(1−D(G(z))) } , (1)\nwhere pz is a prior distribution (e.g., the standard Gaussian). It can be easily seen that when the discriminator reaches its optimum, that is, D∗(x) = pdata(x)pdata(x)+pG(x) , the objective is equivalent to the Jensen-Shannon (JS) divergence between the generated distribution pG and data distribution pdata:\nJS(pG‖pdata) , 1\n2\n{ KL(pG‖\npG + pdata 2 ) +KL(pdata‖ pG + pdata 2 )\n} .\nMinimizing this JS divergence guarantees that the generated distribution converges to the data distribution given adequate model capacity.\nMulti-distribution JS divergence The JS divergence between two distributions p1 and p2 can be rewritten as\nJS(p1‖p2) = H( p1 + p2 2 )− 1 2 H(p1)− 1 2 H(p2),\nwhere H(p) denotes the entropy of distribution p. We observe that the JS-divergence can be interpreted as the entropy of the mean of the two distribution minus the mean of two distributions’ entropy. So it is immediate to generalize the JS-divergence to the setting of multiple distributions. In particular, we define the JS-divergence of p1, p2, . . . , pn with respect to weights π1, π2, . . . , πn ( ∑ πi = 1 and πi ≥ 0) as\nJSπ1,...,πn(p1, p2, . . . , pn) , H( n∑ i=1 πipi)− n∑ i=1 πiH(pi). (2)\nThe two-distribution case described above is actually a special case of the ‘multi-JS divergence’, where π1 = π2 = 12 . When πi > 0 ∀i, it can be found immediately by Jensen’s inequality that JSπ1,...,πn(p1, p2, . . . , pn) = 0 if and only if p1 = p2 = · · · = pn." 
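The multi-distribution JS divergence of Eq. (2) is straightforward to compute for discrete distributions. The NumPy sketch below (function names are ours) also illustrates that it reduces to the usual two-distribution JS divergence when k = 2 and vanishes only when all distributions coincide.

```python
# Direct rendering of Eq. (2): JS_pi(p_1,...,p_n) = H(sum_i pi_i p_i) - sum_i pi_i H(p_i)
# for discrete distributions given as probability vectors (entropies in nats).
import numpy as np


def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def multi_js(ps, weights=None) -> float:
    ps = [np.asarray(p, dtype=float) for p in ps]
    k = len(ps)
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, dtype=float)
    mixture = sum(wi * pi for wi, pi in zip(w, ps))
    return entropy(mixture) - sum(wi * entropy(pi) for wi, pi in zip(w, ps))


p1 = np.array([0.5, 0.5, 0.0])
p2 = np.array([0.0, 0.5, 0.5])
print(multi_js([p1, p1]))            # 0.0 -> identical distributions
print(round(multi_js([p1, p2]), 4))  # > 0 -> equals JS(p1 || p2) for k = 2
```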
}, { "heading": "3 METHODOLOGY", "text": "Training GAN has been challenging, especially when the generated distribution and the data distribution are far away from each other. In such cases, the discriminator often struggles to provide useful information for the generator, leading to instability and mode collapse problems. The key idea behind our approach is that we construct auxiliary intermediate distributions that interpolate between the generated distribution and the data distribution. To do that, we first introduce an encoder network and combine it with the generator to learn the latent representation of real images within the framework of a standard autoencoder. We then perturb these latent representations with carefully designed intervention operations before feeding them into the generator to create these auxiliary interpolating distributions. A classifier is used to distinguish the intervened samples, which leads to an intervention loss that penalizes the dissimilarity of these intervened distributions. The reconstruction loss and the intervention loss are added as regularization terms to the standard GAN loss for training. We start with an introduction of some notation and definitions. Definition 1 (Intervention). Let O be a transformation on the space of d-dimension random vectors and P be a probability distribution whose support is in Rd. We call O a P-intervention if for any d-dimensional random vector X , X ∼ P⇒ O(X) ∼ P.\nSince the noise distribution in GAN models is usually taken to be standard Gaussian, we use the standard Gaussian distribution as the default choice of P and abbreviate the P-intervention as intervention, unless otherwise claimed. One of the simplest groups of interventions is block substitution. Let Z ∈ Rd be a random vector, k ∈ N and k|d. We slice Z into k blocks so that every block is in R d k . A block substitution intervention Oi is to replace the ith block of Z with Gaussian noise, i = 1, . . . , dk . We will use block substitution interventions in the rest of the paper unless otherwise specified. Note that our theoretical analysis as well as the algorithmic framework do not depend on the specific choice of the intervention group.\nNotation We use E,G,D, f to represent encoder, generator, discriminator and classifier, respectively. Here and later, preal means the distribution of the real data X , and pz is the prior distribution of noise z defined on the latent space (usually is taken to be Gaussian). Let Oi, i = 1, . . . , k denote k different interventions and pi be the distribution of intervened sample Xi created from Oi (namely Xi = G(Oi(E(X)))).\nIntervention loss The intervention loss is the core of our approach. More specifically, given a latent representation z that is generated by an encoder network E, we sample an intervention Oi from a complete group S = {O1, . . . , Ok} and obtain the corresponding intervened latent variable Oi(z) with label ei. These perturbed latent representations are then fed into the generator to produce\nintervened samples. We then introduce an auxiliary classifier network to identify which intervention operations may lead to these intervened samples. The intervention loss LIV (G,E) is simply the resulting negative cross-entropy loss and we add that as a regularizer to the objective function when training the generator. As we can see, the intervention loss is used to penalize the dissimilarity of the distributions of the images generated by different intervention operations. 
Moreover, it can be noticed that the classifier and the combination of the generator and the encoder are playing a two-player adversarial game and we will train them in an adversarial manner. In particular, we define\nLIV (G,E) = −min f Vclass, where Vclass = Ei∼U([k])Ex′∼pi − eTi log f(x′). (3)\nTheorem 1 (Optimal Classifier). The optimal solution of the classifier is the conditional probability of label y given X ′, where X ′ is the intervened sample generated by the intervention operation sampled from S. And the minimum of the cross entropy loss is equivalent with the negative of the Jensen Shannon divergence among {p1, p2, ..., pk}. That is,\nf∗i (x) = pi(x)∑k j=1 pj(x) and LIV (G,E) = JS(p1, p2, ..., pk) + Const. (4)\nThe proof can be found in Appendix A.1. Clearly, the intervention loss is an approximation of the multi-JS divergence among the intervened distributions {pi : i ∈ [k]}. If the intervention reaches its global minimum, we have p1 = p2 = · · · = pk. And it reaches the maximum log k if and only if the supports of these k distributions do not intersect with each other. This way, the probability that the ‘multi’ JS-divergence has constant value is much smaller, which means the phenomenon of gradient vanishing should be rare in IVGAN. Moreover, as shown in the following example, due to these auxiliary intervened distributions, the intervention is able to provide more informative gradient for the generator that is not previously available in other GAN variants.\nExample 1 (Square fitting). Let X0 be a random vector with distribution U(α), where α = [− 12 , 1 2 ] × [− 1 2 , 1 2 ]. And X1 ∼ U(β), where β = [a− 12 , a+ 1 2 ]× [ 1 2 , 3 2 ] and 0 ≤ a ≤ 1. Assuming we have a perfect discriminator (or classifier), we compute the vanilla GAN loss (i.e. the JS-divergence) and the intervention loss between these two distributions, respectively,\n• JS(X0‖X1) = log 2.\n• In order to compute the intervention loss we need figure out two intervened samples’ distributions evolved from U(α) and U(β). Y1 ∼ U(γ1); γ1 = [− 12 , 1 2 ] × [ 1 2 , 3 2 ] and Y2 ∼\nU(γ2); γ2 = [a − 12 , a + 1 2 ] × [− 1 2 , 1 2 ]. Then the intervention loss is the multi JS-divergence among these four distributions:\nLIV = JS(X0;X1;Y1;Y2) = − ∫ Ac 1 4 log 1 4 dµ− ∫ A 1 2 log 1 2 dµ−H(X0) = log 2 2 [µ(Ac)+µ(A)]\n= log 2\n2 × 2(2− a)−H(X0) = −(log 2)a− Const.\nHere A is the shaded part in Figure 2 and Ac = {α ∪ β ∪ γ1 ∪ γ2} \\A. The most important observation is that the intervention loss is a function of parameter a and the traditional GAN loss is always constant. When we replace the JS with other f -divergence, the metric between U(α) and U(β) would still remain constant. 
Hence in this situation, we can not get any information from the standard JS for training of the generator but the intervention loss works well.\nAlgorithm 1 Intervention GAN Input learning rate α, regularization parameters λ and µ, dimension d of latent space, number k of blocks in which the hidden space is divided, minibatch size n, Hadamard multiplier ∗\n1: for number of training iterations do 2: Sample minibatch zj , j = 1, ..., n, zj ∼ pz 3: Sample minibatch xj , j = 1, ..., n, xj ∼ preal 4: for number of inner iteration do 5: wj ← E(xj), j = 1, ..., n 6: Sample Gaussian noise 7: Sample ij ∈ [k], j = 1, ..., n 8: x′j ← G(Oij (wj)) 9: Update the parameters of D by:\n10: θD ← θD − α2n∇θDLadv(θD) 11: Update the parameters of f by:\n12: θf ← θf + αn∇θf n∑ j=1 log fij (x ′ j) 13: Calculate LAdv and LIV 14: Update the parameter of G by: 15: θG ← θG + αn∇θG { L̂Adv + λL̂recon + µL̂IV\n} 16: Update the parameter of E by: 17: θE ← θE + αn∇θE { λL̂recon + µL̂IV }\nReconstruction loss In some sense we expect our encoder to be a reverse function of the generator. So it is necessary for the objective function to have a term to push the map composed of the Encoder and the Generator to have the ability to reconstruct the real samples. Not only that, we also hope that the representation can be reconstructed from samples in the pixel space.\nFormally, the reconstruction loss can be defined by the `p-norm (p ≥ 1) between the two samples, or in the from of the Wasserstein distance between samples if images are regarded as a histogram. Here we choose to use the `1-norm as the reconstruction loss:\nLrecon = EX∼preal‖G(E(X))−X‖1 +Ei∼U([k])Ex,z∼preal,pz‖E(G(Oi(z)))−Oi(z)‖1. (5)\nTheorem 2 (Inverse Distribution). Suppose the cumulative distribution function of Oi(z) is qi. For any given positive real number , there exist a δ > 0 such that if Lrecon +LIV ≤ δ , then ∀i, j ∈ [k], sup r ‖qi(r)− qj(r)‖ ≤ .\nThe proof is in A.2.\nAdversarial loss The intervention loss and reconstruction loss can be added as regularization terms to the adversarial loss in many GAN models, e.g., the binary cross entropy loss in vanilla GAN and the least square loss in LSGAN. In the experiments, we use LSGAN (Mao et al., 2016) and DCGAN (Radford et al., 2015) as our base models, and name the resulting IVGAN models IVLSGAN and IVDCGAN respectively.\nNow that we have introduced the essential components in the objective of IVGAN, we can write the loss function of the entire model:\nLmodel = LAdv + λLrecon + µLIV , (6)\nwhere λ and µ are the regularization coefficients for the reconstruction loss and the intervention loss respectively. We summarize the training procedure in Algorithm 1. A diagram of the full workflow of our framework can be found in Figire 3." }, { "heading": "4 EXPERIMENTS", "text": "In this section we conduct a series of experiments to study IVGAN from multiple aspects. First we evaluate IVGAN’s performance on standard real-world datasets. Then we show IVGAN’s ability to tackle the mode collapse problem on the stacked MNIST dataset. Finally, through an ablation study we investigate the performance of our proposed method under different settings of hyperparameters and demonstrate the effectiveness of the intervention loss.\nWe implement our models using PyTorch (Paszke et al., 2019) with the Adam optimizer (Kingma & Ba, 2015). Network architectures are fairly chosen among the baseline methods and IVGAN (see\nTable B.1 in the appendix for more details). 
The classifier we use to compute the intervention loss shares the parameters with the discriminator except for the output layer. All input images are resized to have 64× 64 pixels. We use 100-dimensional standard Gaussian distribution as the prior pz . We deploy the instance noise technique as in (Jenni & Favaro, 2019). One may check Appendix B.2 for detailed hyperparameter settings. All experiments are run on one single NVIDIA RTX 2080Ti GPU. Although IVGAN introduces extra computational complexities to the original framework of GANs, the training cost of IVGAN is within an acceptable range1 due to the application of strategies like parameter sharing.\nFigure 4: Random samples of generated images on MNIST, CIFAR-10, LSUN and STL-10.\nReal-world datasets experiments We first test IVGAN on four standard real-world datasets, including CIFAR-10 (Krizhevsky, 2009), MNIST (Lecun et al., 1998), STL-10 (Coates et al., 2011), and a subclass named “church_outdoor\" of the LSUN dataset (Yu et al., 2015), to investigate its training stability and quality of the generated images. We use the Frèchet Inception Distance (FID) (Heusel et al., 2017) to measure the performance of all methods.\nThe FID results are listed in Table 1, and the training curves of the baseline methods and IVGAN on four different datasets are shown in Figure 5. We see that on each datasets, the IVGAN counterparts obtain better FID scores than their corresponding baselines. Moreover, the figure of training curves also suggests the learning processes of IVDCGAN and IVLSGAN are smoother and steadier compared to DCGAN, LSGAN or MRGAN (Che et al., 2016), and converge faster than WGAN-GP. Samples of generated images on all datasets are presented in Figure 4.\nStacked MNIST experiments The original MNIST dataset contains 70K images of 28 × 28 handwritten digits. Following the same approaches in Metz et al. (2017); Srivastava et al. (2017); Lin et al. (2018), we increase the number of modes of the dataset from 10 to 1000 = 10× 10× 10 by\n1Empirically IVGANs are approximately 2 times slower than their corresponding baseline methods.\nstacking three random MNIST images into a 28×28×3 RGB image. The metric we use to evaluate a model’s robustness to mode collapse problem is the number of modes captured by the model, as well as the KL divergence between the generated distribution over modes and the true uniform distribution. The mode of a generated imaged is found from a pre-trained MNIST digit classifier.\nOur results are shown in Table 2. It can be seen that our model works very well to prevent the mode collapse problem. Both IVLSGAN and IVDCGAN are able to reach all 1,000 modes and greatly outperforms early approaches to mitigate mode collapse, such as VEEGAN (Srivastava et al., 2017), and Unrolled GAN (Metz et al., 2017). Moreover, the performance of our model is also comparable to method that is proposed more recently, such as the PacDCGAN (Lin et al., 2018). Figure 6 shows images generated randomly by our model as well as the baseline methods.\nAblation study Our ablation study is conducted on the CIFAR-10 dataset. First, we show the effectiveness of the intervention loss. We consider two cases, IVLSGAN without the intervention loss (µ = 0), and standard IVLSGAN (µ = 0.5). 
From Figure 7 we can find that the intervention loss makes the training process much smoother and leads to a lower FID score in the end.\nWe also investigate the performance of our model using different number of blocks for the block substitution interventions and different regularization coefficients for the intervention loss. The results are presented in Table 3. It can be noticed that to some extent our models’ performance is not sensitive to the choice of hyperparameters and performs well under several different hyperparameter settings. However, when the number of blocks or the scale of IV loss becomes too large the performance of our model gets worse." }, { "heading": "5 CONCLUSION", "text": "We have presented a novel model, intervention GAN (IVGAN), to stabilize the training process of GAN and alleviate the mode collapse problem. By introducing auxiliary Gaussian invariant interventions to the latent space of real images and feeding these perturbed latent representations into the generator, we have created intermediate distributions that interpolate between the generated distribution of GAN and the data distribution. The intervention loss based on these auxiliary intervened distributions, together with the reconstruction loss, are added as regularizers to the objective to provide more informative gradients for the generator, significantly improving GAN’s training stability and alleviating the mode collapse problem as well.\nWe have conducted a detailed theoretical analysis of our proposed approach, and illustrated the advantage of the proposed intervention loss on a toy example. Experiments on both real-world datasets and the stacked MNIST dataset demonstrate that, compared to the baseline methods, IVGAN variants are stabler and smoother during training, and are able to generate images of higher quality (achieving state-of-the-art FID scores) and diversity. We believe that our proposed approach can also be applied to other generative models such as Adversarial Autoencoders (Makhzani et al., 2015), which we leave to future work." }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 1", "text": "Proof. The conditional probability ofX ′ given label can be written asP(X ′|ei) = pi(X ′), so further\nP(X ′, ei) = 1 kpi. And we denote the marginal distribution of x as p(x) = 1 k k∑ i=1 pi(x). Cause the activation function at the output layer of the classifier is softmax, we can rewrite the loss function into a more explicit form:\nVclass(f) = Ei∼U [k]Ex′∼pi − eTi log f(x′) = Ei∼U [k]Ex′∼pi − log fi(x)\n= 1\nk ∫ k∑ i=1 −pi(x) log fi(x)dx = ∫ p(x) { − k∑ i=1 p(ei|x) log fi(x) } dx.\nLet gi(x) = fi(x) p(ei|x) , then k∑ i=1 p(ei|x)gi(x) = 1. And notice that k∑ i=1 p(ei|x) = 1. By Jensen’s inequality, we have: k∑ i=1 −p(ei|x) log fi(x) = k∑ i=1 −p(ei|x) log[gi(x)p(ei|x)]\n= k∑ i=1 −p(ei|x) log gi(x) +H(p(·|x)) ≥ log k∑ i=1 p(ei|x)gi(x) +H(pi(·|x))\n= log 1 +H(p(·|x)) = H(p(·|x)). And Vclass(f∗) = ∫ p(x)H(pi(·|x))dx if and only if g∗i (x) = g∗j (x) for any i 6= j, which means\nthat f ∗ i (x) p(ei|x) = r ∀i ∈ [k], where r ∈ R. Notice that k∑ i=1 f∗i (x) = 1, it is not difficult to get that f∗i (x) = p(ei|x). The loss function becomes\n1\nk ∫ k∑ i=1 −pi(x) log p(ei|x)dx = −H(x) + k∑ i=1 1 k H(pi) + log k\n= −JS(p1, p2, ..., pk) + log k\n(7)" }, { "heading": "A.2 PROOF OF THEOREM 2", "text": "Proof. 
According to Theorem 1, for a given real number 1, we can find another δ1, when intervention loss is less than δ1, the distance between pi and pj under the measurement of JS-divergence is less than 1. And because JS-divergence and Total Variance distance (TV) are equivalent in the sense of convergence. So we can bound the TV-distance between pi and pj by their JS-divergence. Which means that ∫ |pi − pj |dx ≤ 0 when the intervention loss is less than 1 (we can according to the 0 to finding the appropriate 1). Using this conclusion we can deduce |P (E(G(Oi(z))) ≤ r)− P (E(G(Oj(z))) ≤ r)| ≤ 0, where r is an arbitrary vector inRd. Further, we have:\n|P (Oi(z) ≤ r)− P (Oj(z) ≤ r)| ≤ |P (Oi(z) ≤ r; ‖Oi(z)− E(G(Oi(z)))‖ > δ)| + |P (Oj(z) ≤ r; ‖Oj(z)− E(G(Oj(z)))‖ > δ)|+ |P (Oi(z) ≤ r; ‖Oi(z)− E(G(Oi(z)))‖ ≤ δ) − P (Oj(z) ≤ r; ‖Oj(z)− E(G(Oj(z)))‖ ≤ δ)|\n(8) We control the three terms on the right side of the inequality sign respectively.\nP (Oi(z) ≤ r; ‖Oi(z)− E(G(Oi(z)))‖ > δ)\n≤ P (‖Oi(z)− E(G(Oi(z)))‖ > δ) ≤ E‖Oi(z)− E(G(Oi(z)))‖\nδ\n(9)\nAnd the last term can be bounded by the reconstruction loss. The same trick can be used on P (Oj(z) ≤ r; ‖Oj(z)− E(G(Oj(z)))‖ > δ). Moreover, we have\nP (E(G(Oi(z))) ≤ r − δ)− P (‖Oi(z)− E(G(Oi(z)))‖ > δ) ≤P (Oi(z) ≤ r; ‖Oi(z)− E(G(Oi(z)))‖ ≤ δ) ≤ P (E(G(Oi(z))) ≤ r + δ)\n(10)\nNotice that lim δ→0 P (E(G(Oi(z))) ≤ r ± δ) = P (E(G(Oi(z))) ≤ r). Let si(r, δ) = |P (E(G(Oi(z))) ≤ r ± δ)) − P (E(G(Oi(z))) ≤ r)| then the last term of inequalityA.2 can be bounded as: |P (Oi(z) ≤ r;‖Oi(z)− E(G(Oi(z)))‖ ≤ δ)− P (Oj(z) ≤ r; ‖Oj(z)− E(G(Oj(z)))‖ ≤ δ)|\n≤|P (E(G(Oi(z))) ≤ r)− P (E(G(Oj(z))) ≤ r)|+ P (‖Oi(z)− E(G(Oi(z)))‖ > δ) + si(r, δ) + sj(r, δ)\n(11) Every term on the right hand of the inequality can be controlled close to 0 by the inequalities mentioned above." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "" }, { "heading": "B.1 NETWORK ARCHITECTURES", "text": "" }, { "heading": "B.2 HYPERPARAMETER SETTINGS", "text": "" }, { "heading": "C RELATED WORK", "text": "In order to address GAN’s unstable training and mode missing problems, many researchers have turned their attention to the latent representations of samples. Makhzani et al. (2015) proposed the Adversarial Autoencoder (AAE). As its name suggests, AAE is essentially a probabilistic autoencoder based on the framework of GANs. Unlike classical GAN models, in the setting of AAE the discriminator’s task is to distinguish the latent representations of real images that are generated by an encoder network from Gaussian noise. And the generator and the encoder are trained to fool the discriminator as well as reconstruct the input image from the encoded representations. However, the generator can only be trained by fitting the reverse of the encoder and cannot get any information from the latent representation.\nThe VAE-GAN (Larsen et al., 2015) combines the objective function from a VAE model with a GAN and utilizes the learned features in the discriminator for better image similarity metrics, which is of great help for the sample visual fidelity. Considering the opposite perspective, Che et al. (2016) claim that the whole learning process of a generative model can be divided into the manifold learning phase and the diffusion learning phase. And the former one is considered to be the source of the mode missing problem. (Che et al., 2016) then proposed Mode Regularized Generative Adversarial\nNets which introduce a reconstruction loss term to the training target of GAN to penalize the missing modes. 
It is shown that it actually ameliorates GAN’s ’mode missing’-prone weakness to some extent. However, both of them fail to fully excavate the impact of the interaction between VAEs and GANs.\nKim & Mnih (2018) proposed Factor VAE where a regularization term called total correlation penalty is added to the traditional VAE loss. The total correlation is essentially the Kullback-Leibler divergence between the joint distribution p(z1, z2, . . . , zd) and the product of marginal distribution p(zi). Because the closed forms of these two distribution are unavailable, Factor VAE uses adversarial training to approximate the likelihood ratio." } ]
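The identity in Theorem 1, V_class(f*) = log k − JS(p_1, ..., p_k), can be checked numerically for discrete distributions by plugging in the optimal classifier f_i*(x) = p(e_i|x). The following is a minimal sketch of such a check (ours, not part of the paper; the number of distributions, the support size, and the random distributions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 4, 6                              # k intervened distributions over m discrete states
P = rng.random((k, m))
P /= P.sum(axis=1, keepdims=True)        # rows: p_1, ..., p_k

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

p_bar = P.mean(axis=0)                   # marginal p(x) = (1/k) * sum_i p_i(x)
post = (P / k) / p_bar                   # posterior p(e_i | x), shape (k, m)

# Loss of the optimal classifier f_i(x) = p(e_i | x):
# V = (1/k) * sum_i sum_x -p_i(x) * log p(e_i | x)
V_opt = -(P * np.log(post)).sum() / k

# Generalized JS divergence with uniform weights
js = entropy(p_bar) - np.mean([entropy(P[i]) for i in range(k)])

print(V_opt, np.log(k) - js)             # the two numbers coincide, matching Equation (7)
```

The two printed values agree up to floating-point error, which is exactly the statement of Equation (7).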
2020
null
SP:a9c70bdca13ee3800c633589a6ee028701e5bf51
[ "This work proposed a dimensionality reduction algorithm called Uniform Manifold Approximation with Two-phase Optimization (UMATO), which is an improved version of UMAP (Ref. [3] see below). UMATO has a two-phase optimization approach: global optimization to obtain the overall skeleton of data & local optimization to identify the local structures. " ]
We present a dimensionality reduction algorithm called Uniform Manifold Approximation with Two-phase Optimization (UMATO) which produces less biased global structures in the embedding results and is robust over diverse initialization methods than previous methods such as t-SNE and UMAP. We divide the optimization into two phases to alleviate the bias by establishing the global structure early using the representatives of the high-dimensional structures. The phases are 1) global optimization to obtain the overall skeleton of data and 2) local optimization to identify the regional characteristics of local areas. In our experiments with one synthetic and three real-world datasets, UMATO outperformed widely-used baseline algorithms, such as PCA, Isomap, t-SNE, UMAP, topological autoencoders and Anchor t-SNE, in terms of quality metrics and 2D projection results.
[]
[ { "authors": [ "Josh Barnes", "Piet Hut" ], "title": "A hierarchical o (n log n) force-calculation", "venue": null, "year": 1986 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Tess Brodie", "Elena Brenna", "Federica Sallusto" ], "title": "Omip-018: Chemokine receptor expression on human t helper cells", "venue": "Cytometry Part A,", "year": 2013 }, { "authors": [ "Frédéric Chazal", "David Cohen-Steiner", "Quentin Mérigot" ], "title": "Geometric inference for probability measures", "venue": "Foundations of Computational Mathematics,", "year": 2011 }, { "authors": [ "Frédéric Chazal", "Brittany Fasy", "Fabrizio Lecci", "Bertrand Michel", "Alessandro Rinaldo", "Larry Wasserman" ], "title": "Robust topological inference: Distance to a measure and kernel distance", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese literature, 2018", "venue": null, "year": 2018 }, { "authors": [ "James Cook", "Ilya Sutskever", "Andriy Mnih", "Geoffrey Hinton" ], "title": "Visualizing similarity data with a mixture of maps", "venue": "In Artificial Intelligence and Statistics,", "year": 2007 }, { "authors": [ "Cong Fu", "Yonghui Zhang", "Deng Cai", "Xiang Ren" ], "title": "Atsne: Efficient and robust visualization on gpu through hierarchical optimization", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Dian Gong", "Xuemei Zhao", "Gerard Medioni" ], "title": "Robust multiple manifolds structure learning", "venue": "In Proceedings of the 29th International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Carmen Bravo González-Blas", "Liesbeth Minnoye", "Dafni Papasokrati", "Sara Aibar", "Gert Hulselmans", "Valerie Christiaens", "Kristofer Davie", "Jasper Wouters", "Stein Aerts" ], "title": "cistopic: cis-regulatory topic modeling on single-cell atac-seq data", "venue": "Nature methods,", "year": 2019 }, { "authors": [ "Antonio Gracia", "Santiago González", "Victor Robles", "Ernestina Menasalvas" ], "title": "A methodology to compare dimensionality reduction algorithms in terms of loss of quality", "venue": "Information Sciences,", "year": 2014 }, { "authors": [ "der Fabisch" ], "title": "scikit-optimize/scikit-optimize: v0.5.2, March 2018", "venue": "URL https://doi.org/", "year": 2018 }, { "authors": [ "Geoffrey E Hinton", "Sam Roweis" ], "title": "Stochastic neighbor embedding", "venue": "Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." ], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "John A Lee", "Michel Verleysen" ], "title": "Nonlinear dimensionality reduction", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "John A Lee", "Michel Verleysen" ], "title": "Quality assessment of dimensionality reduction", "venue": "Rank-based criteria. 
Neurocomputing,", "year": 2009 }, { "authors": [ "George C Linderman", "Manas Rachh", "Jeremy G Hoskins", "Stefan Steinerberger", "Yuval Kluger" ], "title": "Fast interpolation-based t-sne for improved visualization of single-cell rna-seq data", "venue": "Nature methods,", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Leland McInnes", "John Healy", "James Melville" ], "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "venue": "arXiv preprint arXiv:1802.03426,", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Michael Moor", "Max Horn", "Bastian Rieck", "Karsten Borgwardt" ], "title": "Topological autoencoders", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research. PMLR,", "year": 2020 }, { "authors": [ "Corey J Nolet", "Victor Lafargue", "Edward Raff", "Thejaswi Nanditale", "Tim Oates", "John Zedlewski", "Joshua Patterson" ], "title": "Bringing umap closer to the speed of light with gpu acceleration", "venue": null, "year": 2008 }, { "authors": [ "Nicola Pezzotti", "Julian Thijssen", "Alexander Mordvintsev", "Thomas Höllt", "Baldur Van Lew", "Boudewijn PF Lelieveldt", "Elmar Eisemann", "Anna Vilanova" ], "title": "Gpgpu linear complexity tsne optimization", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2019 }, { "authors": [ "Vin D Silva", "Joshua B Tenenbaum" ], "title": "Global versus local methods in nonlinear dimensionality reduction", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "Josef Spidlen", "Karin Breuer", "Chad Rosenberg", "Nikesh Kotecha", "Ryan R Brinkman" ], "title": "Flowrepository: A resource of annotated flow cytometry datasets associated with peer-reviewed publications", "venue": "Cytometry Part A,", "year": 2012 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "Line: Large-scale information network embedding", "venue": "In Proceedings of the 24th international conference on world wide web,", "year": 2015 }, { "authors": [ "Jian Tang", "Jingzhou Liu", "Ming Zhang", "Qiaozhu Mei" ], "title": "Visualizing large-scale and highdimensional data", "venue": "In Proceedings of the 25th international conference on world wide web,", "year": 2016 }, { "authors": [ "Bosiljka Tasic", "Zizhen Yao", "Lucas T Graybuck", "Kimberly A Smith", "Thuc Nghi Nguyen", "Darren Bertagnolli", "Jeff Goldy", "Emma Garren", "Michael N Economo", "Sarada Viswanathan" ], "title": "Shared and distinct transcriptomic cell types across neocortical", "venue": "areas. Nature,", "year": 2018 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Dmitry Ulyanov" ], "title": "Multicore-tsne. 
https://github.com/DmitryUlyanov/ Multicore-TSNE, 2016", "venue": null, "year": 2016 }, { "authors": [ "Koen Van den Berge", "Hector Roux De Bezieux", "Kelly Street", "Wouter Saelens", "Robrecht Cannoodt", "Yvan Saeys", "Sandrine Dudoit", "Lieven Clement" ], "title": "Trajectory-based differential expression analysis for single-cell sequencing data", "venue": "Nature communications,", "year": 2020 }, { "authors": [ "Laurens Van Der Maaten" ], "title": "Accelerating t-sne using tree-based algorithms", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Jarkko Venna", "Samuel Kaski" ], "title": "Neighborhood preservation in nonlinear projection methods: An experimental study", "venue": "In International Conference on Artificial Neural Networks,", "year": 2001 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Lin Yan", "Yaodong Zhao", "Paul Rosen", "Carlos Scheidegger", "Bei Wang" ], "title": "Homology-preserving dimensionality reduction via manifold landmarking and tearing", "venue": "In Visualization in Data Science (VDS),", "year": 2018 }, { "authors": [ "Yuansheng Zhou", "Tatyana O Sharpee" ], "title": "Using global t-sne to preserve inter-cluster data structure. bioRxiv, 2018", "venue": null, "year": 2018 }, { "authors": [ "Lee", "Verleysen" ], "title": "The first two metrics are used to test the preservation of global structures and the last two metrics are suggested for the preservation of the local structures. Distance To a Measure considers the dispersion of high- and low-dimensional", "venue": null, "year": 2007 }, { "authors": [ "Moor" ], "title": "σ (z) where x is the point in high-dimensional space X and the z is the corresponding projected point in low-dimensional space Z, we can examines the similarity of two datasets. In our experiments, we used the Euclidean distance and the values were normalized between 0 and 1", "venue": "The σ ∈ R>0,", "year": 2020 }, { "authors": [ "scikit-optimize (Head" ], "title": "2018)) to find the best hyperparameters for topological autoencoders. UMATO has several hyperparameters such as the number of hub points, the number of epochs, and the learning rate for global and local optimization. In our experiments, we configured everything except the number of hub points to the same setting for UMATO. We used 200 hub points for the Spheres dataset and had 300 hubs", "venue": null, "year": 2018 }, { "authors": [ "Tasic" ], "title": "Each cell belongs to one of 133 clusters defined by Jaccard–Louvain clustering (for more than 4,000 cells) or a combination of k-means and Ward’s hierarchical clustering. Likewise, each cluster belongs to one of 4 classes: GABAergic (red/purple), Endothelial (brown), Glutamatergic (blue/green), Non-Neuronal (dark green). The embedding result for each method is given in Figure 11. In the case of t-SNE, clusters", "venue": null, "year": 2018 }, { "authors": [ "Van den Berge" ], "title": "data analysis where the researchers want to know the distance between samples González-Blas et al", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "We present a novel dimensionality reduction method, Uniform Manifold Approximation with Twophase Optimization (UMATO) to obtain less biased and robust embedding over diverse initialization methods. One effective way of understanding high-dimensional data in various domains is to reduce its dimensionality and investigate the projection in a lower-dimensional space. The limitation of previous approaches such as t-Stochastic Neighbor Embedding (t-SNE, Maaten & Hinton (2008)) and Uniform Manifold Approximation and Projection (UMAP, McInnes et al. (2018)) is that they are susceptible to different initialization methods, generating considerably different embedding results (Section 5.5).\nt-SNE adopts Kullback-Leibler (KL) divergence as its loss function. The fundamental limitation of the KL divergence is that the penalty for the points that are distant in the original space being close in the projected space is too little (Appendix B). This results in only the local manifolds being captured, while clusters that are far apart change their relative locations from run to run. Meanwhile, UMAP leverages the cross-entropy loss function, which is known to charge a penalty for points that are distant in the original space being close in the projection space and for points that are close in the original space being distant in the projection space (Appendix B). UMAP considers all points in the optimization at once with diverse sampling techniques (i.e., negative sampling and edge sampling). Although the approximation technique in UMAP optimization makes the computation much faster, this raises another problem that the clusters in the embedding become dispersed as the number of epochs increases (Appendix K), which can lead to misinterpretation. UMAP tried to alleviate this by using a fixed number (e.g., 200), which is ad hoc, and by applying a learning rate decay. However, the optimal number of epochs and decay schedule for each initialization method needs to be found in practice.\nTo solve the aforementioned problems, we avoid using approximation during the optimization process, which normally would result in greatly increased computational cost. Instead, we first run optimization only with a small number of points that represent the data (i.e., hub points). Finding the optimal projection for a small number of points using a cross-entropy function is relatively easy and robust, making the additional techniques employed in UMAP unnecessary. Furthermore, it is less sensitive to the initialization method used (Section 5.5). After capturing the overall skeleton of the high-dimensional structure, we gradually append the rest of the points in subsequent phases. Although the same approximation technique as UMAP is used for these points, as we have already embedded the hub points and use them as anchors, the projections become more robust and unbiased. The gradual addition of points can in fact be done in a single phase; we found additional phases\ndo not result in meaningful improvements in the performance but only in the increased computation time (Section 4.5). Therefore, we used only two phases in UMAP: global optimization to capture the global structures (i.e., the pairwise distances in a high-dimensional space) and local optimization to retain the local structures (i.e., the relationships between neighboring points in a high-dimensional space) of the data.\nWe compared UMATO with popular dimensionality reduction techniques including PCA, Isomap (Tenenbaum et al. 
(2000)), t-SNE, UMAP, topological autoencoders (Moor et al. (2020)) and At-SNE (Fu et al. (2019)). We used one synthetic (101-dimensional spheres) and three realworld (MNIST, Fashion MNIST, and Kuzushiji MNIST) datasets and analyzed the projection results with several quality metrics. In conclusion, UMATO demonstrated better performance than the baseline techniques in all datasets in terms of KLσ with different σ values, meaning that it reasonably preserved the density of data over diverse length scales. Finally, we presented the 2D projections of each dataset, including the replication of an experiment using the synthetic Spheres dataset introduced by Moor et al. (2020) where data points locally constitute multiple small balls globally contained in a larger sphere. Here, we demonstrate that UMATO can better preserve both structures compared to the baseline algorithms (Figure 3)." }, { "heading": "2 RELATED WORK", "text": "Dimensionality reduction. Most previous dimensionality reduction algorithms focused on preserving the data’s local structures. For example, Maaten & Hinton (2008) proposed t-SNE, focusing on the crowding problem with which the previous attempts (Hinton & Roweis (2002); Cook et al. (2007)) have struggled, to visualize high-dimensional data through projection produced by performing stochastic gradient descent on the KL divergence between two density functions in the original and projection spaces. Van Der Maaten (2014) accelerated t-SNE developing a variant of the Barnes-Hut algorithm (Barnes & Hut (1986)) and reduced the computational complexity from O(N2) into O(N logN). After that, grounded in Riemannian geometry and algebraic topology, McInnes et al. (2018) introduced UMAP as an alternative to t-SNE. Leveraging the cross-entropy function as its loss function, UMAP reduced the computation time by employing negative sampling from Word2Vec (Mikolov et al. (2013)) and edge sampling from LargeVis (Tang et al. (2015; 2016)) (Table 1). Moreover, they showed that UMAP can generate stable projection results compared to t-SNE over repetition.\nOn the other hand, there also exist algorithms that aim to capture the global structures of data. Isomap (Tenenbaum et al. (2000)) was proposed to approximate the geodesic distance of highdimensional data and embed it onto the lower dimension. Global t-SNE (Zhou & Sharpee (2018)) converted the joint probability distribution, P , in the high-dimensional space from Gaussian to Student’s-t distribution, and proposed a variant of KL divergence. By adding it with the original loss function of t-SNE, Global t-SNE assigns a relatively large penalty for a pair of distant data points in high-dimensional space being close in the projection space. Another example is topological autoencoders (Moor et al. (2020)), a deep-learning approach that uses a generative model to make the latent space resemble the high-dimensional space by appending a topological loss to the original reconstruction loss of autoencoders. However, they required a huge amount of time for hyperparameter exploration and training for a dataset, and only focused on the global aspect of data. Unlike other techniques that presented a variation of loss functions in a single pipeline, UMATO is novel as it preserves both structures by dividing the optimization into two phases; this makes it outperform the baselines with respect to quality metrics in our experiments.\nHubs, landmarks, and anchors. 
Many dimensionality reduction techniques have tried to draw sample points to better model the original space; these points are usually called hubs, landmarks, or anchors. Silva & Tenenbaum (2003) proposed Landmark Isomap, a landmark version of classical multidimensional scaling (MDS) to alleviate its computation cost. Based on the Landmark Isomap, Yan et al. (2018) tried to retain the topological structures (i.e., homology) of high-dimensional data by approximating the geodesic distances of all data points. However, both techniques have the limitation that landmarks were chosen randomly without considering their importance. UMATO uses a k-nearest neighbor graph to extract significant hubs that can represent the overall skeleton of high-dimensional data. The most similar work to ours is At-SNE (Fu et al. (2019)), which optimized the anchor points and all other points with two different loss functions. However, since the anchors wander during the optimization and the KL divergence does not care about distant points, it hardly\ncaptures the global structure. UMATO separates the optimization process into two phases so that the hubs barely moves but guides other points so that the subareas manifest the shape of the highdimensional manifold in the projection. Applying different cross-entropy functions to each phase also helps preserve both structures." }, { "heading": "3 UMAP", "text": "Since UMATO shares the overall pipeline of UMAP (McInnes et al. (2018)), we briefly introduce UMAP in this section. Although UMAP is grounded in a sophisticated mathematical foundation, its computation can be simply divided into two steps, graph construction and layout optimization, a configuration similar to t-SNE. In this section, we succinctly explain the computation in an abstract manner. For more details about UMAP, please consult the original paper (McInnes et al. (2018)).\nGraph Construction. UMAP starts by generating a weighted k-nearest neighbor graph that represents the distances between data points in the high-dimensional space. Given an input dataset X = {x1, . . . , xn}, the number of neighbors to consider k and a distance metric d : X × X → [0,∞), UMAP first computes Ni, the k-nearest neighbors of xi with respect to d. Then, UMAP computes two parameters, ρi and σi, for each data point xi to identify its local metric space. ρi is a nonzero distance from xi to its nearest neighbor:\nρi = min j∈Ni {d(xi, xj) | d(xi, xj) > 0}. (1)\nUsing binary search, UMAP finds σi that satisfies:∑ j∈Ni exp(−max(0, d(xi, xj)− ρi)/σi) = log2(k). (2)\nNext, UMAP computes:\nvj|i = exp(−max(0, d(xi, xj)− ρi)/σi), (3)\nthe weight of the edge from a point xi to another point xj . To make it symmetric, UMAP computes vij = vj|i + vi|j − vj|i · vi|j , a single edge with combined weight using vj|i and vi|j . Note that vij indicates the similarity between points xi and xj in the original space. Let yi be the projection of xi in a low-dimensional projection space. The similarity between two projected points yi and yj is wij = (1 + a||yi − yj ||2b2 )−1, where a and b are positive constants defined by the user. Setting both a and b to 1 is identical to using Student’s t-distribution to measure the similarity between two points in the projection space as in t-SNE (Maaten & Hinton (2008)).\nLayout Optimization. The goal of layout optimization is to find the yi that minimizes the difference (or loss) between vij and wij . 
Unlike t-SNE, UMAP employs the cross entropy:\nCUMAP = ∑ i 6=j [vij · log(vij/wij)− (1− vij) · log((1− vij)/(1− wij))], (4)\nbetween vij andwij as the loss function. UMAP initializes yi through spectral embedding (Belkin & Niyogi (2002)) and iteratively optimize its position to minimize CUMAP . Given the output weight wij as 1/(1 + ad2bij ), the attractive gradient is:\nCUMAP yi\n+ = −2abd2(b−1)ij 1 + ad2bij vij(yi − yj), (5)\nand the repulsive gradient is:\nCUMAP yi\n− =\n2b\n( + d2ij)(1 + ad 2b ij )\n(1− vij)(yi − yj), (6)\nwhere is a small value added to prevent division by zero and dij is a Euclidean distance between yi and yj . For efficient optimization, UMAP leverages the negative sampling technique from Word2Vec (Mikolov et al. (2013)). After choosing a target point and its negative samples, the position of the target is updated with the attractive gradient, while the positions of the latter do so with\nthe repulsive gradient. Moreover, UMAP utilizes edge sampling (Tang et al. (2015; 2016)) to accelerate and simplify the optimization process (Table 1). In other words, UMAP randomly samples edges with a probability proportional to their weights, and subsequently treats the selected ones as binary edges. Considering the previous sampling techniques, the modified objective function is:\nO = ∑\n(i,j)∈E\nvij(log(wij) + M∑ k=1 Ejk∼Pn(j)γ log(1− wijk)). (7)\nHere, vij and wij are the similarities in the high and low-dimensional spaces respectively, M is the number of negative samples and Ejk∼Pn(j) indicates that jk is sampled according to a noisy distribution, Pn(j), from Word2Vec (Mikolov et al. (2013))." }, { "heading": "4 UMATO", "text": "Figure 2 illustrates the computation pipeline of UMATO, which delineates the two-phase optimization (see Figure 9 for a detailed illustration of the overall pipeline). As a novel approach, we split the optimization into global and local so that it could generate a low-dimensional projection keeping both structures well-maintained. We present the pseudocode of UMATO in Appendix A, and made the source codes of it publicly available1.\n4.1 POINTS CLASSIFICATION\nIn the big picture, UMATO follows the pipeline of UMAP. We first find the k-nearest neighbors in the same way as UMAP, by assuming the local connectivity constraint, i.e., no single point is isolated and each point is connected to at least a user-defined number of points. After calculating ρ (Equation 1) and σ (Equation 2) for each point, we obtain the pairwise similarity for every pair of points. Once the k-nearest neighbor indices are established, we unfold it and check the frequency of each point to sort them into descending order so that the index of the popular points come to the front.\nThen, we build a k-nearest neighbor graph by repeating the following steps until no points remain unconnected: 1) choose the most frequent point as a hub among points that are not already connected, 2) retrieve the k-nearest neighbors of the chosen point (i.e., hub), and the points selected from steps 1 and 2 will become a connected component. The gist is that we divide the points into three disjoint sets: hubs, expanded nearest neighbors, and outliers (Figure 1). Thanks to the sorted indices, the most popular point in each iteration—but not too densely located—becomes the hub point. Once the hub points are determined, we recursively seek out their nearest neighbors and again look for the nearest neighbors of those neighbors, until there are no points to be newly appended. 
In\nother words, we find all connected points that are expanded from the original hub points, which, in turn, is called the expanded nearest neighbors. Any remaining point that is neither a hub point nor a part of any expanded nearest neighbors is classified as an outlier. The main reason to rule out the outliers is, similar to the previous approach (Gong et al. (2012)), to achieve the robustness of the practical manifold learning algorithm. As the characteristics of these classes differ significantly, we take a different approach for each class of points to obtain both structures. That is, we run global optimization for the hub points (Section 4.2), local optimization for the expanded nearest neighbors (Section 4.3), and no optimization for the outliers (Section 4.4). In the next section we explain each in detail." }, { "heading": "4.2 GLOBAL OPTIMIZATION", "text": "After identifying hub points, we run the global optimization to retrieve the skeletal layout of the data. First, we initialize the positions of hub points using PCA, which makes the optimization process\n1https://www.github.com/anonymous-author/anonymous-repo\nfaster and more stable than using random initial positions. Next, we optimize the positions of hub points by minimizing the cross-entropy function (Equation 4). Let f(X) = {f(xi, xj)|xi, xj ∈ X} and g(Y ) = {g(yi, yj)|yi, yj ∈ Y } be two adjacency matrices in high- and low-dimensional spaces. If Xh represents a set of points selected as hubs in high-dimensional space, and Yh is a set of corresponding points in the projection, we minimize the cross entropy—CE(f(Xh)||g(Yh))— between f(Xh) and g(Yh).\nTable 1: The runtime for each algorithm using MNIST dataset. UMAP and UMATO take much less time than MulticoreTSNE (Ulyanov (2016)) when tested on a Linux server with 40-core Intel Xeon Silver 4210 CPUs. The runtimes are averaged over 10 runs. Isomap (Tenenbaum et al. (2000)) took more than 3 hours to get the embedding result.\nAlgorithm Runtime (s)\nIsomap 3 hours > t-SNE 374.85 ± 11.38 UMAP 26.10 ± 3.97\nUMATO 73.32 ± 8.39\nUMAP computes the cross-entropy between all existing points using two sampling techniques, edge sampling and negative sampling, for speed (Table 1). However, this often ends up capturing only the local properties of data because of the sampling biases and thus it cannot be used for cases that require a comprehensive understanding of the data. On the other hand, in its first phase, UMATO only optimizes for representatives (i.e., the hub points) of data, which takes much less time but can still approximate the manifold effectively." }, { "heading": "4.3 LOCAL OPTIMIZATION", "text": "In the second phase, UMATO embeds the expanded nearest neighbors to the projection that is computed using only the hub points from the first phase. For each point in the expanded nearest neighbors, we retrieve its nearest m (e.g., 10) hubs in the original high-dimensional space and set its initial position in the projection to the average positions of the hubs in the projection with small random perturbations. We follow a similar optimization process as UMAP in the local optimization with small differences. As explained in Section 3, UMAP first constructs the graph structure; we perform the same task but only with the hubs and expanded nearest neighbors. While doing this, since some points are excluded as outliers, we need to update the k-nearest neighbor indices. 
This is fast because we recycle the already-built k-nearest neighbor indices by updating the outliers to the new nearest neighbor.\nOnce we compute the similarity between points (Equation 3), to optimize the positions of points, similar to UMAP, we use the cross-entropy loss function with edge sampling and negative sampling (Equation 7). Here, we try to avoid moving the hubs as much as possible since they have already formed the global structure. Thus, we only sample a point p among the expanded nearest neighbors as one end of an edge, while the point q at the other end of the edge can be chosen from all points except outliers. In UMAP implementation, when q pulls in p, p also drags q to facilitate the optimization (Equation 5). When updating the position of q, we only give a penalty to this (e.g., 0.1), if q is a hub point, not letting its position excessively be affected by p. In addition, because the repulsive force can disperse the local attachment, making the point veer off for each epoch and eventually destroying the well-shaped global layout, we multiply a penalty (e.g., 0.1) when calculating the repulsive gradient (Equation 6) for the points selected as negative samples." }, { "heading": "4.4 OUTLIERS ARRANGEMENT", "text": "Since the isolated points, which we call outliers, mostly have the same distance to all the other data points in high-dimensional space, due to the curse of dimensionality, they both sabotage the global structure we have already made and try to mingle with all other points, thus distorting the overall projection. We do not optimize these points but instead simply append them using the alreadyprojected points (e.g., hubs or expanded nearest neighbors), that belong to each outlier’s connected component of the nearest neighbor graph. That is, if xi ∈ Cn where xi is the target outlier and Cn is the connected component to which xi belongs, we find xj ∈ Cn that has already been projected and is closest to xi. We arrange yi which corresponds to xi in low-dimensional space using the position of yj in the same component offset by a random noise. In this way, we can benefit from the comprehensive composition of the projection that we have already optimized when arranging the outliers. We can ensure that all outliers can find a point as its neighbor since we picked hubs from each connected component of the nearest neighbor graph and thus at least one point is already located and has an optimized position (Section 4.2)." }, { "heading": "4.5 MULTI-PHASE OPTIMIZATION", "text": "The optimization of UMATO can be easily expanded to multiple phases (e.g., three or more phases). Since we have a recursive procedure to expand the nearest neighbors, we can insert the optimization process each time we expand the neighbors to create a multi-phase algorithm. However, our experiment with three- and four-phase optimization with the Fashion MNIST dataset showed that there is no big difference between two-phase optimization and that with more than two phases. Appendix C contains the quantitative and qualitative results of the experiment for multi-phase optimization." }, { "heading": "5 EXPERIMENTS", "text": "We conducted experiments to evaluate UMATO’s ability to capture the global and local structures of high-dimensional data. We compared UMATO with six baseline algorithms, PCA, Isomap, t-SNE, UMAP, topological autoencoders, and At-SNE in terms of global (i.e., DTM and KLσ) and local (i.e., trustworthiness, continuity, and MRREs) quality metrics." 
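As a concrete reading of the point-classification step in Section 4.1, the following minimal sketch takes a precomputed k-nearest-neighbor index array (as produced by any k-NN routine) and splits the points into hubs, expanded nearest neighbors, and outliers. The function name, the n_hubs argument, and the tie-breaking details are our own assumptions for illustration and are not taken from the released UMATO implementation:

```python
import numpy as np

def classify_points(knn_indices, n_hubs):
    """Split points into hubs, expanded nearest neighbors, and outliers."""
    n, k = knn_indices.shape
    # "Popularity" of each point: how often it appears in the flattened k-NN lists.
    freq = np.bincount(knn_indices.ravel(), minlength=n)
    order = np.argsort(-freq)                    # most frequent points first

    hubs, connected = [], np.zeros(n, dtype=bool)
    for idx in order:                            # greedy hub selection
        if connected[idx]:
            continue                             # skip points already covered by a hub
        hubs.append(idx)
        connected[idx] = True
        connected[knn_indices[idx]] = True       # a hub covers its k neighbors
        if len(hubs) == n_hubs:
            break

    # Recursively expand the hubs' neighborhoods (neighbors of neighbors, ...).
    expanded, frontier = set(hubs), set(hubs)
    while frontier:
        nxt = set(knn_indices[list(frontier)].ravel()) - expanded
        expanded |= nxt
        frontier = nxt

    enn = np.array(sorted(expanded - set(hubs)))          # expanded nearest neighbors
    outliers = np.array(sorted(set(range(n)) - expanded)) # never reached by expansion
    return np.array(hubs), enn, outliers
```

In this reading, a point that is never reached by recursively expanding the hubs' neighborhoods ends up in the outlier set, which is exactly the set that is only positioned (Section 4.4) but never optimized.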
}, { "heading": "5.1 DATASETS", "text": "We used four datasets for the experiments: Spheres, MNIST (LeCun & Cortes (2010)), Fashion MNIST (Xiao et al. (2017)), and Kuzushiji MNIST (Clanuwat et al. (2018)). Spheres is a synthetic dataset that has 10,000 rows of 101 dimensions. It has a high-dimensional structure in which ten small spheres are contained in a larger sphere. Specifically, the dataset’s first 5,000 rows are the points sampled from a sphere of radius 25 and 500 points are sampled for each of the ten smaller spheres of radius 5 shifted to a random direction from the origin. This dataset is the one used for the original experiment with topological autoencoders (Moor et al. (2020)). Other datasets are images of digits, fashion items, and Japanese characters, each of which consists of 60,000 784-dimensional (28 × 28) images from 10 classes." }, { "heading": "5.2 EXPERIMENTAL SETTING", "text": "Evaluation Metrics. To assess how well projections preserve the global structures of highdimensional data, we computed the density estimates (Chazal et al. (2011; 2017)), the so-called Distance To a Measure (DTM), between the original data and the projections. Moor et al. (2020) adopted the Kullback-Leibler divergence between density estimates with different scales (KLσ) to evaluate the global structure preservation. To follow the original experimental setup by Moor et al. (2020), we found the projections with the lowest KL0.1 from all algorithms by adjusting their hyperparameters. Next, to evaluate the local structure preservation of projections, we used the mean relative rank errors (MRREs, Lee & Verleysen (2007)), trustworthiness, and continuity (Venna & Kaski (2001)). All of these local quality metrics estimate how well the nearest neighbors in one space (e.g., high- or low-dimensional space) are preserved in the other space. For more information on the quality metrics, we refer readers to Appendix E.\nBaselines. We set the most widely used dimensionality reduction techniques as our baselines, including PCA, Isomap (Tenenbaum et al. (2000)), t-SNE (Maaten & Hinton (2008)),\nUMAP (McInnes et al. (2018)), and At-SNE (Fu et al. (2019)). In the case of t-SNE, we leveraged Multicore t-SNE (Ulyanov (2016)) for fast computation. To initialize the points’ position, we used PCA for t-SNE, following the recommendation in the previous work (Linderman et al. (2019)), and spectral embedding for UMAP which was set to default. In addition, we compared with topological autoencoders (Moor et al. (2020)) that were developed to capture the global properties of the data using a deep learning-based generative model. Following the convention of visualization in dimensionality reduction, we determined our result projected onto 2D space. We tuned the hyperparameters of each technique to minimize the KL0.1. Appendix F further describes the details of the hyperparameters settings." }, { "heading": "5.3 QUANTITATIVE RESULTS", "text": "Table 2 displays the experiment results. In most cases, UMATO was the only method that has shown performance both in the global and local quality metrics in most datasets. For local metrics, t-SNE, At-SNE, and UMAP generally had the upper-hand, but UMATO showed comparable MRREX and continuity in Spheres, Fashion MNIST, and Kuzushiji MNIST datasets. Meanwhile, Isomap and topological autoencoders were mostly good at global quality metrics, although UMATO had the lowest (best) KL0.1 and DTM except for the MNIST dataset." 
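The global quality metrics used above can be computed in a few lines. The sketch below is our illustrative reading of the definitions given in Appendix E: the density estimate f_sigma(x) = sum_y exp(-dist(x, y)^2 / sigma), the element-wise absolute difference of the min-max-normalized densities for DTM, and the KL divergence between the densities normalized to sum to one for KL_sigma. The normalization details of the authors' evaluation code may differ:

```python
import numpy as np
from scipy.spatial.distance import cdist

def density(A, sigma):
    """f_sigma(a) = sum_b exp(-||a - b||^2 / sigma) for every point a in A."""
    d2 = cdist(A, A, metric="sqeuclidean")
    return np.exp(-d2 / sigma).sum(axis=1)

def dtm_and_kl(X, Z, sigma=0.1):
    """X: high-dimensional data (n, D); Z: its low-dimensional embedding (n, 2)."""
    fX, fZ = density(X, sigma), density(Z, sigma)
    # DTM: element-wise absolute difference of the (0-1 normalized) densities.
    nX = (fX - fX.min()) / (fX.max() - fX.min())
    nZ = (fZ - fZ.min()) / (fZ.max() - fZ.min())
    dtm = np.abs(nX - nZ).sum()
    # KL_sigma: KL divergence between the densities normalized to sum to one.
    pX, pZ = fX / fX.sum(), fZ / fZ.sum()
    kl = np.sum(pX * np.log(pX / pZ))
    return dtm, kl

# Toy demonstration with synthetic data standing in for a dataset and its embedding.
rng = np.random.default_rng(0)
X = rng.normal(scale=0.1, size=(500, 10))
Z = X[:, :2]
print(dtm_and_kl(X, Z, sigma=0.1))
```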
}, { "heading": "5.4 QUALITATIVE RESULTS", "text": "Among the five algorithms, only UMATO could preserve both the global and local structure of the Spheres dataset. If we look at the figure made by UMATO, the outer sphere encircles the inner spheres in a circular form, which is the most intuitive to understand the relationship among different classes and the local linkage in detail. In the results from Isomap, t-SNE, UMAP, and At-SNE,\nthe points representing the surrounding giant sphere mix with those representing the other small inner spheres, thus failing to capture the nested relationships among different classes. Meanwhile, topological autoencoders are able to realize the global relationship between classes in an incomplete manner; the points for the outer sphere are too spread out, thus losing the local characteristics of the class. From this result, we can acknowledge how UMATO can work with high-dimensional data effectively to reveal both global and local structures. 2D visualization results on other datasets (MNIST, Fashion MNIST, Kuzushiji MNIST) can be found in Appendix H. Lastly, we report an additional experiment on the mouse neocortex dataset (Tasic et al. (2018)) in Appendix N which shows the relationship between classes much better than the baseline algorithms like t-SNE and UMAP." }, { "heading": "5.5 PROJECTION ROBUSTNESS OVER DIVERSE INITIALIZATION METHODS", "text": "We experimented with the robustness of each dimensionality reduction technique with different initialization methods such as PCA, spectral embedding, random position, and class-wise separation. In class-wise separation, we initialized each class with a non-overlapping random position in 2- dimensional space, adding random Gaussian noise. In our results, UMATO embeddings were almost the same on the real-world datasets, while the UMAP and t-SNE results relied highly upon the initialization method. We report this in Table 3 with a quantitative comparison using Procrustes distance. Specifically, given two datasets X = {x1, x2, . . . , xn} and Y = {y′1, y′2, . . . , y′n} where y′i corresponds to xi, the Procrustes distance is defined as\ndP (X,Y ) = √√√√ N∑ i=1 (xi − y′i). (8)\nFor all cases, we ran optimal translation, uniform scaling, and rotation to minimize the Procrustes distance between the two distributions. In the case of the Spheres dataset, as defined in Appendix G, the clusters were equidistant from each other. The embedding results have to be different due to the limitation of the 2-dimensional space since there is no way to express this relationship. However, as we report in Figure 4, the global and local structures of the Spheres data are manifested with UMATO with all different initialization methods." }, { "heading": "6 CONCLUSION", "text": "We present a two-phase dimensionality reduction algorithm called UMATO that can effectively preserve the global and local properties of high-dimensional data. In our experiments with diverse datasets, we have proven that UMATO can outperform previous widely used baselines (e.g., t-SNE and UMAP) both quantitatively and qualitatively. As future work, we plan to accelerate UMATO, as in previous attempts with other dimensionality reduction techniques (Pezzotti et al. (2019); Nolet et al. (2020)), by implementing it on a heterogeneous system (e.g., GPU) for speedups." 
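Equation 8 together with the optimal translation, uniform scaling, and rotation described above corresponds closely to what scipy.spatial.procrustes computes: it standardizes both point sets, finds the best orthogonal alignment, and returns a disparity equal to the remaining sum of squared pointwise differences. A small illustration (ours, with two synthetic embeddings standing in for two runs with different initializations) is given below:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
n = 500
# Two hypothetical 2-D embeddings of the same n points
# (e.g., produced with different initialization methods).
emb_a = rng.normal(size=(n, 2))
emb_b = emb_a @ np.array([[0.0, -1.0], [1.0, 0.0]]) + rng.normal(scale=0.05, size=(n, 2))

# procrustes() removes translation, uniform scale, and rotation, then
# reports the remaining sum of squared pointwise differences.
_, _, disparity = procrustes(emb_a, emb_b)
print(f"Procrustes disparity between the two embeddings: {disparity:.4f}")
```

A low disparity despite the 90-degree rotation and the added noise reflects that the comparison is invariant to exactly the transformations listed above.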
}, { "heading": "A UMATO ALGORITHM PSEUDOCODE", "text": "The pseudocode of UMATO is as below:\nAlgorithm 1 Uniform Manifold Approximation with Two-phase Optimization 1: procedure UMATO(X , kX , d, min dist, nh, eg , el) Input: High-dimensional data X , number of nearest neighbors k, projection dimension d, minimum distance in projection result min dist, number of hub points nh, epochs for global and local optimization eg , el Output: Low-dimensional projection Y 2: Compute k-nearest neighbors of X 3: Obtain sorted list using indices’ frequency of k-nearest neighbors 4: Build k-nearest neighbor graph structure 5: Classify points into hubs, expanded nearest neighbors, and outliers 6: Optimize CE(f(Xh)||g(Yh)) to preserve global configuration (Equation 4) 7: Initialize expanded nearest neighbors using hub locations 8: Update k-nearest neighbors & compute weights (Equation 3) 9: Optimize CE(f(X)||g(Y )) to preserve local configuration (Equation 7) 10: Position outliers 11: return Y 12: end procedure" }, { "heading": "B THE MEANING OF USING DIFFERENT LOSS FUNCTIONS IN DIMENSIONALITY REDUCTION", "text": "Following the notations from above, we set the similarity between points in high-dimensional space xi and xj as vij and the low-dimensional space yi and yj as wij . Then we can write the KL divergence and cross entropy loss function as:\nKL = ∑ i6=j vij · log(vij/wij), (9)\nCE = ∑ i 6=j vij · log(vij/wij)− ∑ i 6=j (1− vij) · log((1− vij)/(1− wij)). (10)\nIf we use the same probability distributions, vij and wij , the KL divergence and the first term of cross-entropy are exactly the same. If vij and wij are both large (Table 4 a.) or both small (Table 4 d.), this means that the relationship between points in high-dimensional space is well-retained in the projection. Thus, the positions of points in the low-dimensional space do not have to move. As vij and wij are similar, log(vij/wij) becomes zero, producing a small cost in the end.\nHowever, we need to modify the position of wij if there exists a gap between vij and wij . The KL divergence imposes a big penalty when vij is large but wij is small (Table 4 b.). That is, if the neighboring points in high-dimensional space are not captured well in the projection, the KL divergence imposes a high penalty to move the point (vij) into the right position. Thus, we can understand why t-SNE is able to capture the local characteristics of high-dimensional space, but not the global ones. However, the second term of cross-entropy imposes a big penalty when vij is small but wij is large (Table 4 g.). Therefore, it moves points that are close together in the highdimensional space but far apart in the projection." }, { "heading": "C MULTI-PHASE OPTIMIZATION", "text": "We report the result of multi-phase optimization (e.g., three and four-phase) using the Fashion MNIST dataset both quantitatively (Table 5) and qualitatively (Figure 5). As in Figure 5, we were unable to find any significant differences between the 2D projections, although some outliers were located in different places. Moreover, the quality metrics were almost the same for all three results. The original UMATO was the winner in DTM, KL0.1, KL1, continuity, and MRREZ but came last in other quality metrics. In addition, the gap in metrics between UMATO and the multi-phase optimizations indicated a trivial difference. Thus, we concluded that developing a multi-phase optimization for UMATO does not bring about any notable improvement in the projection result." 
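The asymmetry discussed in Appendix B is easy to see numerically. Writing the pointwise loss with the '+' convention, so that each term v*log(v/w) + (1-v)*log((1-v)/(1-w)) is a Bernoulli KL divergence and hence non-negative, the first summand blows up only when v is large and w is small (case b), while the second summand blows up only when v is small and w is large (case g). The probability values below are arbitrary illustrative choices of ours:

```python
import numpy as np

def kl_term(v, w):
    return v * np.log(v / w)

def ce_extra_term(v, w):
    return (1 - v) * np.log((1 - v) / (1 - w))

cases = {  # (v_ij, w_ij): similarity in high- / low-dimensional space
    "both large (a)":       (0.9, 0.9),
    "v large, w small (b)": (0.9, 1e-3),
    "v small, w large (g)": (1e-3, 0.9),
    "both small (d)":       (1e-3, 1e-3),
}
for name, (v, w) in cases.items():
    k, c = kl_term(v, w), ce_extra_term(v, w)
    print(f"{name:22s}  KL term: {k:8.3f}   extra CE term: {c:8.3f}   total: {k + c:8.3f}")
```

The printout shows that a KL-only objective assigns case g a near-zero cost, whereas the extra cross-entropy term contributes a cost of about 2.3; this is the mechanism that pulls apart points that are far in the original space but close in the projection.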
}, { "heading": "D PROJECTION STABILITY", "text": "Table 6 denotes the results of our experiment on the projection stability of UMATO and other dimensionality reduction techniques. When the data size grows, we want to sample a portion of it to speed up the visualization. However, the concern is whether the projection run with the sampled indices is consistent with the part of the projection result with indices selected from the full dataset. If the algorithm can generate stable and consistent results, the two projections should contain the least bias possible. To compute the projection stability of dimensionality reduction techniques, we\nused the normalized Procrustes distance (Equation 8) to measure the distance between two comparable distributions. To replicate the experiment by McInnes et al. (2018), we used the same Flow Cytometry dataset (Spidlen et al. (2012); Brodie et al. (2013)), and ran optimal translation, uniform scaling, and rotation to minimize the Procrustes distance between the two distributions. As we can see in Table 6, UMATO outperformed t-SNE and At-SNE for all sub-sample sizes. Moreover, although UMAP is known as stable among existing algorithms, UMATO showed even better (lower) Procrustes distance except for one sub-sample size (60%). From this result, we can acknowledge that UMATO can generate the more stable and consistent results regardless of sub-sample size than many other dimensionality reduction techniques." }, { "heading": "E QUALITY METRICS", "text": "As UMATO presents a dimensionality reduction technique that can capture both the global and local structures of high-dimensional data, we used several quality metrics to evaluate each aspect respectively. We have referred to some review papers (Gracia et al. (2014); Lee & Verleysen (2009)) for the best use and implementation. Among many quality metrics, we leveraged 1) Distance To a Measure (DTM, Chazal et al. (2011; 2017)), 2) KL divergence between two density functions, 3) trustworthiness and continuity (Venna & Kaski (2001)), and 4) mean relative rank errors (MRREs, Lee & Verleysen (2007)). The first two metrics are used to test the preservation of global structures and the last two metrics are suggested for the preservation of the local structures.\nDistance To a Measure considers the dispersion of high- and low-dimensional data, where it is defined for a given point as fXσ (x) := ∑ y∈X exp (−dist(x, y)2/σ). By summing up the element-\nwise absolute values between two distributions, ∑ x∈X,z∈Z f X σ (x)− fZσ (z) where x is the point in high-dimensional space X and the z is the corresponding projected point in low-dimensional space Z, we can examines the similarity of two datasets. In our experiments, we used the Euclidean distance and the values were normalized between 0 and 1. The σ ∈ R>0, which represents the length scale parameter, was set to 0.1. Moor et al. (2020) proposed the KL divergence of two probability distributions, KLσ := KL(fXσ ||fZσ ), a variation of DTM. Changing σ as a normalizing factor of the distribution, the authors investigated if the algorithms can preserve the global structure of the high-dimensional data. Following the same notion as the experiment in the paper (Moor et al. 
(2020)), we used three σ values, 1.0, 0.1, and 0.01, to test whether each algorithm can capture the global aspect with respect to diverse density estimates.\nTrustworthiness and continuity measure how much the nearest neighbors are preserved in a space (i.e., high- or low-dimensional space) compared to the other space by analyzing the ranks of knearest neighbors in both spaces. The difference between trustworthiness and continuity comes from which space is held as the base space. Specifically, we first need to find the k-nearest neighbors in both high- and low-dimensional space. Then, we compute the trustworthiness by checking whether the ranks of nearest neighbors in low-dimensional space resemble those of high-dimensional space. If so, we can achieve a high score in trustworthiness. Meanwhile, we achieve a high score in continuity if the ranks of nearest neighbors in high-dimensional space resemble those of low-dimensional space. MRREs take a similar approach to trustworthiness and continuity as it calculates and compares the ranks of the k-nearest neighbors in both spaces, but the normalizing factor is slightly different. Originally, it was better if MRREs had lower values. However, for the ease of comparing\nlocal quality metrics, we defined it as MRREs := 1 −MRREs, so higher MRREs mean that they have better retained the k-nearest neighbors like trustworthiness and continuity." }, { "heading": "F HYPERPARAMETER SETTING", "text": "As explained in Section 5.2, we generated projections for each dimensionality reduction algorithm that had the lowest KL0.1 measure. To tune each algorithm’s hyperparameters, we employed the grid search for t-SNE, UMAP, and At-SNE. For t-SNE and At-SNE, we changed the perplexity from 5 to 50 with an interval of 5, and the learning rate from 0.1 to 1.0 with a log-uniform scale. In the case of UMAP, we changed the number of nearest neighbors from 5 to 50 with an interval of 5, and the minimum distance between points in the projection from 0.1 to 1.0 with an interval of 0.1. We used the Python library scikit-optimize (Head et al. (2018)) to find the best hyperparameters for topological autoencoders. UMATO has several hyperparameters such as the number of hub points, the number of epochs, and the learning rate for global and local optimization. In our experiments, we configured everything except the number of hub points to the same setting for UMATO. We used 200 hub points for the Spheres dataset and had 300 hubs for others. We used fewer hub points for the Spheres since it has only 10,000 data points in total, while the other datasets have 60,000 data points. We set the number of epochs to 100 for global optimization and to 50 for local optimization. Lastly, the global learning rate was set to 0.0065, and the local learning rate was set to 0.01." }, { "heading": "G SYNTHETIC SPHERES DATASET", "text": "We leveraged the same Spheres dataset that Moor et al. (2020) used in their experiments of topological antoencoders. The Spheres dataset contains eleven high-dimensional spheres which reside in 101-dimensional space. We first generated ten spheres of radius of 5, and shifted each sphere by adding the same Gaussian noise to a random direction. For this aim, we created d-dimensional Gaussian vectors X ∼ N(0, I(10/ √ d)), where d is 101. As to embed an interesting geometrical structure to the dataset, the ten spheres of relatively small radii of 5 were enclosed by another larger sphere of radius of 25." 
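The construction in Appendix G is compact enough to restate as code. The sketch below is our reading of the description: points are drawn uniformly on each sphere by normalizing Gaussian vectors, the ten inner spheres of radius 5 (500 points each) are shifted by independent N(0, I * 10/sqrt(d)) vectors, the enclosing sphere of radius 25 contributes 5,000 points, and d = 101. The exact sampling routine of the original benchmark may differ in details:

```python
import numpy as np

def sample_sphere(n, d, radius, rng):
    """n points drawn uniformly from the (d-1)-sphere of the given radius."""
    g = rng.normal(size=(n, d))
    return radius * g / np.linalg.norm(g, axis=1, keepdims=True)

def make_spheres(d=101, n_inner=500, n_outer=5000, n_spheres=10, seed=42):
    rng = np.random.default_rng(seed)
    X, y = [sample_sphere(n_outer, d, 25.0, rng)], [np.zeros(n_outer, dtype=int)]
    for i in range(n_spheres):
        shift = rng.normal(scale=10.0 / np.sqrt(d), size=d)   # random shift direction
        X.append(sample_sphere(n_inner, d, 5.0, rng) + shift)
        y.append(np.full(n_inner, i + 1, dtype=int))
    return np.vstack(X), np.concatenate(y)

X, y = make_spheres()
print(X.shape)   # (10000, 101): 5,000 outer points plus 10 x 500 inner points
```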
}, { "heading": "H MORE EXPERIMENTS ON SYNTHETIC DATASETS", "text": "We leveraged the 3-dimensional S-curve and Swiss roll datasets to test whether UMATO can preserve both the global and local structures of original datasets. As the visualization shows (Figure 6),\nonly PCA and UMATO were able to capture the global and local structures of original datasets. Isomap, t-SNE and UMAP could capture the local manifolds of original datasets, but high-level manifolds of the original datasets were not reflected to the embedding." }, { "heading": "I LOCAL QUALITY METRICS WITH DIFFERENT VALUES OF HYPERPARAMETER", "text": "We report the result of local quality metrics with diverse hyperparameters (Table 7). We changed the number of nearest neighbors (k) from 5 to 15 with an interval of 5. As we have already reported when the k = 5 (Table 2), below are the cases where k = 10, 15. As we can check from the result, while the values fluctuate a little bit, the ranks are mostly robust over diverse k values.\nJ 2D EMBEDDING RESULTS OF UMATO AND BASELINE ALGORITHMS ON REAL-WORLD DATASETS\nFor the real-world datasets, UMATO showed a similar projection to PCA but with better captured local characteristics. The results from topological autoencoders showed some points detached far apart from their centers, even though the best hyperparameters were used for each. Although AtSNE claimed that it could capture both structures, the results were not significantly different from those of the original t-SNE algorithm when projecting the Spheres and Fashion MNIST datasets." }, { "heading": "K EMBEDDING ROBUSTNESS OVER NUMBER OF EPOCHS", "text": "We report the experimental result in Figure 8. As we explained, the UMAP embedding results are susceptible to the number of epochs so that the distance between clusters get dispersed. This can induce a misinterpretation that the user considers the distance between clusters as something meaningful. The two-phase optimization of UMATO can solve the problem since the global optimization (first phase) is easy to converge as it runs only with a small portion of points. Therefore, the increasing number of epochs in the global optimization does not harm the final embedding.\nL ILLUSTRATION OF UMATO PIPELINE\nFor the ease of understanding, we provide an illustration of UMATO pipeline in Figure 9. The detailed explanation for UMATO can be found in Section 4." }, { "heading": "M EFFECT OF LOCAL LEARNING RATE OF UMATO", "text": "By manipulating one of UMATO’s hyperparameters, local learning rate, the user can determine where to focus in the embedding result; to reveal more of the global structures, the user should apply a lower value (e.g., 0.005), while using a higher one (e.g., 0.1) would generate more like a UMAP embedding which prefers to show the local manifolds." }, { "heading": "N UMATO ON REAL-WORLD BIOLOGICAL DATASET", "text": "To test UMATO on the real-world biological dataset, we took professional advice from an expert who has a Ph.D. in Bioinformatics. We have run UMATO and the baseline algorithms (t-SNE, UMAP) on 23,822 single-cell transcriptomes from two areas at distant poles of the mouse neocortex (Tasic et al. (2018)). Each cell belongs to one of 133 clusters defined by Jaccard–Louvain clustering (for more than 4,000 cells) or a combination of k-means and Ward’s hierarchical clustering. 
Likewise, each cluster belongs to one of 4 classes: GABAergic (red/purple), Endothelial (brown), Glutamatergic (blue/green), Non-Neuronal (dark green).\nThe embedding result for each method is given in Figure 11. In the case of t-SNE, clusters are well-captured, but the classes are much dispersed, while UMAP adequately separates both classes and clusters. Compared to these baseline algorithms, UMATO is able to capture the relationship between classes much better, retaining some of the local manifolds as well. This means that UMATO focuses more on the manifold at a higher level than the baselines that the hub points worked as the representatives that explain well about the overall dataset. Moreover, there are cases in biological data analysis where the researchers want to know the distance between samples González-Blas et al. (2019); Van den Berge et al. (2020). As the UMAP embedding results are susceptible to the number of epochs, this may cause a negative impact to interpret the results accurately. On the other hand, as UMATO is robust over the number of epochs, we do not have to worry about such biases." } ]
2020
null
SP:fd70696898c5c725ad789565265274a37a6c2ca0
[ "This paper presents a reduction approach to tackle the optimization problem of constrained RL. They propose a Frank-Wolfe type algorithm for the task, which avoids many shortcomings of previous methods, such as the memory complexity. They prove that their algorithm can find an $\\epsilon$-approximate solution with $O(1/\\epsilon)$ invocation. They also show the power of their algorithm with experiments in a grid-world navigation task, though the tasks looks relatively simple." ]
Many applications of reinforcement learning (RL) optimize a long-term reward subject to risk, safety, budget, diversity or other constraints. Though constrained RL problem has been studied to incorporate various constraints, existing methods either tie to specific families of RL algorithms or require storing infinitely many individual policies found by an RL oracle to approach a feasible solution. In this paper, we present a novel reduction approach for constrained RL problem that ensures convergence when using any off-the-shelf RL algorithm to construct an RL oracle yet requires storing at most constantly many policies. The key idea is to reduce the constrained RL problem to a distance minimization problem, and a novel variant of Frank-Wolfe algorithm is proposed for this task. Throughout the learning process, our method maintains at most constantly many individual policies, where the constant is shown to be worst-case optimal to ensure convergence of any RL oracle. Our method comes with rigorous convergence and complexity analysis, and does not introduce any extra hyper-parameter. Experiments on a grid-world navigation task demonstrate the efficiency of our method.
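To make the reduction sketched in this abstract concrete, the toy example below runs the vanilla Frank-Wolfe iteration on a distance-minimization problem: the feasible region is the convex hull of a handful of "policy value vectors" (each vertex standing in for the long-term measurement vector of one policy an RL oracle could return), the target set is a box of constraint thresholds, and the objective is half the squared Euclidean distance to that box. The linear-minimization step over the vertices plays the role of an oracle call. This is a plain Frank-Wolfe sketch of ours for intuition only; the paper's variant additionally bounds the number of policies that must be stored, which the vanilla iteration does not:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_policies = 3, 6
# Each row: the long-term measurement vector achieved by one (fixed) policy.
V = rng.uniform(-1.0, 1.0, size=(n_policies, d))
lo, hi = -0.2 * np.ones(d), 0.2 * np.ones(d)      # constraint box (target set)

def project_box(x):                                # Euclidean projection onto the box
    return np.clip(x, lo, hi)

x = V[0].copy()                                    # start from one policy's value vector
weights = np.zeros(n_policies); weights[0] = 1.0   # mixture weights over policies

for t in range(200):                               # Frank-Wolfe iterations
    grad = x - project_box(x)                      # gradient of 0.5 * dist(x, box)^2
    s_idx = int(np.argmin(V @ grad))               # "oracle": best vertex for the linear problem
    gamma = 2.0 / (t + 2.0)
    x = (1 - gamma) * x + gamma * V[s_idx]
    weights *= (1 - gamma); weights[s_idx] += gamma

print("distance to constraint set:", np.linalg.norm(x - project_box(x)))
print("policy mixture weights:", np.round(weights, 3))
```

When the box intersects the convex hull, the distance shrinks toward zero and the final iterate is an explicit mixture over the policies selected along the way, which is the object a constrained RL reduction of this kind ultimately returns.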
[]
[ { "authors": [ "Jacob Abernethy", "Peter L Bartlett", "Elad Hazan" ], "title": "Blackwell approachability and no-regret learning are equivalent", "venue": "In Proceedings of the 24th Annual Conference on Learning Theory, pp", "year": 2011 }, { "authors": [ "Jacob D Abernethy", "Jun-Kun Wang" ], "title": "On frank-wolfe and equilibrium computation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alekh Agarwal", "Alina Beygelzimer", "Miroslav Dudı́k", "John Langford", "Hanna Wallach" ], "title": "A reductions approach to fair classification", "venue": "arXiv preprint arXiv:1803.02453,", "year": 2018 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov decision processes, volume 7", "venue": "CRC Press,", "year": 1999 }, { "authors": [ "Gabriel Barth-Maron", "Matthew W Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Dhruva Tb", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributed distributional deterministic policy gradients", "venue": "arXiv preprint arXiv:1804.08617,", "year": 2018 }, { "authors": [ "Amir Beck", "Shimrit Shtern" ], "title": "Linearly convergent away-step conditional gradient for non-strongly convex functions", "venue": "Mathematical Programming,", "year": 2017 }, { "authors": [ "Craig Boutilier", "Tyler Lu" ], "title": "Budget allocation using weakly coupled, constrained markov decision processes", "venue": null, "year": 2016 }, { "authors": [ "Deeparnab Chakrabarty", "Prateek Jain", "Pravesh Kothari" ], "title": "Provable submodular minimization using wolfe’s algorithm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Minmin Chen", "Alex Beutel", "Paul Covington", "Sagar Jain", "Francois Belletti", "Ed H Chi" ], "title": "Top-k off-policy correction for a reinforce recommender system", "venue": "In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining,", "year": 2019 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh" ], "title": "Algorithms for cvar optimization in mdps", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh", "Lucas Janson", "Marco Pavone" ], "title": "Risk-constrained reinforcement learning with percentile risk criteria", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "A lyapunovbased approach to safe reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Jesús A De Loera", "Jamie Haddock", "Luis Rademacher" ], "title": "The minimum euclidean-norm point in a convex polytope: Wolfe’s combinatorial algorithm is exponential", "venue": "In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2018 }, { "authors": [ "Miroslav Dudı́k", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and learning", "venue": "arXiv preprint arXiv:1103.4601,", "year": 2011 }, { "authors": [ "Marguerite Frank", "Philip Wolfe" ], "title": "An algorithm for quadratic programming", "venue": "Naval research logistics quarterly,", "year": 1956 }, { "authors": [ "Yoav Freund", "Robert E Schapire" ], "title": "Adaptive game playing using multiplicative weights", "venue": "Games and Economic Behavior,", "year": 1999 }, { "authors": [ "Scott Fujimoto", "Herke Van Hoof", "David 
Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Dan Garber", "Elad Hazan" ], "title": "A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization", "venue": "arXiv preprint arXiv:1301.4666,", "year": 2013 }, { "authors": [ "Dan Garber", "Elad Hazan" ], "title": "Playing non-linear games with linear oracles", "venue": "IEEE 54th Annual Symposium on Foundations of Computer Science,", "year": 2013 }, { "authors": [ "Peter Geibel", "Fritz Wysotzki" ], "title": "Risk-sensitive reinforcement learning applied to control under constraints", "venue": "Journal of Artificial Intelligence Research,", "year": 2005 }, { "authors": [ "Elad Hazan", "Alexander Rakhlin", "Peter L Bartlett" ], "title": "Adaptive online gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Martin Jaggi" ], "title": "Revisiting frank-wolfe: Projection-free sparse convex optimization", "venue": "In Proceedings of the 30th international conference on machine learning,", "year": 2013 }, { "authors": [ "Nan Jiang", "Lihong Li" ], "title": "Doubly robust off-policy value evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Simon Lacoste-Julien", "Martin Jaggi" ], "title": "On the global linear convergence of frank-wolfe optimization variants", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Hoang Le", "Cameron Voloshin", "Yisong Yue" ], "title": "Batch policy learning under constraints", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Sobhan Miryoosefi", "Kianté Brantley", "Hal Daume III", "Miro Dudik", "Robert E Schapire" ], "title": "Reinforcement learning with convex constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "BF Mitchell", "Vladimir Fedorovich Dem’yanov", "VN Malozemov" ], "title": "Finding the point of a polyhedron closest to the origin", "venue": "SIAM Journal on Control,", "year": 1974 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Santiago Paternain", "Luiz Chamon", "Miguel Calvo-Fullana", "Alejandro Ribeiro" ], "title": "Constrained reinforcement learning has zero duality gap", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Doina Precup" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "Computer Science Department Faculty Publication Series, pp", "year": 2000 }, { 
"authors": [ "Doina Precup", "Richard S Sutton", "Sanjoy Dasgupta" ], "title": "Off-policy temporal-difference learning with function approximation", "venue": "In ICML, pp", "year": 2001 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Chen Tessler", "Daniel J Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "arXiv preprint arXiv:1509.06461,", "year": 2015 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Hasselt", "Marc Lanctot", "Nando Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Philip Wolfe" ], "title": "Convergence theory in nonlinear programming", "venue": "Integer and nonlinear programming,", "year": 1970 }, { "authors": [ "Philip Wolfe" ], "title": "Finding the nearest point in a polytope", "venue": "Mathematical Programming,", "year": 1976 }, { "authors": [ "Shuai Xiao", "Le Guo", "Zaifan Jiang", "Lei Lv", "Yuanbo Chen", "Jun Zhu", "Shuang Yang" ], "title": "Model-based constrained mdp for budget allocation in sequential incentive marketing", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Xiangyu Zhao", "Liang Zhang", "Long Xia", "Zhuoye Ding", "Dawei Yin", "Jiliang Tang" ], "title": "Deep reinforcement learning for list-wise recommendations", "venue": "arXiv, pp", "year": 2017 }, { "authors": [ "Guanjie Zheng", "Fuzheng Zhang", "Zihan Zheng", "Yang Xiang", "Nicholas Jing Yuan", "Xing Xie", "Zhenhui Li" ], "title": "Drn: A deep reinforcement learning framework for news recommendation", "venue": "In Proceedings of the 2018 World Wide Web Conference,", "year": 2018 }, { "authors": [ "Martin Zinkevich" ], "title": "Online convex programming and generalized infinitesimal gradient ascent", "venue": "In Proceedings of the 20th international conference on machine learning (icml-03),", "year": 2003 }, { "authors": [ "Lixin Zou", "Long Xia", "Zhuoye Ding", "Jiaxing Song", "Weidong Liu", "Dawei Yin" ], "title": "Reinforcement learning to optimize long-term user engagement in recommender systems", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Contemporary approaches in reinforcement learning (RL) largely focus on optimizing the behavior of an agent against a single reward function. RL algorithms like value function methods (Zou et al., 2019; Zheng et al., 2018) or policy optimization methods (Chen et al., 2019; Zhao et al., 2017) are widely used in real-world tasks. This can be sufficient for simple tasks. However, for complicated applications, designing a reward function that implicitly defines the desired behavior can be challenging. For instance, applications concerning risk (Geibel & Wysotzki, 2005; Chow & Ghavamzadeh, 2014; Chow et al., 2017), safety (Chow et al., 2018) or budget (Boutilier & Lu, 2016; Xiao et al., 2019) are naturally modelled by augmenting the RL problem with orthant constraints. Exploration suggestions, such as to visit all states as evenly as possible, can be modelled by using a vector to measure the behavior of the agent, and to find a policy whose measurement vector lies in a convex set (Miryoosefi et al., 2019).\nTo solve RL problem under constraints, existing methods either ensure convergence only on a specific family of RL algorithms, or treat the underlying RL algorithms as a black box oracle to find individual policy, and look for mixed policy that randomizes among these individual policies. Though the second group of methods has the advantage of working with arbitrary RL algorithms that best suit the underlying problem, existing methods have practically infeasible memory requirement. To get an -approximate solution, they require storing O(1/ ) individual policies, and an exact solution requires storing infinitely many policies. This limits the prevalence of such methods, especially when the individual policy uses deep neural networks.\nIn this paper, we propose a novel reduction approach for the general convex constrained RL (C2RL) problem. Our approach has the advantage of the second group of methods, yet requires storing at most constantly many policies. For a vector-valued Markov Decision Process (MDP) and any given target convex set, our method finds a mixed policy whose measurement vector lies in the target convex set, using any off-the-shelf RL algorithm that optimizes a scalar reward as a RL oracle. To do so, the C2RL problem is reduced to a distance minimization problem between a polytope and a convex set, and a novel variant of Frank-Wolfe type algorithm is proposed to solve this distance minimization problem. To find an -approximate solution in an m-dimensional vector-valued MDP,\nour method only stores at most m + 1 policies, which improves from infinitely many O(1/ ) (Le et al., 2019; Miryoosefi et al., 2019) to a constant. We also show this m + 1 constant is worstcase optimal to ensure convergence of RL algorithms using deterministic policies. Moreover, our method introduces no extra hyper-parameter, which is favorable for practical usage. A preliminary experimental comparison demonstrates the performance of the proposed method and the sparsity of the policy found." }, { "heading": "2 RELATED WORK", "text": "For high dimensional constrained RL, one line of approaches incorporates the constraint as a penalty signal into the reward function, and makes updates in a multiple time-scale scheme (Tessler et al., 2018; Chow & Ghavamzadeh, 2014). When used with policy gradient or actor-critic algorithms (Sutton & Barto, 2018), this penalty signal guides the policy to converge to a constraint satisfying one (Paternain et al., 2019; Chow et al., 2017). 
However, the convergence guarantee requires the RL algorithm can find a single policy that satisfies the constraint, hence ruling out methods that search for deterministic policies, such as Deep Q-Networks (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) and their variants (Van Hasselt et al., 2015; Wang et al., 2016; Fujimoto et al., 2018; Barth-Maron et al., 2018).\nAnother line of approaches uses a game-theoretic framework, and does not tie to specific families of RL algorithm. The constrained problem is relaxed to a zero-sum game, whose equilibrium is solved by online learning (Agarwal et al., 2018). The game is played repeatedly, each time any RL algorithm can be used to find a best response policy to play against a no-regret online learner. The mixed policy that uniformly distributed among all played policies can be shown to converge to an optimal policy of the constrained problem (Freund & Schapire, 1999; Abernethy et al., 2011). Taking this approach, Le et al. (2019) uses Lagrangian relaxation to solve the orthant constraint case, and Miryoosefi et al. (2019) uses conic duality to solve the convex constraint case. However, since the convergence is established by the no-regret property, the policy found by these methods requires randomization among policies found during the learning process, which limits their prevalence.\nDifferent from the game-theoretic approaches, we reduce the C2RL to a distance minimization problem and propose a novel variant of Frank-Wolfe (FW) algorithm to solve it. Our result builds on recent finding that the standard FW algorithm emerges as computing the equilibrium of a special convex-convave zero sum game (Abernethy & Wang, 2017). This connects our approach with previous approaches from game-theoretic framework (Agarwal et al., 2018; Le et al., 2019; Miryoosefi et al., 2019). The main advantage of our reduction approach is that the convergence of FW algorithm does not rely on the no-regret property of an online learner. Hence there is no need to introduce extra hyper-parameters, such as learning rate of the online learner, and intuitively, we can eliminate unnecessary policies to achieve better sparsity. To do so, we extend Wolfe’s method for minimum norm point problem (Wolfe, 1976) to solve our distance minimization problem. Throughout the learning process, we maintain an active policy set, and constantly eliminate policies whose measurement vector are affinely dependent of others. Unlike norm function in Wolfe’s method, our objective function is not strongly convex. Hence we cannot achieve the linear convergence of Wolfe’s method as shown in Lacoste-Julien & Jaggi (2015). Instead, we analyze the complexity of our method based on techniques from Chakrabarty et al. (2014). A theoretical comparison between our method and various approaches in constrained RL is provided in Table 1." }, { "heading": "3 PRELIMINARIES", "text": "A vector-valued Markov decision process can be identified by a tuple {S,A, β, P, c}, where S is a set of states,A is the set of actions and β is the initial state distribution. At the start of each episode, an initial state s0 is drawn following the distribution β. Then, at each step t = 0, 1, . . . , the agent observes a state st ∈ S and makes a decision to take an action at. After at is chosen, at the next observation the state evolves to state st+1 ∈ S with probability P (st+1|st, at). 
However, instead of a scalar reward, in our setting, the agent receives an m-dimensional vector ct ∈ Rm that may implicitly contain measurements of reward, risk or violation of other constraints. The episode ends after a certain number of steps, called the horizon, or when a terminate state is reached.\nActions are typically selected according to a policy π, where π(s) is a distribution over actions for any s ∈ S . Policies that take a single action for any state are deterministic policies, and can be identified by the mapping π : S 7→ A. The set of all deterministic policies is denoted by Π. For a discount factor γ ∈ [0, 1), the discounted long-term measurement vector of a policy π ∈ Π is defined as\nc(π) := E( T∑ t=0 γtct(st, π(st))), (1)\nwhere the expectation is over trajectories generated by the described random process.\nUnlike unconstrained setting, for a constrained RL problem, it is possible that all feasible policies are non-deterministic (see Appendix D for an example). This limits the usage of RL algorithms that search for deterministic policies in the setting of constrained RL problem.\nOne workaround is to use mixed policies. For a set of policies U , a mixed policy is a distribution over U , and the set of all mixed policies over U is denoted by ∆(U). To execute a mixed policy µ ∈ ∆(U), we first select a policy π ∈ U according to π ∼ µ(π), and then execute π for the entire episode. Altman (1999) shows that any c(·) achievable can be achieved by some mixed deterministic policies µ ∈ ∆(Π). Therefore, though an off-shelves RL algorithm may not converge to any constraint-satisfying policy, it can be used as a subroutine to find individual policies (possibly deterministic), and a randomization among these policies can converge to a feasible policy. The discounted long-term measurement vector of a mixed policy µ ∈ ∆(Π) is defined similarly\nc(µ) := Eπ∼µ(c(π)) = ∑ π∈Π µ(π)c(π). (2)\nFor a mixed policy µ ∈ ∆(U), its active set is defined to be the set of policies with non-zero weights A := {π ∈ U|µ(π) > 0}. The memory requirement of storing µ, is then proportional to the size of its active set. Since a mixed policy can be interpreted as a convex combination of policies in its active set, in the following, the term sparsity of a mixed policy refers to the sparsity of this combination.\nOur learning problem, the convex constrained reinforcement learning (C2RL), is to find a policy whose expected long-term measurement vector lies in a given convex set; i.e., for a given convex target set C ⊂ Rm, our target is to\nfind µ∗ such that c(µ∗) ∈ Ω (C2RL). (3) Any policy µ∗ that satisfies c(µ∗) ∈ Ω is called a feasible policy, and a C2RL problem is feasible if there exists some feasible policies. In the following, we assume the C2RL problem is feasible." }, { "heading": "4 APPROACH, ALGORITHM AND ANALYSIS", "text": "We now show how the C2RL (3) can be reduced to a distance minimization problem (7) between a polytope and a convex set. A novel variant of Frank-Wolfe-type algorithm is then proposed to solve the distance minimization problem, followed by theoretic analysis about convergence and sparsity of the proposed method." }, { "heading": "4.1 REDUCE C2RL TO A DISTANCE MINIMIZATION PROBLEM", "text": "Let ||·|| denote the Euclidean norm. For a convex set Ω ∈ Rm, let ProjΩ(x) ∈ arg miny∈Ω ||x−y|| be the projection operator, and dist2(x,Ω) := 12 ||x−ProjΩ(x)|| 2 be half of the squared Euclidean\ndistance function. 
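To make Eqs. (1)–(3) concrete, the following Python sketch estimates the discounted measurement vector of a single policy by Monte Carlo rollouts and forms the value of a mixed policy as the weighted combination in Eq. (2). The `env` interface (a `step` returning the next state, an m-dimensional measurement vector, and a done flag) and the policy callables are illustrative assumptions, not part of the paper.

```python
import numpy as np

def estimate_c(policy, env, gamma=0.99, n_episodes=100, horizon=300):
    """Monte Carlo estimate of c(pi) = E[sum_t gamma^t c_t] from Eq. (1)."""
    total = None
    for _ in range(n_episodes):
        s = env.reset()
        disc, ep_sum = 1.0, 0.0
        for _ in range(horizon):
            s, c_vec, done = env.step(policy(s))   # c_vec: m-dimensional measurement
            ep_sum = ep_sum + disc * np.asarray(c_vec, dtype=float)
            disc *= gamma
            if done:
                break
        total = ep_sum if total is None else total + ep_sum
    return total / n_episodes

def mixed_policy_value(weights, c_values):
    """c(mu) = sum_pi mu(pi) c(pi) from Eq. (2); `weights` must sum to one."""
    weights = np.asarray(weights, dtype=float)
    return weights @ np.asarray(c_values, dtype=float)   # (k,) @ (k, m) -> (m,)
```

With these helpers, checking feasibility of a candidate mixed policy amounts to testing whether `mixed_policy_value(weights, c_values)` lies in the target set Ω.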
Then we consider the problem to find a policy whose measurement vector is closest to the target convex set,\narg min µ∈∆(Π)\ndist2(c(µ),Ω). (4)\nA policy µ∗ ∈ ∆(Π) is defined to be an optimal solution if it minimizes (4). Otherwise, the approximation error of µ ∈ ∆(Π) is defined as\nerr(µ) := dist2(c(µ),Ω)− dist2(c(µ∗),Ω) (Approximation Error) (5) where µ∗ is an optimal solution, and a policy is defined to be an -approximate solution if its approximation error is no larger than .\nWhen C2RL (3) is feasible, the equivalence of being optimal to (4) and being feasible to C2RL can be easily established. Since a feasible policy of C2RL problem lies inside Ω, it minimizes the non-negative dist2 function, and hence is optimal to (4). Vice versa, any optimal solution to (4) lies inside Ω and is a feasible solution to C2RL.\nFrom a geometric perspective, let c(Π) := {c(π)|π ∈ Π} be the set of all values achievable by deterministic policies. If the MDP has finite states and actions (though may be extremely large), then Π is finite as well, and hence c(Π) contains finitely many points in Rm. Then the set of values achievable by mixed deterministic policies\nc(∆(Π)) := {c(µ)|µ ∈ ∆(Π)} = { ∑ π µ(π)c(π) | ∑ π µ(π) = 1, µ(π) ≥ 0} ⊂ Rm (6)\nis the convex hull of c(Π); i.e., c(∆(Π)) is a m-dimension polytope whose vertices are c(Π). Therefore finding a policy whose value is closest to the target convex set (4) is equivalent to find a point in the polytope c(∆(Π)) that is closest to the convex set Ω\narg min c(µ)∈c(∆(Π))\ndist2(c(µ),Ω) (Distance minimization problem). (7)\nTo solve this constrained optimization problem, it might be tempting to consider projection methods. However, constructing a projection operator for c(∆(Π)) is non-trivial. For any given measurement vector, it is obscure how to modify a general RL algorithm to update the parameters such that the discounted expected measurement vector is closest to the given value. Therefore, projection-free methods are preferable for this task.\nFrank-Wolfe (FW) algorithm does not require any projection operation, instead it uses a linear minimizer oracle. Intuitively, finding a linear minimizer is similar to the reward maximization process of what a general RL algorithm does. In section 4.3, we formalize this idea. We show that after simple modifications, any RL algorithm that maximizes a scalar reward can be used to construct such a linear minimizer oracle. Before getting into details of the construction process, we discuss FW-type algorithms over polytope and its applications in the distance minimization problem (7)." }, { "heading": "4.2 DISTANCE MINIMIZATION BY FRANK-WOLFE-TYPE ALGORITHMS", "text": "The Frank-Wolfe algorithm (FW) is a first-order method to minimize a convex function f : P 7→ R over a compact and convex set P , with only access to a linear minimizer oracle. When the feasible set is a polytope P := conv({s1, s2, . . . , sn}) ⊂ Rm defined as the convex hull of finitely many points, FW-type algorithms are discussed by Lacoste-Julien & Jaggi (2015) to optimize\nmin x∈P f(x) using Oracle(v) := arg min s∈{s1,...,sn}\nsTv. 
(8)\nThe standard FW (Algorithm 2 in Appendix A.1) consists of making repeated calls to the linear minimizer oracle to find an improving point s, followed by a convex averaging step of the current iterate xt−1 and the oracle’s output s.\nIf we have already constructed a RL oracle(λ) that outputs a policy π ∈ arg minπ∈Π λT c(π) together with its measurement vector c(π), then the distance minimizing problem (7) can be solved with standard FW by using\nπ, c(π)← RL oracle(∇dist2(xt−1,Ω)) = RL oracle(xt−1 − ProjΩ(xt−1)) (9)\nAlgorithm 1 Convex Constrained Reinforcement Learning (C2RL) Input. RL Oracle constructed by any RL algorithm, projection operator to target set ProjΩ. Initialize. Random policy π, value x = c(π), active sets Sp := [π],Sc := [x] and weight λ = [1]. Output. Mixed policy µ and its value c(µ) s.t. c(µ) minimizes the distance to the target set Ω.\n1: while true do // Major cycle 2: ω ← ProjΩ(x) 3: π, c(π)← RL Oracle(x− ω) // Potential improving point 4: if (x− ω)T (x− c(π)) ≤ then break 5: if Sc ∪ {c(π)} is affinely independent then Sc ← Sc ∪ {c(π)},Sp ← Sp ∪ {π} 6: while true do // Minor cycle 7: y,α← AffineMinimizer(Sc,ω) // y = arg mins∈aff(Sc) ||s− ω||2 8: if αs > 0 for all s then break // y ∈ conv(Sc) 9: // If y /∈ conv(Sc), then update y to the intersection of conv(Sc) and segment joining x\nand y. Then remove points in Sc unnecessary for describing y. 10: θ ← mini:αi≤0 λiλi−αi // Recall λ satisfies x = ∑ s∈Sc λss" }, { "heading": "11: y ← θy + (1− θ)x, λi = θαi + (1− θ)λi", "text": "12: Sc ← {c(πi)|c(πi) ∈ Sc and λi > 0},Sp ← {πi|πi ∈ Sp and λi > 0} 13: end while 14: Update µ← ∑ π∈Sp λππ, x← y, λ← α. 15: end while 16: return µ, c(µ)← x\nto find an improving policy and its measurement vector. For ηt := 2t+2 , the convex averaging steps\nµt ← (1− ηt)µt−1 + ηtπ, xt ← (1− ηt)xt−1 + ηtc(π), (10) then maintain the mixed policy, and the corresponding measurement vector, respectively.\nHowever, after T rounds of iteration, the µt found has an active set containing up to T individual polices, and is not sparse enough. If neural networks are used to parameterize the policy, that requires storing T copies of parameters for the individual network, which is unaffordable for largescale usage.\nTo find even more sparse policies, we turn to variants of FW-type algorithms. In particular, Wolfe’s method for minimum norm point in a polytope (Wolfe, 1976; De Loera et al., 2018). In Wolfe’s method (Algorithm 3 in Appendix A.2), the loop in FW is called a major cycle, and the convex averaging step is replaced by a weight optimization process, called minor cycle. Wolfe’s method maintains an active set S, and the current point can be represented by a sparse combination of points in the active set. The minor cycles maintain S to be an affinely independent set such that the affine minimizer is inside St, which Wolfe calls corrals. Recall an affine minimizer is defined as arg mins∈aff(S) ||s||2, where aff(S) := {y|y = ∑ z∈S α T zx, ∑ z∈S αz = 1} is the affine hull formed by S. Since the active set is affinely independent, the number of active atoms is at most m + 1 at any time. Wolfe’s method is shown to strictly decrease the approximation error between two major cycles." }, { "heading": "4.3 OUR MAIN ALGORITHM", "text": "The main obstacle to apply Wolfe’s method to our distance minimization problem (7) is that the objective function in Wolfe’s method is the norm function. However, in our problem, the objective function is the distance function to a convex set. 
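Before developing the modification of Wolfe's method below, it may help to see the plain Frank-Wolfe reduction of Eqs. (9)–(10) as code. This is only a sketch under assumed interfaces: `rl_oracle` returns a best-response policy and its measurement vector for a given linear objective, and `proj_onto_target` implements ProjΩ; neither is specified by the paper at this level of detail.

```python
import numpy as np

def c2rl_standard_fw(rl_oracle, proj_onto_target, init_policy, init_c, n_iters=300):
    """Plain Frank-Wolfe variant of Eqs. (9)-(10): no minor cycles, so one new
    individual policy is appended to the mixture at every iteration."""
    policies, weights = [init_policy], [1.0]
    x = np.asarray(init_c, dtype=float)          # current value c(mu)
    for t in range(1, n_iters + 1):
        grad = x - proj_onto_target(x)           # gradient of dist^2(x, Omega), Eq. (9)
        pi, c_pi = rl_oracle(grad)               # best response to the linearized objective
        eta = 2.0 / (t + 2.0)
        weights = [(1.0 - eta) * w for w in weights] + [eta]
        policies.append(pi)
        x = (1.0 - eta) * x + eta * np.asarray(c_pi, dtype=float)   # Eq. (10)
    return policies, weights, x
```

The growing `policies` list makes the sparsity problem explicit: every iteration appends one individual policy, which is exactly what the minor cycles of Algorithm 1 are designed to avoid.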
Unlike the norm function, the distance function to a convex set is not strongly convex and affine minimizer is ill-defined with respect to a convex set. To tackle these problems, we modify the Wolfe’s method. At the core of our new variant of FW algorithm, we add a projection step to Wolfe’s method.\nProjection Step In each major cycle, we minimize the distance to a projected pointω := ProjΩ(x). Intuitively, since the distance to the convex set is upper bounded by the distance to this projected point ω, if the distance to ω converges, so does the distance to the target convex set.\nFormally, for a set of points S ⊂ Rm, and a point x ∈ Rm, we extend the definition of an affine minimizer to define affine minimizer with respect to x as arg mins∈aff(S) ||s − x||2. For x being\nthe affine minimizer of S with respect to ω, the extended affine minimizer property gives\nGiven ω,∀v ∈ aff(S), (v − x)T (x− ω) = 0 (Extended affine minimizer property) (11) Similar to Wolfe’s method, our C2RL method (Algo. 1) contains an outer loop (called major cycle) to find improving policies and their measurement vectors, and an inner loop (called minor cycle) to maintain the affinely independent property of the active set Sc. At the start of each major cycle step, the Sc is an affinely independent set. Then, the RL oracle (defined in (15)) finds a potential improving policy π ∈ U , and its long-term measurement vector c(π). If the c(π) does not get strictly closer to the ω := Proj(x), then we are done, and x is the optimal value. Otherwise, the c(π) is added into the active set, and the minor cycle is run to eliminate policies whose measurement vectors are affinely dependent.\nLine 6 to line 13 contains the minor cycle, which is the same as the original Wolfe’s method (except in line 6, we find affine minimizer with respect to ω). The elimination is executed as a series of affine projections. The minor cycle terminates if active set Sc is affinely independent. Though the interleaving of major and minor cycles oscillate the size of active set Sc, the minor cycles keep |Sc| an affinely independent set, and is terminated whenever Sc contains a single element. Therefore at the start of any major cycle, the size of the active set satisfies |Sc| ∈ [0,m + 1]. More background about the minor cycle in Wolfe’s method is provided in Appendix A.2.\nConstruction of RL Oracle The construction of our RL oracle can use any off-the-shelf RL algorithm that maximizes a scalar reward. For any given λ ∈ Rm, we define any algorithm that finds a policy minimizing the linear function λT c(·) as a RL oracle, that is\nRL oraclep(λ) ∈ arg min π∈Π λT c(π). (12)\nRecall that standard RL algorithm receives a scalar reward after each state transition, instead of the long-term measurement vector c(π) ∈ Rm. We then use the following linear property to reformulate the right hand side of (12) to a standard RL problem\narg min π∈Π λT c(π) = arg min π∈Π λTE( T∑ t=0 γtct) = − arg max π∈Π E( T∑ t=0 γt(−λT ct)). (13)\nThis shows that if we consider the Markov decision process with the same state, action, and transition probability, and construct a scalar reward r := (−λT ct), then any policy that maximizes the expected r is a linear minimizer of (12). Therefore any RL algorithm that best suits the underlying problems can be used to construct a RL oracle.\nCertifying constraint satisfaction amounts to evaluate the measurement vector of the current policy. 
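A minimal sketch of the oracle construction in Eqs. (12)–(13): the vector-valued MDP is wrapped so that the scalar reward is r_t = −λᵀc_t, after which any off-the-shelf RL routine can be trained on it. The `vector_env`, `train_agent`, and `estimate_c` names are assumptions introduced for illustration.

```python
import numpy as np

class ScalarizedEnv:
    """Wraps a vector-valued MDP so that any scalar-reward RL algorithm can act
    as the oracle of Eq. (12): the reward is r_t := -lambda^T c_t (Eq. (13))."""
    def __init__(self, vector_env, lam):
        self.env, self.lam = vector_env, np.asarray(lam, dtype=float)
    def reset(self):
        return self.env.reset()
    def step(self, action):
        s, c_vec, done = self.env.step(action)      # c_vec: m-dimensional measurement
        r = -float(self.lam @ np.asarray(c_vec, dtype=float))
        return s, r, done

def rl_oracle(lam, vector_env, train_agent, estimate_c):
    """Returns (pi, c(pi)) as in Eq. (15). `train_agent` is any off-the-shelf
    scalar-reward RL routine; `estimate_c` evaluates the measurement vector."""
    pi = train_agent(ScalarizedEnv(vector_env, lam))   # maximizes E[sum_t gamma^t r_t]
    return pi, estimate_c(pi, vector_env)
```

Here `estimate_c` stands for whichever policy-evaluation procedure is available: direct simulation in online settings, or the off-policy estimators cited next.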
This is handy in online settings, where simulations can be used to evaluate the measurement vector of the policy directly. Otherwise, in batch settings, various off-policy evaluation methods, such as importance sampling (Precup, 2000; Precup et al., 2001) or doubly robust (Jiang & Li, 2016; Dudı́k et al., 2011), can be used to evaluate the policy.\nRL oraclec(λ) := c(arg min π∈Π λT c(π)) = arg min c(π),π∈Π λT c(π). (14)\nTo simplify notation, we assume a RL Oracle returns a policy as well as its measurement vector\nRL Oracle(λ) := π, c(π) = RL oraclep(λ), RL oraclec(λ) (15)\nFinding Extended Affine Minimizer The process AffineMinimizer(S,x) returns the (y,α) the affine minimizer of S with respect to x where y is the affine minimizer and α := {αs|∀s ∈ Sc} is the set of coefficient expressing y as an affine combination of points in S, that is y =∑\ns∈Sc αss, where αs is the weight associated with s. The process AffineMinimizer(S,x) can be straightforwardly implemented using linear algebra. Wolfe (1976) also provides a more efficient implementation that uses a triangular array representation of the active set." }, { "heading": "4.4 CONVERGENCE AND SPARSITY", "text": "In this section, we analyze the convergence and complexity of the proposed C2RL method (Algo. 1). We first show that approximation error of C2RL strictly decreases between any two major cycle steps\nand it converges in O(1/t) rate. Then we show our method ensures convergence of arbitrary RL algorithm, including those searching for deterministic policies. Moreover, concerning the memory complexity, we show that maintaining an active policy set ofm+1 is worst case optimal to ensure the convergence of arbitrary RL algorithm. Therefore, the proposed C2RL indeed achieves the optimal sparsity for the found policy, making it favorable for large-scale usage.\nThe main difference between the convergence analysis of C2RL and Wolfe’s method is the addition of the projection step. Intuitively, at each major step, if we are making a significant progress toward the projected point, then the distance to the convex set is decreased by at least the same amount.\nTime Complexity. In our analysis, we consider the approximation error as defined in (5). We use superscript t to denote the variable in t-th major cycle before executing any minor cycle. To simplify notions, we let xt := c(µt) and st := c(πt). When discussing one step with t fixed, let yi denote the affine minimizer found in i-th minor cycle (line 6 of Algo. 1). We first show that the C2RL method strictly reduces approximation error between two calls of the RL oracle.\nTheorem 4.1 (Approximation Error Strictly Decreases). For any non-terminal step t, we have err(µt+1) < err(µt). That is, the measurement vector of µt found by the C2RL method gets strictly closer to the convex set Ω after major cycle step.\nThe proof is provided in Appendix B. The idea is to consider the distance between xt and ωt. When the major cycle has no minor cycle, the non-terminal condition and the affine minimizer property implies dist2(xt+1,ωt) < dist2(xt,ωt). Otherwise we show that the first minor cycle strictly reduces the dist2(xt,ωt) by moving along the segment joining x and y, and the subsequent minor cycle cannot increase it. Since ωt ∈ Ω, we conclude err(xt+1) ≤ dist2(xt+1, ωt) < dist2(xt, ωt) = err(xt), and the approximation error strictly decreases.\nGiven the approximation error strictly decreases, Wolfe’s method for minimum norm point can be shown to terminate finitely (Wolfe, 1976). 
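As an aside to the convergence discussion, the AffineMinimizer(S, x) subroutine mentioned above — "straightforwardly implemented using linear algebra" — can be sketched as a least-squares solve; Wolfe's triangular-array implementation would be more efficient, but is omitted here.

```python
import numpy as np

def affine_minimizer(S, x):
    """AffineMinimizer(S, x): the point y in aff(S) closest to x, together with
    the affine coefficients alpha (summing to one) such that y = sum_i alpha_i S[i]."""
    S = np.asarray(S, dtype=float)        # shape (k, m): k points in R^m
    x = np.asarray(x, dtype=float)
    base = S[0]
    B = (S[1:] - base).T                  # shape (m, k-1): directions spanning aff(S)
    if B.size == 0:
        return base.copy(), np.array([1.0])
    beta, *_ = np.linalg.lstsq(B, x - base, rcond=None)
    y = base + B @ beta
    alpha = np.concatenate(([1.0 - beta.sum()], beta))
    return y, alpha
```

The returned coefficients sum to one by construction, so the minor cycle's check "αs > 0 for all s" in Algorithm 1 can be applied to them directly.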
However, this finitely terminating property does not hold for our algorithm. Since a changed ωt may yield a lower distance to the same active set Stc, the active set may stay unchanged across major cycles (see Figure 2 Middle for an example). Therefore we establish the convergence of the C2RL method by the following theorem.\nTheorem 4.2 (Convergence in Approximation Error). For t ≥ 1, the mixed policy µt found by the C2RL method satisfies\nerr(µt) ≤ 16Q2/(t+ 2), (16)\nwhere Q := maxµ∈∆(U) ||c(µ)|| is the maximum norm of a measurement vector.\nThe proof is provided in Appendix C, which relies on the following two lemmas. We briefly discuss the main idea here. Define major cycle steps with at most one minor cycle as “non-drop steps” and major cycle steps with more than one minor cycle as “drop steps”. We show that in each non-drop step, Algorithm 1 is guaranteed to make enough progress in the following lemma.\nLemma 4.3. For a non-drop step in the C2RL method, we have err(µt)−err(µt+1) ≥ err2(µt)/8Q2.\nThough this does not hold for drop steps, we can bound the frequency of drop steps by the following.\nLemma 4.4. After t major cycle steps of the C2RL method, the number of drop steps is less than t/2.\nSince the approximation error strictly decreases (Thm. 4.1), and in more than half of the major cycle steps the C2RL method makes significant progress, Theorem 4.2 can then be proved using an induction argument (Appendix C).\nConvergence with Arbitrary RL Algo. The convergence of the C2RL method when used with RL algorithms that search for deterministic policies, such as DQN, DDPG and variants, is indeed straightforward. In (8), though each time the oracle yields a vertex, the FW-type algorithms indeed optimize over the polytope formed by these vertices. Then, since Altman (1999) shows that any achievable c(·) can be achieved by some mixed deterministic policy, we conclude that if the underlying problem is feasible, then our C2RL method is able to find a feasible policy.\nMemory Complexity We then discuss the sparsity of mixed policies for the constrained RL problem. We give a constructive proof in Appendix D to show that to ensure convergence for RL algorithms that search for deterministic policies, storing m+ 1 policies is required in the worst case.\nTheorem 4.5 (Memory Complexity Bound). For a constrained RL problem with m-dimensional measurement vector, in the worst case, a mixed policy needs to randomize among m+ 1 individual policies to ensure convergence of RL oracles that search for deterministic policies.\nSince the minor cycles in the C2RL method eliminate policies with affinely dependent measurement vectors, after the termination of minor cycles, the size of the active set is at most m+ 1. That is, the policy found by the C2RL method requires randomization among no more than m + 1 individual policies. Therefore the proposed C2RL indeed achieves the optimal sparsity in the worst case, making it favorable for large-scale usage. Corollary 4.5.1. The C2RL method that randomizes among at most m + 1 policies is worst-case optimal to ensure convergence of any RL oracle." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate the performance of C2RL in a grid-world navigation task (Fig. 1), and demonstrate its ability to efficiently find sparse policies. In this Risky Mars Rover environment, the agent is required to navigate from the starting point to the goal point, by moving to one of the four neighborhood cells at each step. 
The episodes terminate when the goal point is reached or after 300 steps. To enforce robustness, we add a risky area to indicate the dangerous states. The agent receives a measurement vector to indicate the steps it takes (0.1 for every step), and whether it stays in the risky area (0.1 for every risky step, and 0 otherwise), with discount factor γ = 0.99. We constrain the agent to reach the goal point with the expected cumulative step measure within 1.1 and the expected cumulative risky steps within 0.05. Note that by design, the shortest path from the starting point to the goal point does not satisfy the constraint. This is common in practice, as robustness typically involves a trade-off between the reward and constraint satisfaction.\nThe proposed C2RL method is compared with approachability-based policy optimization (ApproPO) (Miryoosefi et al., 2019) and with reward constrained policy optimization (RCPO) (Tessler et al., 2018). ApproPO solves the same convex constrained RL problem by using an RL oracle to play against a no-regret online learner (Hazan et al., 2008; Zinkevich, 2003). Since ApproPO and C2RL both use an RL oracle, ApproPO is a natural baseline to be compared with our method. Besides, we also compare with RCPO, which takes a Lagrangian approach to incorporate the constraints as a penalty signal into the reward. Using an advantage actor-critic (A2C) agent (Mnih et al., 2016), RCPO has been shown to converge to a fixed point. For a fair comparison, C2RL and ApproPO use an A2C agent as the RL oracle, with the same hyperparameters as used in RCPO. The approximation errors are compared after training for the same number of samples.\nNote that the C2RL method does not introduce any extra hyper-parameter. ApproPO and RCPO require extra hyper-parameters for the initialization and learning rate of a variable equivalent to our λ in the outer loop. Because our approach does not rely on the online learning framework, there is no need to tune the initialization and learning rate for our λ, which eases its usage.\nWe first showcase the consequences of our theoretical results using an optimal RL oracle. For any x ∈ Rm, an optimal policy can be easily found via Dijkstra’s algorithm. If multiple optimal paths exist, one is randomly picked to form a deterministic policy.\nUsing this as an optimal RL oracle, the convergence properties of C2RL and ApproPO are compared. Figure 2 Middle shows the value of policies c(µt) found after each call to the oracle. In Figure 2 Right, when approaching the boundary of the feasible set, the iterations of approachability-based methods start to zigzag. Since C2RL contains a minor cycle to re-optimize the weights among the active set, C2RL progresses quickly to reach the exact optimal solution. In Figure 3 Left, the approximation error is shown for 300 calls of the optimal RL oracle.\nWe then compare C2RL, ApproPO and RCPO using the same A2C agent (details of the model structures and hyper-parameters are provided in Appendix E). We run each algorithm 50 times, each for a maximum of 100 thousand samples. The mean and standard deviation of the results are presented in Figure 3. The original paper of ApproPO suggests using a cache to save memory, and the memory requirement of this variant is also presented. Figure 3 demonstrates that C2RL converges to an optimal policy faster than previous methods, and a sparse combination of individual policies is maintained throughout the iteration process."
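For readers who want to reproduce a setting in the spirit of this experiment, the sketch below mimics the measurement-vector convention just described (0.1 per step, 0.1 per risky step); the grid size, start/goal positions, and risky region are invented placeholders, since the exact layout of Fig. 1 is not specified in the text.

```python
import numpy as np

class RiskyGridWorld:
    """Toy stand-in for the Risky Mars Rover task: each step emits the 2-d
    measurement (0.1 per step, 0.1 per risky step). Layout details are invented."""
    def __init__(self, size=6, start=(0, 0), goal=(5, 5),
                 risky=frozenset({(2, 2), (2, 3), (3, 2), (3, 3)})):
        self.size, self.start, self.goal, self.risky = size, start, goal, risky
    def reset(self):
        self.pos = self.start
        return self.pos
    def step(self, action):                      # 0: up, 1: down, 2: left, 3: right
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        c_vec = np.array([0.1, 0.1 if self.pos in self.risky else 0.0])
        return self.pos, c_vec, self.pos == self.goal
```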
}, { "heading": "6 CONCLUSION", "text": "In this paper, we introduce C2RL, an algorithm to solve RL problems under orthant or convex constraints. Our method reduces the constrained RL problem to a distance minimization problem, and a novel variant of Frank-Wolfe type algorithm is proposed to solve this. Our method comes with rigorous theoretical guarantees and does not introduce any extra hyper-parameter. To find an -approximation solution, C2RL takes O(1/ ) calls of any RL oracle and ensures convergence to work with arbitrary RL algorithm. Moreover, C2RL strictly reduces the approximation error between consecutive calls of RL oracle, and form-dimensional constraints, the memory requirement is reduced from storing infinitely many policies (O(1/ )) to storing at most constantly many (m+1) polices. We further show that the constant is worst-case optimal to ensure the convergence for RL algorithms that search for deterministic policies. Experimentally, we demonstrate that the proposed C2RL method finds sparse solution efficiently, and outperforms previous methods." }, { "heading": "A MORE ON FRANK-WOLFE-TYPE ALGORITHMS", "text": "A.1 STANDARD FRANK-WOLFE ALGORITHM\nAlgorithm 2 Frank-Wolfe algorithm (Frank et al., 1956) Input: obj. f : Y 7→ R, oracle O(·), init. x0 ∈ Y\n1: for t=1, 2, 3 . . . , T do 2: s← Oracle(∇f(xt−1)) = arg mins∈{s1,...,sn} s\nT∇f(xt−1) 3: xt ← (1− ηt)xt−1 + ηts , for ηt := 2t+2 4: end for 5: return xT\nFor a convex function f : X 7→ R the Frank-Wolfe algorithm (FW) solves the constrained optimization problem over a compact and convex set X . The standard FW is known to have a sublinear convergence rate, and various methods are proposed to improve the performance. For example, when the underlying feasible set is a polytope, and the objective function is strongly convex, multiple variants, such as away-step FW (Wolfe, 1970; Jaggi, 2013), pairwise FW (Mitchell et al., 1974), and Wolfe’s method (Wolfe, 1976) are shown to enjoy linear convergence rate. Linear convergence under other conditions is also studied (Beck & Shtern, 2017; Garber & Hazan, 2013a;b).\nA.2 WOLFE’S METHOD FOR MINIMUM NORM POINT\nAlgorithm 3 Wolfe’s Method for Minimum Norm Point Initialize x ∈ P , active set S = [x] and weight λ = [1]. Output: x ∈ P that has the minimum Euclidean norm.\n1: while true do // Major cycle 2: s← Oracle(x) // Potential improving point 3: if ||x||2 ≤ xTs+ then break 4: S ← S ∪ {s} 5: while true do // Minor cycle 6: y,α← AffineMinimizer(S) // y = arg mins∈aff(S) ||s||2 7: if αs > 0 for all s then break // y ∈ conv(S) 8: // If y /∈ conv(S), then update y to the intersection of conv(S) and segment joining x and y. Then remove points in S unnecessary for describing y. 9: θ ← mini:αi≤0 λiλi−αi // Recall λ satisfies x = ∑ s∈S λss" }, { "heading": "10: y ← θy + (1− θ)x, λi = θαi + (1− θ)λi", "text": "11: S ← {si|si ∈ S and λi > 0} 12: end while 13: Update x = y and λ = α. 14: end while 15: return x\nWolfe’s method is an iterative algorithm for finding the point with minimum Euclidean norm in a polytope, which is defined as the convex hull of a set of finitely many points.\nThe Wolfe’s method consists of a finite number of major cycles, each of which consists of a finite number of minor cycles. At the start of each major cycle, let H(x) := {yTx = xx} be the hyperplane defined by x. If H(x) separates the polytope from the origin, then the major cycle is terminated. Otherwise, it invokes an oracle to find any point on the near side of the hyperplane. 
The point is then added into the active set S, and starts a minor cycle. In a minor cycle, let y be the point of smallest norm in of the affine hull aff(S). If y is in the relative interior of the convex hull conv(S), then x is updated to y and the minor cycle is terminated. Otherwise, y is updated to the nearest point to y on the line segment conv(S) ∩ [x,y]. Thus y is updated to a boundary point of conv(S), and any point that is not on the face of conv(S) in which y lies is deleted. The minor cycles are executed repeatedly until S becomes a corral, that is, a set\nwhose affine minimizer lies inside its convex hull. Since a set of one point is always a corral, the minor cycles is terminated after a finite number of runs." }, { "heading": "B PROOF OF THEOREM 4.1", "text": "Theorem 4.1 (Approximation Error Strictly Decreases). For any non-terminal step t, we have err(µt+1) < err(µt). That is, the measurement vector of µt found by the C2RL method gets strictly closer to the convex set Ω after major cycle step.\nProof. If the current step is a major cycle with no minor cycle, then xt+1 is the affine minimizer of aff(S ∪ {st}) with respect to ωt. Then the affine minimizer property implies (st − xt+1)(xt+1 − ωt) = 0. Since iteration does not terminate at step t, we have (xt − ωt)T (xt − st) > 0, and therefore xt+1 not equal to xt. Then xt+1 is the unique affine minimizer implies fΩ(xt+1) = minω∈Ω ||xt+1 − ω||2 ≤ ||xt+1 − ωt||2 < ||xt − ωt||2 = fΩ(xt). Otherwise the current step contains one or more minor cycles. In this case, we show that the first minor cycle strictly reduces the approximation error, and the (possibly) following minor cycles cannot increase it. For the first minor cycle, the affine minimizer y0 of aff(S ∪ {st}) with respect to ωt is outside conv(S ∪ {st}). Let z = θy0 + (1 − θ)xt be the intersection of conv(S ∪ {st}) and segment joining x and y. Let V0 := St and Vi denote the active set after the i-th minor cycle. Then since y1 is the affine minimizer of V1 with respect to ωt, we have\n||z − ωt|| = ||θy0 + (1− θ)xt − ωt|| ≤ θ||y0 − ωt||+ (1− θ)||xt − ωt|| < ||xt − ωt||, (17)\nwhere the second step uses the triangle inequality and the last step follows since the segment xty0 intersects the interior of conv(S∪{st}), and the distance to ωt strictly decreases along this segment. Therefore the point z found by first minor cycle satisfies\nfΩ(z) = min ω∈Ω ||z − ω||2 ≤ ||z − ωt||2 < ||xt − ωt|| = fΩ(xt). (18)\nHence h(y1) < h(xt), and the first minor cycle strictly decreases the approximation error. By a similar argument, in subsequent minor cycles the approximation error cannot be increased. However, after the first minor cycle, the iterating point may already at the intersection point and the strict inequality in last step of Eq. 17 need to be replaced by non-strict inequality.\nTherefore any major cycle either finds an improving point and continue, or enters minor cycles where the first minor cycle finds an improving point, and the subsequent minor cycles does not increase the distance. Adding both side of fΩ(xt+1) < fΩ(xt) by fΩ(x∗) and we have the approximation error h(xt+1) < h(xt) strictly decreases." }, { "heading": "C PROOF OF THEOREM 4.2", "text": "We first prove the Theorem 4.2, using Lemma 4.3 and Lemma 4.4. Then we present the proof of the lemmas.\nTheorem 4.2 (Convergence in Approximation Error). For t ≥ 1, the mixed policy µt found by the C2RL method satisfies\nerr(µt) ≤ 16Q2/(t+ 2). 
(19)\nwhere Q := maxµ∈∆(U) ||c(µ)|| is the maximum norm of a measurement vector.\nProof. Since Lemma 4.4 shows that drop steps are no more than half of total major cycle steps, and Theorem 4.1 guarantees these drop steps reducing the approximation error, we can safely skip these step, and re-index the step numbers to include non-drop steps only using k.\nFor these non-drop steps, we claim that err(µk) ≤ 8Q2/(k + 1). Using Lemma 4.3, we prove the convergence rate using induction. We first bound the error of any err(µk). For any k ≥ 1\nerr(µk) = dist2(c(µk),Ω)− dist2(c(µ∗),Ω) (20) = 1/2||c(µk)− ProjΩ(c(µ k))||2 − 1/2||c(µ∗)− ProjΩ(c(µ ∗))||2 (21)\n≤ 1/2(||c(µk)||2 + ||ProjΩ(c(µ k))||2 − ||c(µ∗)||2 − ||ProjΩ(c(µ ∗))||2) (22) ≤ ||c(µk)||2 − ||c(µ∗)||2 (23) ≤ ||c(µk)||2 (24) ≤ Q2, (25)\nwhere Eq. 21 uses the definition of our squared Euclidean distance function. Eq. 22 follows from triangle inequality, and Eq. 23 is by the contractive property of the Euclidean distance.\nWhen k = 1, the Eq. 25 established the based case. Now for k ≥ 1, assume that err(µk) ≤ 8Q2/(k + 1) for k ≥ 1, then Lemma 4.3 gives err(µk+1) ≤ err(µk) − err2(µk)/8Q2. Since the quadratic function of the right hand side is monotonically increasing on (−∞, 4Q2], using the inductive hypothesis\nerr(µk+1) ≤ err(µk)− err2(µk)/8Q2 ≤ 8Q2/(k + 1)− 8Q2/(k + 1)2 ≤ Q2/(k + 2) (26)\nSince for t steps of major cycle steps, the number of non-drop steps k > t/2, we conclude that err(µt) ≤ 16Q2/(t+ 2).\nThen we prove the lemmas.\nLemma 4.3. For a non-drop step, we have err(µt)− err(µt+1) ≥ err2(µt)/8Q2.\nProof. The non-drop step contains either no minor cycle or one minor cycle. We first consider the no minor cycle case.\nIf a major cycle contains no minor cycle, then xt+1 is the affine minimizer of the S ∪ {st}.\nerr(µt)− err(µt+1) = dist2(xt,Ω)− dist2(xt+1,Ω) (27) = 1/2(||xt − ωt||2 −min\nω∈Ω ||xt+1 − ω||2) (28)\n≥ 1/2(||xt − ωt||2 − ||xt+1 − ωt||2) (29) = 1/2(||xt − ωt||2 + ||xt+1 − ωt||2 − 2||xt+1 − ωt||2) (30) = 1/2(||xt − ωt||2 + ||xt+1 − ωt||2 − 2(xt − ωt)T (xt+1 − ωt)) (31) = 1/2(||xt − xt+1||2), (32)\nwhere the equation (31) follows from the affine minimizer property Eq. (11). For ||xt − xt+1|| in the last equation, and ∀q ∈ aff(S ∪ {st}), we have\n||xt − xt+1|| ≥ ||xt − xt+1|| ||x t||+ ||q||\n2Q ( Definition of Q) (33)\n≥ ||xt − xt+1|| ||x t − q|| 2Q\n( Triangle inequality) (34)\n≥ 1 2Q (xt − xt+1)(xt − q) ( Cauchy-Schwarz inequality) (35)\n= 1\n2Q (xt − ωt)(xt − q) ( Affine minimizer property). (36)\nThen it suffices to show that (xt − ωt)(xt − q) ≥ err(µt).\nSince Ω is a convex set, the squared Euclidean distance function dist2(x,Ω) is convex for x, which implies\ndist2(xt,Ω) + (q − xt)∇dist2(xt,Ω) ≤ dist2(q,Ω). (37)\nPutting in∇dist2(xt,Ω) = (xt−ProjΩ(xt)) = (xt−ωt), we get (xt−ωt)(xt−q) ≥ err(µt), which together with Eq. 32 and Eq. 36 concludes that for non-drop step with no minor cycles, we have err(µt)− err(µt+1) ≥ err2(µt)/8Q2. For non-drop step with one minor cycle, we use the Theorem 6 of (Chakrabarty et al., 2014). By a linear translation of adding all points with −ωt, it gives\n||xt − ωt||2 − ||xt+1 − ωt||2 ≥ ((xt − ωt)(xt − q))2/8Q2. (38) Then applying the same argument as Eq. 37, and we finished our proof.\nLemma 4.4. After t major cycle steps of C2RL method, the number of drop steps is less than t/2.\nProof. Recall that at the termination of a minor cycle, the size of the active set |Sc| ∈ [1,m]. 
Since in each major cycle steps, the size of active set St increases by one, and each drop step reduces the size of St by at least one, the number of drop steps is always less than half of total number of the major cycle steps." }, { "heading": "D PROOF OF THEOREM 4.5", "text": "Theorem 4.5 (Memory Complexity Bound). For an constrained RL problem with m-dimensional measurement vector, in the worst case, a mixed policy needs to randomize among m+ 1 individual policies to ensure convergence of RL oracles that search for deterministic policies.\nProof. We give a constructive proof. Consider a m-dimensional vector-valued MDP with a single state, m + 1 actions, and c(ai) := ei is the unit vector of i-th dimension for i ∈ [1,m], and c(am+1) := 0, and the episode terminates after 1 steps. The constrained RL problem is to find a policy whose measurement vector lies in the convex set of a single point {1/2m}. By linear programming, it is clear that the only feasible mixed deterministic policy is to select am+1 with 1/2 probability, and the rest m actions with 1/2m probability; i.e. the unique feasible policy to this problem has an active set containing m + 1 deterministic policies. Therefore any method randomize among less than m + 1 individual policies does not ensure convergence when used with RL algorithms searching for deterministic policies." }, { "heading": "E ADDITIONAL EXPERIMENT DETAILS", "text": "All the methods use the same A2C agent. The input is the one-hot encoded current position index. The A2C is the standard fully connected multi-layer perceptron with ReLU activation function. The actor and critic share the internal representation and have their only final layer. Both actor and critic networks use Adam optimizer with learning rate set to 1e−2. The network is as follows\nActor Critic Input layer One-hot encoded state index (dim=54)\nHidden layer Linear(in=54, out=128, act=”relu”) Output layer Linear(in=128, out=4, act=”relu”) Linear(in=128, out=1, act=”relu”) Output name Action score State value\nFor ApproPO, the constant κ for projection convex set to convex cone is set to be 20. The θ is initialized to 0. Following the original paper.\nFor RCPO, the learning rate of its λ is set to 2.5e−5, and its λ is initialized to 0 and updated by online gradient descent with learning rate set to 1, as used by the original paper.\nThe proposed C2RL introduces no extra hyper-parameters, and has nothing to report." } ]
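A sketch of the shared actor-critic network described by the table above, written in PyTorch; the table also lists ReLU activations on the output layers, but here the heads return raw action scores and values, which is the usual A2C convention and may differ from the authors' exact implementation.

```python
import torch
import torch.nn as nn

class A2CNet(nn.Module):
    """Shared-representation actor-critic with the sizes from the table above:
    one-hot state of dim 54, hidden width 128, 4 actions, scalar state value."""
    def __init__(self, n_states=54, n_hidden=128, n_actions=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_states, n_hidden), nn.ReLU())
        self.actor = nn.Linear(n_hidden, n_actions)   # action scores
        self.critic = nn.Linear(n_hidden, 1)          # state value
    def forward(self, one_hot_state):
        h = self.body(one_hot_state)
        return self.actor(h), self.critic(h)

net = A2CNet()
# The paper states Adam with learning rate 1e-2 for both actor and critic;
# a single optimizer over all parameters is used here for brevity.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)
```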
2020
null
SP:df5fec4899d97f7d5df259a013f467e038895669
[ "The paper proposes a post-hoc uncertainty tuning pipeline for Bayesian neural networks. After getting the point estimate, it adds extra dimensions to the weight matrices and hidden layers, which has no effect on the network output, with the hope that it would influence the variance of the original network weights under the Laplacian approximation. More specifically, it tunes the extra weights by optimizing another objective borrowed from the non-Bayesian robust learning literature, which encourages low uncertainty over real (extra, validation) data, and high uncertainty over manually constructed, out-of-distribution data." ]
Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly “train” the uncertainty in a way that is decoupled from the prediction itself. To this end we introduce uncertainty units for Laplace-approximated networks: Hidden units with zero weights that can be added to any pre-trained, point-estimated network. Since these units are inactive, they do not affect the predictions. But their presence changes the geometry (in particular the Hessian) of the loss landscape around the point estimate, thereby affecting the network’s uncertainty estimates under a Laplace approximation. We show that such units can be trained via an uncertainty-aware objective, making the Laplace approximation competitive with more expensive alternative uncertainty-quantification frameworks.
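The central mechanism of the abstract — hidden units whose outgoing weights are zero leave the prediction unchanged while enlarging the parameter space whose Hessian the Laplace approximation uses — can be verified numerically. The sketch below augments one hidden layer of a toy MLP; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

# A tiny MAP-trained network: R^3 -> R^2 with one hidden layer of width 4.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
f = lambda x: W2 @ relu(W1 @ x + b1) + b2

# Add 2 extra hidden units: free incoming weights, zero outgoing weights.
W1_hat, b1_hat = rng.normal(size=(2, 3)), rng.normal(size=2)
W1_aug = np.vstack([W1, W1_hat])
b1_aug = np.concatenate([b1, b1_hat])
W2_aug = np.hstack([W2, np.zeros((2, 2))])          # zeroed outgoing weights
f_aug = lambda x: W2_aug @ relu(W1_aug @ x + b1_aug) + b2

x = rng.normal(size=3)
assert np.allclose(f(x), f_aug(x))                   # predictions are unchanged
```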
[]
[ { "authors": [ "Felix Dangel", "Frederik Kunstner", "Philipp Hennig" ], "title": "BackPACK: Packing more into Backprop", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Mark N Gibbs" ], "title": "Bayesian gaussian processes for regression and classification", "venue": "Ph. D. Thesis, Department of Physics, University of Cambridge,", "year": 1997 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko", "Julian Bitterwolf" ], "title": "Why ReLU Networks Yield Highconfidence Predictions Far Away from the Training Data and How to Mitigate the Problem", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep Anomaly Detection with Outlier Exposure", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Tom Heskes" ], "title": "On “Natural” Learning and Pruning in Multilayered Perceptrons", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Agustinus Kristiadi", "Matthias Hein", "Philipp Hennig" ], "title": "Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "The Evidence Framework Applied to Classification Networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "David JC MacKay" ], "title": "A Practical Bayesian Framework For Backpropagation Networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive Uncertainty Estimation via Prior Networks", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing Neural Networks With Kronecker-Factored Approximate Curvature", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Alexander Meinke", "Matthias Hein" ], "title": "Towards Neural Networks that Provably Know when They don’t Know", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Sebastian W Ober", "Carl Edward Rasmussen" ], "title": "Benchmarking the Neural Linear Model for Regression", "venue": "arXiv preprint arXiv:1912.08416,", "year": 2019 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "David Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can You Trust Your Model’s Uncertainty? 
Evaluating Predictive Uncertainty under Dataset Shift", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "A Scalable Laplace Approximation for Neural Networks", "venue": "In ICLR,", "year": 2018 } ]
[ { "heading": null, "text": "Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly “train” the uncertainty in a decoupled way to the prediction itself. To this end we introduce uncertainty units for Laplaceapproximated networks: Hidden units with zero weights that can be added to any pre-trained, point-estimated network. Since these units are inactive, they do not affect the predictions. But their presence changes the geometry (in particular the Hessian) of the loss landscape around the point estimate, thereby affecting the network’s uncertainty estimates under a Laplace approximation. We show that such units can be trained via an uncertainty-aware objective, making the Laplace approximation competitive with more expensive alternative uncertaintyquantification frameworks." }, { "heading": "1 INTRODUCTION", "text": "The point estimates of neural networks (NNs)—constructed as maximum a posteriori (MAP) estimates via (regularized) empirical risk minimization—empirically achieve high predictive performance. However, they tend to underestimate the uncertainty of their predictions, leading to an overconfidence problem (Hein et al., 2019), which could be disastrous in safety-critical applications such as autonomous driving. Bayesian inference offers a principled path to overcome this issue. The goal is to turn a “vanilla” NN into a Bayesian neural network (BNN), where the posterior distribution over the network’s weights are inferred via Bayes’ rule and subsequently taken into account when making predictions. Since the cost of exact posterior inference in a BNN is often prohibitive, approximate Bayesian methods are employed instead.\nLaplace approximations (LAs) are classic methods for such a purpose (MacKay, 1992b). The key idea is to obtain an approximate posterior by “surrounding” a MAP estimate of a network with a Gaussian, based on the loss landscape’s geometry around it. A standard practice in LAs is to tune a single hyperparameter—the prior precision—which is inflexible (Ritter et al., 2018b; Kristiadi et al., 2020). Here, we aim at improving the flexibility of uncertainty tuning in LAs. To this end, we introduce Learnable Uncertainty under Laplace Approximations (LULA) units, which are hidden units associated with a zeroed weight. They can be added to the hidden layers of any MAP-trained network. Because they are inactive, such units do not affect the prediction of the underlying network. However, they can still contribute to the Hessian of the loss with respect to the parameters, and hence induce additional structures to the posterior covariance under a LA. LULA units can be trained via an uncertainty-aware objective (Hendrycks et al., 2019; Hein et al., 2019, etc.), such that they improve the predictive uncertainty-quantification (UQ) performance of the Laplace-approximated BNN. 
Figure 1 demonstrates trained LULA units in action: They improve the UQ performance of a standard LA, while keeping the MAP predictions in both regression and classification tasks.\nIn summary, we (i) introduce LULA units: inactive hidden units for uncertainty tuning of a LA, (ii) bring a robust training technique from non-Bayesian literature for training these units, and (iii) show empirically that LULA-augmented Laplace-approximated BNNs can yield better UQ performance compared to both previous tuning techniques and contemporary, more expensive baselines." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 BAYESIAN NEURAL NETWORKS", "text": "Let f : Rn × Rd → Rk defined by (x, θ) 7→ f(x; θ) be an L-layer neural network. Here, θ is the concatenation of all the parameters of f . Suppose that the size of each layer of f is given by the sequence of (nl ∈ Z>0)Ll=1. Then, for each l = 1, . . . , L, the l-th layer of f is defined by\na(l) := W (l)h(l−1) + b(l), with h(l) := { ϕ(a(l)) if l < L a(l) if l = L ,\n(1)\nwhere W (l) ∈ Rnl×nl−1 and b(l) ∈ Rnl are the weight matrix and bias vector of the layer, and ϕ is a component-wise activation function. We call the vector h(l) ∈ Rnl the l-th hidden units of f . Note that by convention, we consider n0 := n and nL := k, while h(0) := x and h(L) := f(x; θ).\nFrom the Bayesian perspective, the ubiquitous training formalism of neural networks amounts to MAP estimation: The empirical risk and the regularizer are interpretable as the negative loglikelihood under an i.i.d. dataset D := {xi, yi}mi=1 and the negative log-prior, respectively. That is, the loss function is interpreted as\nL(θ) := − m∑ i=1 log p(yi | f(xi; θ))− log p(θ) = − log p(θ | D) . (2)\nIn this view, the de facto weight decay regularizer amounts to a zero-mean isotropic Gaussian prior p(θ) = N (θ | 0, λ−1I) with a scalar precision hyperparameter λ. Meanwhile, the usual softmax and quadratic output losses correspond to the Categorical and Gaussian distributions over yi in the case of classification and regression, respectively.\nMAP-trained neural networks have been shown to be overconfident (Hein et al., 2019) and BNNs can mitigate this issue (Kristiadi et al., 2020). They quantify epistemic uncertainty by inferring the full posterior distribution of the parameters θ (instead of just a single point estimate in MAP training). Given that p(θ | D) is the posterior, then the prediction for any test point x ∈ Rn is obtained via marginalization\np(y | x,D) = ∫ p(y | f(x; θ)) p(θ | D) dθ , (3)\nwhich captures the uncertainty encoded in the posterior." }, { "heading": "2.2 LAPLACE APPROXIMATIONS", "text": "In deep learning, since the exact Bayesian posterior is intractable, approximate Bayesian inference methods are used. An important family of such methods is formed by LAs. Let θMAP be the minimizer of (2), which corresponds to a mode of the posterior distribution. A LA locally approximates the posterior using a Gaussian\np(θ | D) ≈ N (θ | θMAP, Σ) := N (θ | θMAP, (∇2L|θMAP)−1) . Thus, LAs construct an approximate Gaussian posterior around θMAP, whose precision equals the Hessian of the loss at θMAP—the “curvature” of the loss landscape at θMAP. While the covariance of a LA is tied to the weight decay of the loss, a common practice in LAs is to tune the prior precision under some objective, in a post-hoc manner. In other words, the MAP estimation and the covariance inference are thought as separate, independent processes. 
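To make the setup above concrete, the following is a minimal sketch of fitting a diagonal Laplace approximation around a MAP estimate. It is our illustration rather than the paper's implementation (which uses Kronecker-factored curvature via BackPACK); `model`, `loader`, and the prior precision `lam` are placeholder names.

```python
import torch

def diagonal_laplace(model, loader, lam=1.0):
    """Sketch of a diagonal Laplace posterior N(theta_MAP, diag(sigma^2)) around a
    MAP-trained classifier. The Hessian diagonal is approximated by the empirical
    Fisher (sum of squared gradients); `lam` plays the role of the prior precision."""
    params = [p for p in model.parameters() if p.requires_grad]
    curvature = [torch.zeros_like(p) for p in params]
    nll = torch.nn.CrossEntropyLoss(reduction="sum")
    for x, y in loader:
        model.zero_grad()
        grads = torch.autograd.grad(nll(model(x), y), params)
        for c, g in zip(curvature, grads):
            c += g.detach() ** 2                      # empirical-Fisher accumulation
    variance = [1.0 / (c + lam) for c in curvature]   # posterior variance per weight
    mean = [p.detach().clone() for p in params]
    return mean, variance
```

Sampling weights from the resulting Gaussian enables the Monte-Carlo predictive of Eq. (3); note that the prior precision `lam` is the single knob that standard post-hoc tuning of a LA adjusts.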
For example, given a fixed MAP estimate, one can maximize the log-likelihood of a LA w.r.t. the prior precision to obtain the covariance. This hyperparameter tuning can thus be thought as an uncertainty tuning.\nA recent example of LAs is the Kronecker-factored Laplace (KFL, Ritter et al., 2018b). The key idea is to approximate the Hessian matrix with the layer-wise Kronecker factorization scheme proposed by Heskes (2000); Martens & Grosse (2015). That is, for each layer l = 1, . . . , L, KFL assumes that the Hessian corresponding to the l-th weight matrix W (l) ∈ Rnl×nl−1 can be written as the Kronecker product G(l) ⊗ A(l) for some G(l) ∈ Rnl×nl and A(l) ∈ Rnl−1×nl−1 . This assumption brings the inversion cost of the Hessian down to Θ(n3l +n 3 l−1), instead of the usual Θ(n 3 l n 3 l−1) cost. The approximate Hessian can easily be computed via tools such as BackPACK (Dangel et al., 2020).\nEven with a closed-form Laplace-approximated posterior, the predictive distribution (3) in general does not have an analytic solution since f is nonlinear. Instead, one can employ Monte-Carlo (MC) integration by sampling from the Gaussian:\np(y | x,D) ≈ 1 S S∑ s=1 p(y | f(x; θs)) ; θs ∼ N (θ | θMAP, Σ) ,\nfor S number of samples. In the case of binary classification with f : Rn × Rd → R, one can use the following well-known approximation, due to MacKay (1992a):\np(y = 1 | x,D) ≈ σ ( f(x; θMAP)√ 1 + π/8 v(x) ) , (4)\nwhere σ is the logistic-sigmoid function and v(x) is the marginal variance of the network output f(x), which is often approximated via a linearization of the network around the MAP estimate:\nv(x) ≈ (∇θf(x; θ)|θMAP) > Σ (∇θf(x; θ)|θMAP) . (5)\n(This approximation has also been generalized to multi-class classifications by Gibbs (1997).) In particular, as v(x) increases, the predictive probability of y = 1 goes to 0.5 and therefore the uncertainty increases. This relationship has also been shown empirically in multi-class classifications with MC-integration (Kristiadi et al., 2020)." }, { "heading": "3 LULA UNITS", "text": "The problem with the standard uncertainty tuning in LAs is that the only degree-of-freedom available for performing the optimization is the scalar prior precision and therefore inflexible.1 We shall address this by introducing “uncertainty units”, which can be added on top of the hidden units of any MAP-trained network (Section 3.1) and can be trained via an uncertainty-aware loss (Section 3.2)." }, { "heading": "3.1 CONSTRUCTION", "text": "Let f : Rn × Rd → Rk be a MAP-trained L-layer neural network with parameters θMAP = {W (l)MAP, b (l) MAP}Ll=1. The premise of our method is simple: At each hidden layer l = 1, . . . , L − 1,\n1While one can also use a non-scalar prior precision, it appears to be uncommon in deep learning. In any case, such a element-wise weight-cost would interact with the training procedure.\nMAP. The\nadditional units are represented by the additional block at the bottom of each layer. Dashed lines correspond to the free parameters Ŵ (1), . . . , Ŵ (L−1), while dotted lines to the zero weights.\nsuppose we add ml ∈ Z≥0 additional hidden units, under the original activation function, to h(l). As a consequence, we need to augment each of the weight matrices to accommodate them.\nConsider the following construction: for each layer l = 1, . . . 
, L − 1 of the network, we expand W (l) and b(l) to obtain the block matrix and vector\nW̃ (l) :=\n( W\n(l) MAP 0\nŴ (l) 1 Ŵ (l) 2\n) ∈ R(nl+ml)×(nl−1+ml−1) ; b̃(l) := ( b (l) MAP\nb̂(l)\n) ∈ Rnl+ml , (6)\nrespectively, with m0 = 0 since we do not add additional units to the input. For l = L, we define\nW̃ (L) := (W (L) MAP, 0) ∈ R k×(nL−1+mL−1); b̃(L) := b (L) MAP ∈ R k ,\nso that the output dimensionality is unchanged. For brevity, we denote Ŵ (l) := (Ŵ (l)1 , Ŵ (l) 2 ). Refer to Figure 2 for an illustration and Algorithm 2 in Appendix B for a step-by-step summary. Taken together, we denote the resulting augmented network as f̃ and the resulting parameter vector as θ̃MAP ∈ Rd̃, where d̃ it the resulting number of parameters. Note that we can easily extend this construction to convolutional nets by expanding the “channel” of a hidden layer.2\nLet us inspect the implication of this construction. Here for each l = 1, . . . , L − 1, since they are zero, the upper-right quadrant of W̃ (l) deactivates the ml−1 additional hidden units in the previous layer, thus they do not contribute to the original hidden units in the l-th layer. Meanwhile, the submatrix Ŵ (l) and the sub-vector b̂(l) contain parameters for the additional ml hidden units in the l-th layer. We are free to choose the the values of these parameters since the following proposition guarantees that they will not change the output of the network (the proof is in Appendix A).\nProposition 1. Let f : Rn × Rd → Rk be a MAP-trained L-layer network parametrized by θMAP. Suppose f̃ : Rn × Rd̃ → R and θ̃MAP ∈ Rd̃ are obtained via the previous construction. For any input x ∈ Rn, we have f̃(x; θ̃MAP) = f(x; θMAP).\nSo far, it looks like all our changes to the network are inconsequential. However, they do affect the curvature of the landscape of L,3 and thus the uncertainty arising in a LA. Let θ̃ be a random variable in Rd̃ and θ̃MAP be an instance of it. Suppose we have a Laplace-approximated posterior p(θ̃ | D) ≈ N (θ̃ | θ̃MAP, Σ̃) over θ̃, where the covariance Σ̃ is the inverse Hessian of the negative log-posterior w.r.t. the augmented parameters at θ̃MAP. Then, Σ̃ contains additional dimensions (and thus in general, additional structured, non-zero uncertainty) absent in the original network, which depend on the values of the free parameters {Ŵ (l), b̂(l)}L−1l=1 .\n2E.g. if the hidden units are a 3D array of (channel × height × width), then we expand the first dimension. 3More formally: The principal curvatures of the graph of L, seen as a d-dimensional submanifold of Rd+1.\nThe implication of the previous finding can be seen clearly in real-valued networks with diagonal LA posteriors. The following proposition shows that, under such a network and posterior, the construction above will affect the output uncertainty of the original network f (the proof is in Appendix A).\nProposition 2. Suppose f : Rn × Rd → R is a real-valued network and f̃ is as constructed above. Suppose further that diagonal Laplace-approximated posteriors N (θ | θMAP,diag(σ)), N (θ̃ | θ̃,diag(σ̃)) are employed. Using the linearization (5), for any input x ∈ Rn, the variance over the output f̃(x; θ̃) is at least that of f(x; θ).\nIn summary, the construction along with Propositions 1 and 2 imply that the additional hidden units we have added to the original network are uncertainty units under LAs, i.e. hidden units that only contribute to the Laplace-approximated uncertainty and not the predictions. 
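The block structure of Eq. (6) is easy to state in code for one fully-connected layer. The sketch below is ours (the function name `add_lula_units` is not from the paper's code): the MAP weights occupy the upper-left block, the upper-right block of zeros keeps the new units from influencing the original ones, and the free parameters Ŵ, b̂ are drawn from a simple prior.

```python
import torch

def add_lula_units(W_map, b_map, m_prev, m_cur):
    """Augment one hidden layer as in Eq. (6).
    W_map: (n_l, n_{l-1}) MAP weight, b_map: (n_l,) MAP bias.
    m_prev, m_cur: number of LULA units added to the previous / current layer."""
    n_cur, n_prev = W_map.shape
    W_hat = 0.1 * torch.randn(m_cur, n_prev + m_prev)            # free parameters (W_hat1, W_hat2)
    b_hat = 0.1 * torch.randn(m_cur)                             # free bias b_hat
    top = torch.cat([W_map, torch.zeros(n_cur, m_prev)], dim=1)  # zero block: new units stay inactive
    W_tilde = torch.cat([top, W_hat], dim=0)
    b_tilde = torch.cat([b_map, b_hat])
    return W_tilde, b_tilde

# Proposition 1 in action: the augmented layer reproduces the original pre-activations.
W, b = torch.randn(4, 3), torch.randn(4)
Wt, bt = add_lula_units(W, b, m_prev=0, m_cur=2)
x = torch.randn(3)
assert torch.allclose(W @ x + b, (Wt @ x + bt)[:4])
```

The assertion mirrors Proposition 1: the first n_l outputs of the augmented layer coincide with those of the original layer, so predictions are untouched, while the extra rows add new directions to the posterior covariance under a LA — the added units affect only the uncertainty.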
This property gives rise to the name Learnable Uncertainty under Laplace Approximations (LULA) units." }, { "heading": "3.2 TRAINING", "text": "We have seen that by adding LULA units to a network, we obtain additional free parameters that only affect uncertainty under a LA. These parameters are thus useful for uncertainty calibration. Our goal is therefore to train them to induce low uncertainty over the data (inliers) and high uncertainty on outliers—the so-called out-of-distribution (OOD) data. Specifically, this can be done by minimizing the output variance over inliers while maximizing it over outliers. Note that using variance makes sense in both the regression and classification cases: In the former, this objective directly maintains narrow error bars near the data while widen those far-away from them—cf. Figure 1 (c, top). Meanwhile, in classifications, variances over function outputs directly impact predictive confidences, as we have noted in the discussion of (4)—higher variance implies lower confidence.\nThus, following the contemporary technique from non-Bayesian robust learning literature (Hendrycks et al., 2019; Hein et al., 2019, etc.), we construct the following loss. Let f : Rn×Rd → Rk be an L-layer neural network with a MAP-trained parameters θMAP and let f̃ : Rn × Rd̃ → Rk along with θ̃MAP be obtained by adding LULA units. Denoting the dataset sampled i.i.d. from the data distribution as Din and that from some outlier distribution as Dout, we define\nLLULA(θ̃MAP) := 1 |Din| ∑ xin∈Din ν(f̃(xin); θ̃MAP)− 1 |Dout| ∑ xout∈Dout ν(f̃(xout); θ̃MAP) , (7)\nwhere ν(f̃(x); θ̃MAP) is the total variance over the k components of the network output f̃1(x; θ̃), . . . , f̃k(x; θ̃) under the Laplace-approximated posterior p(θ̃ | D) ≈ N (θ̃ | θ̃MAP, Σ̃(θ̃MAP)), which can be approximated via an S-samples MC-integral\nν(f̃(x);θ̃MAP) := k∑ i=1 varp(θ̃|D) f̃i(x; θ̃)\n≈ k∑ i=1\n( 1\nS S∑ s=1 f̃i(x; θ̃s) 2\n) − ( 1\nS S∑ s=1 f̃i(x; θ̃s)\n)2 ; with θ̃s ∼ p(θ̃ | D).\n(8)\nHere, for clarity, we have shown explicitly the dependency of Σ̃ on θ̃MAP. Note that we can simply set Din to be the training set of a dataset. Furthermore, throughout this paper, we use the simple OOD dataset proposed by Hein et al. (2019) which is constructed via permutation, blurring, and contrast rescaling of the in-distribution datasetDin. As we shall show in Section 5, this artificial, uninformative OOD dataset is sufficient for obtaining good results across benchmark problems. More complex dataset as Dout might improve LULA’s performance further but is not strictly necessary. Since our aim is to solely improve the uncertainty, we must maintain the structure of all weights and biases in θ̃MAP, in accordance to (6). This can simply be enforced via gradient masking: For all l = 1, . . . , L− 1, set the gradients of the blocks of W̃ (l) and b̃(l) not corresponding to Ŵ (l) and b̂(l), respectively, to zero. Furthermore, since the covariance matrix Σ̃(θ̃) of the Laplace-approximated posterior is a function of θ̃, it needs to be updated at every iteration during the optimization ofLLULA.\nAlgorithm 1 Training LULA units. Input:\nMAP-trained network f . Dataset D, OOD dataset Dout. Learning rate α. Number of epochs E. 1: Construct f̃ from f by following Section 3.1. 2: for i = 1, . . . , E do 3: p(θ̃ | D) ≈ N (θ̃ | θ̃MAP, Σ̃(θ̃MAP)) . Obtain a Laplace-approximated posterior of f̃ 4: Compute LLULA(θ̃MAP) via (7) using p(θ̃ | D), D, and Dout 5: g = ∇LLULA(θ̃MAP) 6: ĝ = mask gradient(g) . 
Zero out the derivatives not corresponding to θ̂ 7: θ̃MAP = θ̃MAP − αĝ 8: end for 9: p(θ̃ | D) ≈ N (θ̃ | θ̃MAP, Σ̃(θ̃MAP)) . Obtain the final Laplace approximation\n10: return f̃ and p(θ̃ | D)\nThe cost scales in the network’s depth and can thus be expensive. Inspired by recent findings that last-layer Bayesian methods are competitive to all-layer alternatives (Ober & Rasmussen, 2019), we thus consider a last-layer LA (Kristiadi et al., 2020) as a proxy: We apply a LA only at the last hidden layer and assume that the first L− 2 layers are learnable feature extractor. We use a diagonal last-layer Fisher matrix to approximate the last-layer Hessian. Note that backpropagation through this matrix does not pose a difficulty since modern deep learning libraries such as PyTorch and TensorFlow supports “double backprop” (backpropagation through a gradient) efficiently. Finally, the loss LLULA can be minimized using standard gradient-based optimizers. Refer to Algorithm 1 for a summary.\nLast but not least, the intuition of LULA training is as follows. By adding LULA units, we obtain an augmented version of the network’s loss landscape. The goal of LULA training is then to exploit the weight-space symmetry (i.e. different parameters but induce the same output) arisen from the construction as shown by Proposition 1 and pick one of these parameters that is symmetric to the original parameter but has “better” curvatures. Here, we define a “good curvature” in terms of the above objective. These curvatures, then, when used in a Laplace approximation, could yield better uncertainty estimates compared to the standard non-LULA-augmented Laplace approximations." }, { "heading": "4 RELATED WORK", "text": "While traditionally hyperparameter optimization in a LA requires re-training of the network—under the evidence framework (MacKay, 1992b) or empirical Bayes (Bernardo & Smith, 2009), tuning it in a post-hoc manner has increasingly becomes a common practice. Ritter et al. (2018a;b) tune the prior precision of a LA by maximizing the predictive log-likelihood. Kristiadi et al. (2020) extend this procedure by also using OOD data to better calibrate the uncertainty. However, they are limited in terms of flexibility since the prior precision of the LAs constitutes to just a single parameter. LULA can be seen as an extension of these approaches with greater flexibility and is complementary to them since LULA is independent to the prior precision.\nConfidence calibration via OOD data has achieved state-of-the-art performance in non-Bayesian outlier detection. Hendrycks et al. (2019); Hein et al. (2019); Meinke & Hein (2020) use OOD data to regularize the standard maximum-likelihood training. Malinin & Gales (2018; 2019) use OOD data to train probabilistic models based on the Dirichlet distribution. All these methods are non-Bayesian and non-post-hoc. Our work is thus orthogonal since we aim at improving a class of Bayesian models in a post-hoc manner. LULA can be seen as bringing the best of both worlds: Bayesian uncertainty that is tuned via the state-of-the-art non-Bayesian technique." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we focus on classification using the standard OOD benchmark problems. Supplementary results on regression are in Section C.2." 
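Before turning to the image-classification benchmarks, we give a rough sketch of the objective of Eqs. (7)–(8) and the masked update in lines 5–7 of Algorithm 1, assuming a diagonal last-layer posterior has already been fitted; `sample_forward`, `lula_masks`, and the other names are illustrative and not taken from the released code.

```python
import torch

def lula_loss(sample_forward, post_mean, post_std, x_in, x_out, n_samples=10):
    """Eq. (7): summed output variance on inliers minus that on outliers, with the
    variance of Eq. (8) estimated from S Monte-Carlo samples of a diagonal last-layer
    Laplace posterior. `sample_forward(x, theta)` must run the network with its
    last-layer parameters replaced by the sample `theta` (user-supplied)."""
    def total_variance(x):
        outs = torch.stack([
            sample_forward(x, post_mean + post_std * torch.randn_like(post_std))
            for _ in range(n_samples)])                   # (S, batch, k)
        return outs.var(dim=0).sum(dim=-1).mean()         # per-example summed variance, batch mean
    return total_variance(x_in) - total_variance(x_out)

def masked_step(params, lula_masks, lr=1e-2):
    """Algorithm 1, lines 5-7: after loss.backward(), zero out all gradients that do
    not belong to the free LULA parameters, then take a plain gradient step."""
    with torch.no_grad():
        for p, m in zip(params, lula_masks):
            if p.grad is not None:
                p.grad *= m            # keep only entries of W_hat / b_hat
                p -= lr * p.grad
```

In a full training loop, the posterior (`post_mean`, `post_std`) is refitted after every update, as in line 3 of Algorithm 1.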
}, { "heading": "5.1 IMAGE CLASSIFICATIONS", "text": "Here, we aim to show that LULA units and the proposed training procedure are (i) a significantly better method for tuning the uncertainty of a LA than previous methods and (ii) able to make a vanilla LA better than non-post-hoc (thus more expensive) UQ methods. For the purpose of (i), we compare\nLULA-augmented KFL (KFL+LULA) against the vanilla KFL with exact prior precision (KFL), KFL where the prior precision is found by maximizing validation log-likelihood (KFL+LL; Ritter et al., 2018b), and KFL with OOD training (KFL+OOD; Kristiadi et al., 2020). Moreover, for the purpose of (ii), we also compare our method with the following baselines, which have been found to yield strong results in UQ (Ovadia et al., 2019): Monte-Carlo dropout (MCD; Gal & Ghahramani, 2016) and deep ensemble (DE; Lakshminarayanan et al., 2017).\nWe use 5- and 8-layer CNNs for MNIST and CIFAR-10, SVHN, CIFAR-100, respectively. These networks achieve around 99%, 90%, and 50% accuracies for MNIST, CIFAR-10, and CIFAR-100, respectively. For MC-integration during the predictions, we use 10 posterior samples. We quantify the results using the standard metrics: the mean-maximum-confidence (MMC) and area-under-ROC (AUR) metrics (Hendrycks & Gimpel, 2017). All results reported are averages over ten prediction runs. Finally, we use standard test OOD datasets along with the “asymptotic” dataset introduced by Kristiadi et al. (2020) where random uniform noise images are scaled with a large number (5000).\nFor simplicity, we add LULA units and apply only at the last layer of the network. For each indistribution dataset, the training of the free parameters of LULA units is performed for a single epoch using Adam over the respective validation dataset. The number of these units is obtained via a grid-search over the set {64, 128, 256, 512}, balancing both the in- and out-distribution confidences (see Appendix B for the details).\nOOD Detection The results for MMC and AUR are shown in Table 1 and Table 2, respectively. First, we would like to direct the reader’s attention to the last four columns of the table. We can see that in general KFL+LULA performs the best among all LA tuning methods. These results validate the effectiveness of the additional flexibility given by LULA units and the proposed training procedure. Indeed, without this additional flexibility, OOD training on just the prior precision becomes less effective, as shown by the results of KFL+OOD. Finally, as we can see in the results on the Asymptotic OOD dataset, LULA makes KFL significantly better at mitigating overconfidence far away from the training data. Now, compared to the contemporary baselines (MCD, DE), we can see that the vanilla KFL yields somewhat worse results. Augmenting the base KFL with LULA makes it competitive to MCD and DE in general. Keep in mind that both KFL and LULA are post-hoc methods.\nDataset Shift We use the corrupted CIFAR-10 dataset (Hendrycks & Dietterich, 2019) for measuring the robustness of LULA-augmented LA to dataset shifts, following Ovadia et al. (2019). Note that dataset shift is a slightly different concept to OOD data: it concerns about small perturbations of the true data, while OOD data are data that do not come from the true data distribution. Intuitively, humans perceive that the data under a dataset shift lies near the true data while OOD data are farther away. We present the results in Table 3. 
Focusing first on the last four columns in the table, we see that LULA yields the best results compared other tuning methods for KFL. Furthermore, we see that KFL+LULA outperforms DE, which has been shown by Ovadia et al. (2019) to give the state-of-the-art results in terms of robustness to dataset shifts. Finally, while MCD achieve the best results in this experiment, considering its performance in the previous OOD experiment, we draw a conclusion that KFL+LULA provides a more consistent performance over different tasks.\nComparison with DPN Finally we compare KFL+LULA with the (non-Bayesian) Dirichlet prior network (DPN, Malinin & Gales, 2018) in the Rotated-MNIST benchmark (Ovadia et al., 2019)\n(Figure 3). We found that LULA makes the performance of KFL competitive to DPN. We stress that KFL and LULA are post-hoc methods, while DPN requires training from scratch." }, { "heading": "5.2 COST ANALYSIS", "text": "The cost of constructing a LULA network is negligible even for our deepest network: on both the 5- and 8-layer CNNs, the wall-clock time required (with a standard consumer GPU) to add additional LULA units is on average 0.01 seconds (over ten trials). For training, using the last-layer LA as a proxy of the true LA posterior, it took on average 7.25 seconds and 35 seconds for MNIST and CIFAR-10, SVHN, CIFAR-100, respectively. This tuning cost is cheap relative to the training time of the base network, which ranges between several minutes to more than an hour. We refer the reader to Table 9 (Appendix C) for the detail. All in all, LULA is not only effective, but also cost-efficient." }, { "heading": "6 CONCLUSION", "text": "We have proposed LULA units: hidden units that can be added to any pre-trained MAP network for the purpose of exclusively tuning the uncertainty of a Laplace approximation without affecting its predictive performance. They can be trained via an objective that depends on both inlier and outlier datasets to minimize (resp. maximize) the network’s output variance, bringing the state-ofthe-art technique from non-Bayesian robust learning literature to the Bayesian world. Even with very simple outlier dataset for training, we show in extensive experiments that LULA units provide more effective post-hoc uncertainty tuning for Laplace approximations and make their performance competitive to more expensive baselines which require re-training of the whole network." }, { "heading": "APPENDIX A PROOFS", "text": "Proposition 1. Let f : Rn × Rd → Rk be a MAP-trained L-layer network parametrized by θMAP. Suppose f̃ : Rn × Rd̃ → R and θ̃MAP ∈ Rd̃ are obtained via the previous construction. For any input x ∈ Rn, we have f̃(x; θ̃MAP) = f(x; θMAP).\nProof. Let x ∈ Rn be arbitrary. For each layer l = 1, . . . , L we denote the hidden units and preactivations of f̃ as h̃(l) and ã(l), respectively. We need to show that the output of f̃ , i.e. the last pre-activations ã(L), is equal to the last pre-activations a(L) of f .\nFor the first layer, we have that\nã(1) = W̃ (1)x+ b̃(1) =\n( W (1)\nŴ (1)\n) x+ ( b(1)\nb̂(1)\n) = ( W (1)x+ b(1)\nŴ (1)x+ b̂(1)\n) =: ( a(1)\nâ(1)\n) .\nFor every layer l = 1, . . . , L− 1, we denote the hidden units as the block vector\nh̃(l) = ( ϕ(a(l)) ϕ(â(l)) ) = ( h(l) ĥ(l) ) .\nNow, for the intermediate layer l = 2, . . . 
, L− 1, we observe that\nã(l) = W̃ (l)h̃(l−1) + b̃(l) =\n( W (l) 0\nŴ (l) 1 Ŵ (l) 2\n)( h(l−1)\nĥ(l−1)\n) + ( b(l)\nb̂(l) ) = ( W (l)h(l−1) + 0 + b(l)\nŴ (l) 1 h (l−1) + Ŵ (l) 2 ĥ (l−1) + b̂(l)\n) =: ( a(l)\nâ(l)\n) .\nFinally, for the last-layer, we get\nã(L) = W̃ (L)x+ b̃(L) = ( W (L) 0 )(h(L−1) ĥ(L−1) ) + b(L) = W (L)h(L−1) + 0 + b(L) = a(L) .\nThis ends the proof.\nProposition 2. Suppose f : Rn × Rd → R is a real-valued network and f̃ is as constructed above. Suppose further that diagonal Laplace-approximated posteriors N (θ | θMAP,diag(σ)), N (θ̃ | θ̃,diag(σ̃)) are employed. Using the linearization (5), for any input x ∈ Rn, the variance over the output f̃(x; θ̃) is at least that of f(x; θ).\nProof. W.l.o.g. we arrange the parameters θ̃ := (θ>, θ̂>)> where θ̂ ∈ Rd̃−d contains the weights corresponding to the the additional LULA units. If g(x) is the gradient of the output f(x; θ) w.r.t. θ, then the gradient of f̃(x; θ̃) w.r.t. θ̃, say g̃(x), can be written as the concatenation (g(x)>, ĝ(x)>)>\nwhere ĝ(x) is the corresponding gradient of θ̂. Furthermore, diag(σ̃) has diagonal elements( σ11, . . . , σdd, σ̂11, . . . , σ̂d̃−d,d̃−d )> =: (σ>, σ̂>)> .\nHence we have\nṽ(x) = g̃(x)>diag(σ̃)g̃(x)\n= g(x)>diag(σ)g(x)︸ ︷︷ ︸ =v(x) +ĝ(x)>diag(σ̂)ĝ(x)\n≥ v(x) ,\nsince diag(σ̂) is positive-definite.\nAlgorithm 2 Adding LULA units. Input:\nL-layer net with a MAP estimate θMAP = (W (l) MAP, b (l) MAP) L l=1. Sequence of non-negative integers\n(ml) L l=1.\n1: for l = 1, . . . , L− 1 do 2: vec Ŵ (l) ∼ p(vec Ŵ (l)) . Draw from a prior 3: b̂(l) ∼ p(̂b(l)) . Draw from a prior\n4: W̃ (l)MAP = W (l)MAP 0 Ŵ\n(l) 1 Ŵ (l) 2 . The zero submatrix 0 is of size nl ×ml−1 5: b̃(l)MAP := b(l)MAP b̂(l)\n 6: end for\n7: W̃ (L)MAP = (W (L) MAP, 0) . The zero submatrix is of size k ×mL−1 8: b̃(L)MAP = b (L) MAP 9: θ̃MAP = (W̃ (l) MAP, b̃ (l) MAP) L l=1\n10: return θ̃MAP" }, { "heading": "APPENDIX B IMPLEMENTATION", "text": "We summarize the augmentation of a network with LULA units in Algorithm 2. Note that the priors of the free parameters Ŵ (l), b̂(l) (lines 2 and 3) can be chosen as independent Gaussians—this reflects the standard procedure for initializing NNs’ parameters.\nWe train LULA units for a single epoch (since for each dataset, we have a large amount of training points) with learning rate 0.01. For each dataset, the number of the additional last-layer units mL is obtained via a grid search over the set {64, 128, 256, 512} =: ML, minimizing the absolute distance to the optimal MMC, i.e. 1 and 1/k for the in- and out-distribution validation set, respectively:\nmL = arg min m∈ML\n|1−MMCin(m)|+ |1/k −MMCout(m)| , (9)\nwhere MMCin and MMCout are the validation in- and out-distribution MMC of the Laplaceapproximated, trained-LULA, respectively." }, { "heading": "APPENDIX C ADDITIONAL EXPERIMENT RESULTS", "text": "C.1 TOY DATASET\nIn practice, one can simply set {Ŵ (l), b̂(l)}L−1l=1 randomly given a prior, e.g. the standard Gaussian (see also Algorithm 2). To validate this practice, we show the vanilla Laplace, untrained LULA,\nand fine-tuned LULA’s confidence in Figure 4. Even when set randomly from the standard Gaussian prior, LULA weights provide a significant improvement over the vanilla Laplace. Moreover, training them yields even better predictive confidence estimates. 
In particular, far from the data, the confidence becomes even lower, while still maintaining high confidence in the data regions.\nC.2 UCI REGRESSION\nTo validate the performance of LULA in regressions, we employ a subset of the UCI regression benchmark datasets. Following previous works, the network architecture used here is a singlehidden-layer ReLU network with 50 hidden units. The data are standardized to have zero mean and unit variance. We use 50 LULA units and optimize them for 40 epochs using OOD data sampled uniformly from [−10, 10]n. For MCD, KFL, and KFL+LULA, each prediction is done via MCintegration with 100 samples. For the evaluation of each dataset, we use a 60-20-20 train-validationtest split. We repeat each train-test process 10 times and take the average.\nIn Table 5 we report the average predictive standard deviation for each dataset. Note that this metric is the direct generalization of the 1D error bar in Figure 1 (top) to multi dimension. The outliers are sampled uniformly from [−10, 10]n. Note that since the inlier data are centered around the origin and have unit variance, they lie approximately in a Euclidean ball with radius 2. Therefore, these outliers are very far away from them. Thus, naturally, high uncertainty values over these outliers are desirable. Uncertainties over the test sets are generally low for all methods, although KFL+LULA has slightly higher uncertainties compared to the base KFL. However, KFL+LULA yield much higher uncertainties over outliers across all datasets, significantly more than the baselines. Moreover, in Table 4, we show that KFL+LULA maintains the predictive performance of the base KFL. Altogether, they imply that KFL+LULA can detect outliers better than other methods without costing the predictive performance.\nC.3 IMAGE CLASSIFICATION\nIn Table 6 we present the sensitivity analysis of confidences under a Laplace approximation w.r.t. the number of additional LULA units. Generally, we found that small number of additional LULA units,\ne.g. 32 and 64, is optimal. It is clear that increasing the number of LULA units decreases both the in- and out-distribution confidences. In the case of larger networks, we found that larger values (e.g. 512 in CIFAR-10) make the Hessian badly conditioned, resulting in numerical instability during its inversion. One might be able to alleviate this issue by additionally tuning the prior precision hyperparameter of the Laplace approximation (as in (Ritter et al., 2018b; Kristiadi et al., 2020)), which corresponds to varying the strength of the diagonal correction of the Hessian. However, we emphasize that even with small amounts of additional LULA units, we can already improve vanilla Laplace approximations significantly, as shown in the main text (Section 5.1).\nWe present the predictive performances of all methods in Table 7. LULA achieves similar accuracies to the base MAP and KFL baselines. Differences in their exact values likely due to various approximations used (e.g. MC-integral). In the case of CIFAR-100, we found that MAP underperforms compared to MCD and DE. This might be because of overfitting, since only weight decay is used for regularization, in contrast to MCD where dropout is used on top of weight decay. Due to MAP’s underperformance, LULA also underperform. However, we stress that whenever the base MAP model performs well, by construction LULA will also perform well.\nAs a supplement we show the performance of KFL+LULA against DPN in OOD detection on MNIST (Table 8). 
We found that KFL+LULA performance on OOD detection are competitive or better than DPN.\nC.4 COMPUTATIONAL COST\nTo supplement the cost analysis in the main text, we show the wall-clock times required for the construction and training of LULA units in Table 9.\nC.5 DEEPER NETWORKS\nWe also asses the performance of LULA in larger network. We use a 20-layer CNN on CIFAR10, SVHN, and CIFAR-100. Both the KFL and LULA are applied only at the last-layer of the network. The results, in terms of MMC, expected calibration error (ECE), and AUR, are presented in Table 10 and Table 11. We observe that LULA is the best method for uncertainty tuning in LA: It makes KFL better calibrated in both in- and out-distribution settings. Moreover, the LULA-imbued KFL is competitive to DE, which has been shown by Ovadia et al. (2019) to be the best Bayesian method for uncertainty quantification. Note that, KFL+LULA is a post-hoc method and thus can be applied to any pre-trained network. In contrast, DE requires training multiple networks (usually 5) from scratch which can be very expensive.\nWe additionally show the performance of LULA when applied on top of a KFL-approximated DenseNet-121 (Huang et al., 2017) in Tables 12 and 13. LULA generally outperforms previous uncertainty tuning methods for LA and is competitive to DE. However, we observe in SVHN that LULA do not improve KFL significantly. This issue is due to the usage the Smooth Noise dataset, which has already assigned low confidence in this case, for training LULA. Thus, we re-train LULA with the Uniform Noise dataset and present the result in Table 14. We show that using this dataset, we obtain better OOD calibration performance, outperforming DE." } ]
2020
null
SP:2a2368b5bc6b59f66af75ea37f4cbc19c8fcf50f
[ "In this paper, the authors studied the possibility of sparsity exploration in Recurrent Neural Networks (RNNs) training. The main contributions include two parts: (1) Selfish-RNN training algorithm in Section 3.1 (2) SNT-ASGD optimizer in Section 3.2. The key idea of the Selfish-RNN training algorithm is a non-uniform redistribution across cell weights for better regularization. The authors mentioned previous sparse training techniques mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs) rather than RNNs. This claim seems to be doubtful because one-time SVD + fine-tuning usually works very well for most RNN training applications in the industry." ]
Sparse neural networks have been widely applied to reduce the necessary resource requirements to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a large and dense network (sparse-to-sparse), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in Recurrent Neural Networks (RNNs) setting. In this paper, we propose an approach to train sparse RNNs with a fixed parameter count in one single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell weights for a better regularization. Further, we introduce SNT-ASGD, a variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, even better than dense model results, with various types of RNNs on Penn TreeBank and Wikitext-2 datasets.
[]
[ { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rui Dai", "Lefei Li", "Wenjian Yu" ], "title": "Fast training and model compression of gated rnns via singular value decomposition", "venue": "In 2018 International Joint Conference on Neural Networks (IJCNN),", "year": 2018 }, { "authors": [ "Tim Dettmers", "Luke Zettlemoyer" ], "title": "Sparse networks from scratch: Faster training without losing performance", "venue": "arXiv preprint arXiv:1907.04840,", "year": 2019 }, { "authors": [ "Jeffrey L Elman" ], "title": "Finding structure in time", "venue": "Cognitive science,", "year": 1990 }, { "authors": [ "Utku Evci", "Trevor Gale", "Jacob Menick", "Pablo Samuel Castro", "Erich Elsen" ], "title": "Rigging the lottery: Making all tickets winners", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "A theoretically grounded application of dropout in recurrent neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Edouard Grave", "Armand Joulin", "Nicolas Usunier" ], "title": "Improving neural language models with a continuous cache", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Scott Gray", "Alec Radford", "Diederik P Kingma" ], "title": "Gpu kernels for block-sparse weights", "venue": "arXiv preprint arXiv:1711.09224,", "year": 2017 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Lynette Hirschman", "Marc Light", "Eric Breck", "John D Burger" ], "title": "Deep read: A reading comprehension system", "venue": "In Proceedings of the 37th annual meeting of the Association for Computational Linguistics,", "year": 1999 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Hakan Inan", "Khashayar Khosravi", "Richard Socher" ], "title": "Tying word vectors and word classifiers: A loss framework for language modeling", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Max Jaderberg", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Speeding up convolutional neural networks with low rank expansions", "venue": "arXiv preprint arXiv:1405.3866,", "year": 2014 }, { "authors": [ "Steven A Janowsky" ], "title": "Pruning versus clipping in neural networks", "venue": "Physical Review A,", 
"year": 1989 }, { "authors": [ "Nal Kalchbrenner", "Phil Blunsom" ], "title": "Recurrent continuous translation models", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Nal Kalchbrenner", "Erich Elsen", "Karen Simonyan", "Seb Noury", "Norman Casagrande", "Edward Lockhart", "Florian Stimberg", "Aaron van den Oord", "Sander Dieleman", "Koray Kavukcuoglu" ], "title": "Efficient neural audio synthesis", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jeremy Kepner", "Ryan Robinett" ], "title": "Radix-net: Structured sparse matrices for deep neural networks", "venue": "IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW),", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ben Krause", "Emmanuel Kahembwe", "Iain Murray", "Steve Renals" ], "title": "Dynamic evaluation of neural sequence models", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip Torr" ], "title": "SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Stephen Gould", "Philip H.S. Torr" ], "title": "A signal propagation perspective for pruning neural networks at initialization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yixuan Li", "Jason Yosinski", "Jeff Clune", "Hod Lipson", "John E Hopcroft" ], "title": "Convergent learning: Do different neural networks learn the same representations", "venue": "Iclr,", "year": 2016 }, { "authors": [ "Junjie Liu", "Zhe XU", "Runbin SHI", "Ray C.C. Cheung", "Hayden K.H. 
So" ], "title": "Dynamic sparse training: Find efficient sparse network from scratch with trainable masked layers", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Shiwei Liu", "Decebal Constantin Mocanu", "Mykola Pechenizkiy" ], "title": "Intrinsically sparse long shortterm memory networks", "venue": "arXiv preprint arXiv:1901.09208,", "year": 2019 }, { "authors": [ "Shiwei Liu", "Decebal Constantin Mocanu", "Yulong Pei", "Mykola Pechenizkiy" ], "title": "Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware", "venue": "Neural Computing and Applications,", "year": 2020 }, { "authors": [ "Shiwei Liu", "TT van der Lee", "Anil Yaman", "Zahra Atashgahi", "D Ferrar", "Ghada Sokar", "Mykola Pechenizkiy", "DC Mocanu" ], "title": "Topological insights into sparse neural networks", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2020 }, { "authors": [ "Christos Louizos", "Karen Ullrich", "Max Welling" ], "title": "Bayesian compression for deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P Kingma" ], "title": "Learning sparse neural networks through l 0 regularization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Rongrong Ma", "Jianyu Miao", "Lingfeng Niu", "Peng Zhang" ], "title": "Transformed l1 regularization for learning sparse deep neural networks", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Mitchell Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of english: The penn treebank", "venue": null, "year": 1993 }, { "authors": [ "Gábor Melis", "Chris Dyer", "Phil Blunsom" ], "title": "On the state of the art of evaluation in neural language models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Regularizing and optimizing LSTM language models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tomáš Mikolov", "Martin Karafiát", "Lukáš Burget", "Jan Černockỳ", "Sanjeev Khudanpur" ], "title": "Recurrent neural network based language model", "venue": "In Eleventh annual conference of the international speech communication association,", "year": 2010 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Phuong H. 
Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "A topological insight into restricted boltzmann machines", "venue": "Machine Learning,", "year": 2016 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Reza Moradi", "Reza Berangi", "Behrouz Minaei" ], "title": "Sparsemaps: convolutional networks with sparse feature maps for tiny image classification", "venue": "Expert Systems with Applications,", "year": 2019 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michael C Mozer", "Paul Smolensky" ], "title": "Using relevance to reduce network size automatically", "venue": "Connection Science,", "year": 1989 }, { "authors": [ "Sharan Narang", "Erich Elsen", "Gregory Diamos", "Shubho Sengupta" ], "title": "Exploring sparsity in recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Boris T Polyak", "Anatoli B Juditsky" ], "title": "Acceleration of stochastic approximation by averaging", "venue": "SIAM Journal on Control and Optimization,", "year": 1992 }, { "authors": [ "Ameya Prabhu", "Girish Varma", "Anoop Namboodiri" ], "title": "Deep expander networks: Efficient deep networks from graph theory", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Alberto Sanfeliu", "King-Sun Fu" ], "title": "A distance measure between attributed relational graphs for pattern recognition", "venue": "IEEE transactions on systems, man, and cybernetics,", "year": 1983 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": null, "year": 2014 }, { "authors": [ "Yikang Shen", "Shawn Tan", "Alessandro Sordoni", "Aaron Courville" ], "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Shuohang Wang", "Jing Jiang" ], "title": "Machine comprehension using match-lstm and answer pointer", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Wei Wen", "Yuxiong He", "Samyam Rajbhandari", "Minjia Zhang", "Wenhan Wang", "Fang Liu", "Bin Hu", "Yiran Chen", "Hai Li" ], "title": "Learning intrinsic sparse structures within long short-term memory", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jie Yang", "Jun Ma" ], "title": "Feed-forward neural network training using sparse 
representation", "venue": "Expert Systems with Applications,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Ruslan Salakhutdinov", "William W. Cohen" ], "title": "Breaking the softmax bottleneck: A high-rank RNN language model", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever", "Oriol Vinyals" ], "title": "Recurrent neural network regularization", "venue": "arXiv preprint arXiv:1409.2329,", "year": 2014 }, { "authors": [ "Hattie Zhou", "Janice Lan", "Rosanne Liu", "Jason Yosinski" ], "title": "Deconstructing lottery tickets: Zeros, signs, and the supermask", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 }, { "authors": [ "Julian Georg Zilly", "Rupesh Kumar Srivastava", "Jan Koutnı́k", "Jürgen Schmidhuber" ], "title": "Recurrent highway networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Shen" ], "title": "ON-LSTM can learn the latent tree structure of natural language by learning the order of neurons. For a fair comparison, we use exactly the same model hyper-parameters and regularization used in ON-LSTM. We set the sparsity of each layer to 55% and the initial removing rate to 0.5. We train the model for 1000 epochs and rerun SNT-ASGD", "venue": null, "year": 2019 }, { "authors": [ "Evci" ], "title": "2020) that while state-of-the-art sparse training method (RigL) achieves promising performance in terms of CNNs, it fails to match the performance of pruning in RNNs. Given the fact that magnitude pruning has become a widely-used and strong baseline for model compression, we also report a comparison between Selfish-RNN and iterative magnitude", "venue": "I COMPARISON BETWEEN SELFISH-RNN", "year": 2020 }, { "authors": [ "Moradi" ], "title": "The results of our paper provide motivation for new types of hardware accelerators and libraries with better support for sparse neural networks. Nevertheless, many recent works have been developed to accelerate sparse neural networks including Gray et al", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blunsom, 2013), question answering (Hirschman et al., 1999; Wang & Jiang, 2017), etc. As a standard task to evaluate models’ ability to capture long-range context, language modeling has witnessed great progress in RNNs. Mikolov et al. (2010) demonstrated that RNNs perform much better than backoff models for language modeling. After that, various novel RNN architectures such as Recurrent Highway Networks (RHNs) (Zilly et al., 2017), Pointer Sentinel Mixture Models (Merity et al., 2017), Neural Cache Model (Grave et al., 2017), Mixture of Softmaxes (AWD-LSTM-MoS) (Yang et al., 2018), ordered neurons LSTM (ON-LSTM) (Shen et al., 2019), and effective regularization like variational dropout (Gal & Ghahramani, 2016), weight tying (Inan et al., 2017), DropConnect (Merity et al., 2018) have been proposed to significantly improve the performance of RNNs.\nAt the same time, as the performance of deep neural networks (DNNs) improves, the resources required to train and deploy deep models are becoming prohibitively large. To tackle this problem, various dense-to-sparse methods have been developed, including but not limited to pruning (LeCun et al., 1990; Han et al., 2015), Bayesian methods (Louizos et al., 2017a; Molchanov et al., 2017), distillation (Hinton et al., 2015), L1 Regularization (Wen et al., 2018), and low-rank decomposition (Jaderberg et al., 2014). Given a pre-trained model, these methods work effectively to accelerate the inference. Recently, some dynamic sparse training (DST) approaches (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) have been proposed to bring efficiency for both, the training phase and the inference phase by dynamically changing the sparse connectivity during training. However, previous approaches are mainly for CNNs. For RNNs, the long-term dependencies and repetitive usage of recurrent cells make them more difficult to be sparsified (Kalchbrenner et al., 2018; Evci et al., 2020). More importantly, the state-of-the-art performance achieved by RNNs on language modeling is mainly associated with the optimizer, averaged stochastic gradient descent (ASGD) (Polyak & Juditsky, 1992), which is not compatible with the existing DST approaches. The above-mentioned problems heavily limit the performance\nof the off-the-shelf sparse training methods in the RNN field. For instance, while “The Rigged Lottery” (RigL) achieves state-of-the-art sparse training results with various CNNs, it fails to match the performance of the iterative pruning method in the RNN setting (Evci et al., 2020). In this paper, we introduce an algorithm to train sparse RNNs with a fixed number of computational costs throughout training. We abbreviate our sparse RNN training method as Selfish-RNN because our method encourages cell weights to obtain their parameters selfishly. The main contributions of this work are five-fold:\n• We propose an algorithm to train sparse RNNs from scratch with a fixed number of parameters. This advantage constrains the training costs to a fraction of the costs needed for training a dense model, allowing us to choose suitable sparsity levels for different types of training platforms. 
• We introduce SNT-ASGD, a sparse variant of the non-monotonically triggered averaged\nstochastic gradient descent optimizer, which overcomes the over-sparsified problem of the original NT-ASGD (Merity et al., 2018) caused by dynamic sparse training. • We demonstrate state-of-the-art sparse training performance with various RNN models,\nincluding stacked LSTMs (Zaremba et al., 2014), RHNs, ordered neurons LSTM (ONLSTM) on Penn TreeBank (PTB) dataset (Marcus et al., 1993) and AWD-LSTM-MoS on WikiText-2 dataset (Melis et al., 2018). • We present an approach to analyze the evolutionary trajectory of the sparse connectivity\noptimized by dynamic sparse training from the perspective of graph. With this approach, we show that there exist many good structural local optima (sparse sub-networks having equally good performance) in RNNs, which can be found in an efficient and robust manner. • Our analysis shows two surprising phenomena in the setting of RNNs contrary to CNNs:\n(1) random-based weight growth performs better than gradient-based weight growth, (2) uniform sparse distribution performs better than Erdős-Rényi (ER) sparse initialization. These results highlight the need to choose different sparse training methods for different architectures." }, { "heading": "2 RELATED WORK", "text": "Dense-to-Sparse. There are a large amount of works operating on a dense network to yield a sparse network. We divide them into three categories based on the training cost in terms of memory and computation. (1) Iterative Pruning and Retraining. To the best of our knowledge, pruning was first proposed by Janowsky (1989) and Mozer & Smolensky (1989) to yield a sparse network from a pre-trained network. Recently, Han et al. (2015) brought it back to people’s attention based on the idea of iterative pruning and retraining with modern architectures. Some recent works were proposed to further reduce the number of iterative retraining e.g., Narang et al. (2017); Zhu & Gupta (2017). Frankle & Carbin (2019) proposed the Lottery Ticket Hypothesis showing that the sub-networks (“winning tickets”) obtained via iterative pruning combined with their “lucky” initialization can outperform the dense networks. Zhou et al. (2019) discovered that the sign of their initialization is the crucial factor that\nmakes the “winning tickets” work. Our work shows that there exists a much more efficient and robust way to find those “winning ticketts” without any special initialization. The aforementioned methods require at least the same training cost as training a dense model, sometimes even more, as a pre-trained dense model is involved. We compare our method with state-of-the-art pruning method proposed by Zhu & Gupta (2017) in Appendix I. With fewer training costs, our method is able to discover sparse networks that can achieve lower test perplexity than iterative pruning. (2) Learning Sparsity During Training. There are also some works attempting to learn the sparse networks during training. Louizos et al. (2017b) and Wen et al. (2018) are examples that gradually enforce the network weights to zero via L0 and L1 regularization, respectively. Dai et al. (2018) proposed a singular value decomposition (SVD) based method to accelerate the training process for LSTMs. Liu et al. (2020a) proposed Dynamic Sparse Training to discover sparse structure by learning binary masks associated with network weights. However, these methods start with a fully dense network, and hence are not memory efficient. (3) One-Shot Pruning. 
Some works aim to find sparse neural networks by pruning once prior to the main training phase based on some salience criteria, such as connection sensitivity (Lee et al., 2019), signal propagation, (Lee et al., 2020), and gradient signal preservation (Wang et al., 2020). These techniques can find sparse networks before the standard training, but at least one iteration of dense model needs to be trained to identify the sparse sub-networks, and therefore the pruning process is not applicable to memory-limited scenarios. Additionally, one-shot pruning generally cannot match the performance of dynamic sparse training, especially at extreme sparsity levels (Wang et al., 2020).\nSparse-to-Sparse. Recently, many works have emerged to train intrinsically sparse neural networks from scratch to obtain efficiency both for training and inference. (1) Static Sparse Training. Mocanu et al. (2016) introduced intrinsically sparse networks by exploring the scale-free and small-world topological properties in Restricted Boltzmann Machines. Later, some works expand static sparse training into CNNs based on expander graphs and show comparable performance (Prabhu et al., 2018; Kepner & Robinett, 2019). (2) Dynamic Sparse Training. Mocanu et al. (2018) introduced Sparse Evolutionary Training (SET) which initializes a sparse network and dynamically changes the sparse connectivity by a simple remove-and-regrow strategy. At the same time, DeepR (Bellec et al., 2018) trained very sparse networks by sampling the sparse connectivity based on a Bayesian posterior. The iterative configuration updates have been proved to converge to a stationary distribution. Mostafa & Wang (2019) introduced Dynamic Sparse Reparameterization (DSR) to train sparse neural networks while dynamically adjusting the sparsity levels of different layers. Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019) improved the sparse training performance by growing free weights according to their momentum. It requires extra computation and memory to update the dense momentum tensor for each iteration. Further, Evci et al. (2020) introduced RigL which activates weights with the highest magnitude gradients. This approach grows weights expected to receive gradients with high magnitudes, while amortizing a large number of memory requirements and computational cost caused by momentum. Due to the inherent limitations of deep learning software and hardware libraries, all of the above works simulate sparsity using a binary mask over weights. More recently, Liu et al. (2020b) proved the potentials of DST by developing for the first time an independent software framework to train very large truly sparse MLPs trained with SET. However, all these works mainly focus on CNNs and MLPs, and they are not designed to match state-of-the-art performance for RNNs.\nWe summarize the properties of all approaches compared in this paper in Table 1. Same with SET, our method can guarantee Backward Sparse, which does not require any extra information from the removed weights. Additionally, we discuss the differences among SET, pruning techniques, and our method in Appendix H." }, { "heading": "3 SPARSE RNN TRAINING", "text": "Our sparse RNN training method is illustrated in Figure 1 with LSTM as a specific case of RNNs. Note that our method can be easily applied to any other RNN variants. The only difference is the number of cell weights. 
Before training, we randomly initialize each layer at the same sparsity (the fraction of zero-valued weights), so that the training costs are proportional to the dense model at the beginning. To explore more sparse structures, while to maintain a fixed sparsity level, we need to optimize the sparse connectivity together with the corresponding weights (a combinatorial optimization problem). We apply dynamic sparse connectivity and SNT-ASGD to handle this combinatorial optimization problem. The pseudocode of the full training procedure of our algorithm is shown in Algorithm 1." }, { "heading": "3.1 DYNAMIC SPARSE CONNECTIVITY", "text": "We consider uniform sparse initialization, magnitude weight removal, random weight growth, cell weight redistribution together as main components of our dynamic sparse connectivity method, which can ensure a fixed number of parameters and a clear sparse backward pass, as discussed next. Notation. Given a dataset of N samples D = {(xi, yi)}Ni=1 and a network f(x; θ) parameterized by θ. We train the network to minimize the loss function ∑N i=1 L(f(xi; θ), yi). The basic mechanism of sparse neural networks is to use a fraction of parameters to reparameterize the whole network, while preserving the performance as much as possible. Hence, a sparse neural network can be denoted as fs(x; θs) with a sparsity level S = 1− ‖θs‖0‖θ‖0 , where ‖ · ‖0 is the `0-norm. Uniform Sparse Initialization. First, the network is uniformly initialized with a sparse distribution in which the sparsity level of each layer is the same S. More precisely, the network is initialized by:\nθs = θ M (1)\nwhere θ is a dense weight tensor initialized in a standard way; M is a binary tensor, in which nonzero elements are sampled uniformly based on the sparsity S; refers to the Hadamard product. Magnitude Weight Removal. For non-RNN layers, we use magnitude weight removal followed by random weight growth to update the sparse connectivity. We remove a fraction p of weights with the smallest magnitude after each training epoch. This step is performed by changing the binary tensor M , as follows: M =M − P (2) where P is a binary tensor with the same shape as M , in which the nonzero elements have the same indices with the top-p smallest-magnitude nonzero weights in θs, with ||P ||0 = p||M ||0. Random Weight Growth. To keep a fixed parameter count, we randomly grow the same number of weights immediately after weight removal, by:\nM =M +R (3)\nwhere R is a binary tensor where the nonzero elements are randomly located at the position of zero elements of M . We choose random growth to get rid of using any information of the non-existing weights, so that both feedforward and backpropagation are completely sparse. It is more desirable to have such pure sparse structures as it enables the possibility of conceiving in the future specialized hardware accelerators for sparse neural networks. Besides, our analysis of growth methods in Section 4.3 shows that random growth can explore more sparse structural degrees of freedom than gradient growth, which might be crucial to the sparse training. Cell Weight Redistribution. Our dynamic sparse connectivity differs from previous methods mainly in cell weight redistribution. For RNN layers, the naive approach is to sparsify all cell weight tensors independently at the same sparsity, as shown in Liu et al. (2019) which is a straightforward extension of applying SET to RNNs. 
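As a concrete illustration of the removal and growth steps in Eqs. (2)–(3), the following minimal sketch performs one connectivity update for a single non-RNN layer. It is not the authors' implementation: the function and variable names are ours, and plain NumPy is used in place of a deep learning framework.

```python
import numpy as np

def update_connectivity(theta, mask, p, rng):
    """One sparse connectivity update for a non-RNN layer (sketch of Eqs. 2-3).

    theta: dense weight array; mask: binary array M of the same shape;
    p: fraction of existing weights to remove; rng: a numpy random Generator,
    e.g. np.random.default_rng().
    """
    theta_s = theta * mask                                    # Eq. (1)
    n = int(p * mask.sum())

    # Magnitude removal (Eq. 2): zero out the n smallest-magnitude existing weights.
    alive = np.flatnonzero(mask)
    drop = alive[np.argsort(np.abs(theta_s).ravel()[alive])[:n]]
    mask.flat[drop] = 0

    # Random growth (Eq. 3): activate n positions chosen uniformly among the zeros of M.
    empty = np.flatnonzero(mask == 0)
    grow = rng.choice(empty, size=n, replace=False)
    mask.flat[grow] = 1
    theta.flat[grow] = 0.0    # newly activated weights are initialized to zero
    return theta, mask
```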
Essentially, it is more desirable to redistribute new parameters to cell weight tensors dependently, as all cell weight tensors collaborate together to regulate information. Intuitively, we redistribute new parameters in a way that weight tensors containing more largemagnitude weights should have more parameters. Large-magnitude weights indicate that their loss\ngradients are large and few oscillations occur. Thus, weight tensors with more large-magnitude connections should be reallocated with more parameters to accelerate training. Concretely, for each RNN layer l, we remove weights dependently given by an ascending sort:\nSortp(|θl1|, |θl2|, .., |θlt|) (4)\nwhere {θl1, θl2, ..., θlt} are all weight tensors within each cell, and Sortp returns p indices of the smallest-magnitude weights. After weight removal, new parameters are uniformly grown to each weight tensor to implement our cell weight redistribution gradually. We also tried other approaches including the mean value of the magnitude of nonzero weights or the mean value of the gradient magnitude of nonzero weights, but our approach achieves the best performance, as shown in Appendix B. We further demonstrate the final sparsity breakdown of cell weights learned by our method in Appendix M and observe that weights of forget gates are consistently sparser than other weights for all models. Note that redistributing parameters across cell weight tensors does not change the FLOP counting, as the sparsity of each layer is not changed. In contrast, the across-layer weight redistribution used by DSR and SNFS affects the sparsity level of each layer. As a result, it will change the number of floating-point operations (FLOPs).\nSimilar with SNFS, We also decay the removing rate p to zero with a cosine annealing. We further use Eq. (1) to enforce the sparse structure before the forward pass and after the backward pass, so that the zero-valued weights will not contribute to the loss. And all the newly activated weights are initialized to zero." }, { "heading": "3.2 SPARSE NON-MONOTONICALLY TRIGGERED ASGD", "text": "Non-monotonically Triggered ASGD (NT-ASGD) has been shown to achieve surprising performance with various RNNs (Merity et al., 2018; Yang et al., 2018; Shen et al., 2019). However, it becomes less appealing for sparse RNNs training. Unlike dense networks in which every parameter in the model is updated at each iteration, for sparse networks, the zero-valued weights remain zero when they are not activated. Once these zero-valued weights are activated, the original averaging operation of standard NT-ASGD will immediately bring them close to zero. Thereby, after the averaging operation is triggered, the number of valid weights will decrease sharply as shown in Figure 2. To alleviate this problem, we introduce SNT-ASGD as following:\nw̃i =\n{ 0 if mi = 0,∀i,∑K\nt=Ti wi,t (K−Ti+1) if mi = 1,∀i. (5)\nwhere w̃i is the value returned by SNT-ASGD for weight wi; wi,t represents the actual value of weight wi at the tth iteration; mi = 1 if the weight wi exists and mi = 0 means that the weight wi does not exist; Ti is the iteration in which the weight wi grows most recently; and K is the total\nnumber of iterations. We demonstrate the effectiveness of SNT-ASGD in Figure 2. At the beginning, trained with SGD, the number of weights with high magnitude increases fast. 
However, the trend starts to descend significantly once the optimization switches to NT-ASGD at the 80th epoch, whereas the trend of SNT-ASGD continues to rise after a small drop caused by the averaging operation.\nTo better understand how proposed components, cell weight redistribution and SNT-ASGD, improve the sparse RNN training performance, we further conduct an ablation study in Appendix A. It is clear to see that both of them lead to significant performance improvement." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We evaluate Selfish-RNN with various models including stacked LSTMs, RHNs, ON-LSTM on the Penn TreeBank dataset and AWD-LSTM-MoS on the WikiText-2 dataset. The performance of Selfish-RNN is compared with 5 state-of-the-art sparse inducing techniques, including Intrinsic Sparse Structures (ISS) (Wen et al., 2018), SET, DSR, SNFS, and RigL. ISS is a method to explore sparsity inside RNNs by using group Lasso regularization. We choose Adam (Kingma & Ba, 2014) optimizer for SET, DSR, SNFS, and RigL. We also evaluate our methods with two state-of-the-art RNN models, ON-LSTM on PTB and AWD-LSTM-MoS on Wikitext-2, as reported in Appendix D and Appendix E, respectively." }, { "heading": "4.1 STACKED LSTMS", "text": "As introduced by Zaremba et al. (2014), stacked LSTMs (large) is a two-layer LSTM model with 1500 hidden units for each LSTM layer. We choose the same sparsity as ISS, 67% and 62%. We empirically found that 0.7 is a safe choice for the removing rate of stacked LSTMs. The clip norm is set to 0.25 and all models are trained for 100 epochs.\nResults are shown in the left side of Table 2. To evaluate our sparse training method fairly, we also provide a new dense baseline trained with the standard NT-ASGD, achieving 6 lower test perplexity than the widely-used baseline. We also test whether a small dense network and a static sparse network\nwith the same number of parameters as Selfish-RNN can match the performance of Selfish-RNN. We train a dense stacked LSTMs with 700 hidden units, named as “Small”. In line with the previous studies (Mocanu et al., 2018; Mostafa & Wang, 2019; Evci et al., 2020), both static sparse networks and the small-dense network fail to match the performance of Selfish-RNN. Training a static sparse network from scratch with uniform distribution performs better than the one with ER distribution. Trained with Adam, all sparse training techniques fail to match the performance of ISS and dense models. Models trained with SNT-ASGD obtain substantially lower perplexity, and Selfish-RNN achieves the lowest one, even better than the new dense baseline with much fewer training costs.\nTo understand better the effect of different optimizers on different DST methods, we report the performance of all DST methods trained with Adam, momentum SGD, and SNT-ASGD. The learning rate of Adam is set as 0.001. The learning rate of momentum SGD is 2 decreased by a factor of 1.33 once the loss fails to decrease and the momentum coefficient is 0.9. The weight decay is set as 1.2e-6 for all optimizers. For SNFS (SNT-ASGD), we replace momentum of weights with their gradients, as SNT-ASGD does not involve any momentum terms. We use the same hyperparameters for all DST methods. The results are shown in Table 3. It is clear that SNT-ASGD brings significant perplexity improvements to all sparse training techniques. This further stands as empirical evidence that SNT-ASGD is crucial to improve the sparse training performance in the RNN setting. 
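To make Eq. (5) concrete, the sketch below keeps, for every weight, a running sum of its values since the iteration T_i at which it most recently grew (or since averaging was triggered). This is our own simplified NumPy rendering of the rule, not the released code; the class and method names, and the choice to reset statistics on growth, reflect our reading of Eq. (5).

```python
import numpy as np

class SNTASGDAverage:
    """Sketch of the SNT-ASGD averaging rule (Eq. 5) for one weight tensor."""

    def __init__(self, shape):
        self.total = np.zeros(shape)   # running sum of w_{i,t} from T_i to K
        self.count = np.zeros(shape)   # number of accumulated iterations, K - T_i + 1

    def accumulate(self, w, mask):
        """Called once per iteration (after the gradient step) while averaging is active."""
        self.total += w * mask
        self.count += mask

    def reset_grown(self, grown):
        """Restart the statistics of weights that were just (re)activated, so their
        average starts at their growth iteration T_i rather than at the trigger point."""
        self.total[grown == 1] = 0.0
        self.count[grown == 1] = 0.0

    def averaged(self, mask):
        """w_tilde of Eq. (5): averaged value for existing weights, zero otherwise."""
        return np.where((mask == 1) & (self.count > 0),
                        self.total / np.maximum(self.count, 1.0), 0.0)
```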
Moreover, compared with other DST methods, Selfish-RNN is quite robust to the choice of optimizers due to its simple scheme to update sparse connectivity. Advanced strategies such as across-layer weight redistribution used in DSR and SNFS, gradient-based weight growth used in RigL and SNFS heavily depend on optimizers. They might work decently for some optimization methods but may not work for others.\nAdditionally, note that different DST methods use different sparse distributions, leading to very different computational costs even with the same sparsity. We also report the approximated training and inference FLOPs for all methods. The FLOP gap between Selfish-RNN and RigL is very small, whereas SNFS requires more FLOPs than our method for both training and inference (see Appendix L for details on how FLOPs are calculated). ISS achieves a lower number of FLOPs, since it does not sparsify the embedding layer and therefore, their LSTM layers are much more sparse than LSTM layers obtained by other methods. This would cause a fewer number of FLOPs as LSTM layers typically require more FLOPs than other layers." }, { "heading": "4.2 RECURRENT HIGHWAY NETWORKS", "text": "Recurrent Highway Networks (Zilly et al., 2017) is a variant of RNNs allowing RNNs to explore deeper architectures inside the recurrent transition. See Appendix C for experimental settings of RHN. The results are shown in the right side of Table 2. Selfish-RNN achieves better performance than the dense model with half FLOPs. Unlike the large FLOP discrepancy of stacked LSTMs, the FLOP gap between different sparse training techniques for RHNs is very small, except SNFS which requires computing dense momentum for each iteration. Additionally, ISS has similar FLOPs with Selfish-RNN for RHN, as it sparsifies the embedding layer as well." }, { "heading": "4.3 ANALYZING THE PERFORMANCE OF SELFISH-RNN", "text": "Analysis of Evolutionary Trajectory of Sparse Connectivity. The fact that Selfish-RNN consistently achieves good performance with different runs naturally raises some questions: e.g., are final sparse connectivities obtained by different runs similar or very different? Is the distance between the original sparse connectivity and the final sparse connectivity large or small? To answer these questions, we investigate a method based on graph edit distance (GED) (Sanfeliu & Fu, 1983) to measure the topological distance between different sparse connectivities learned by different runs. The distance is scaled between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are (See Appendix J for details on how we measure the sparse topological distance).\nThe results are demonstrated in Figure 3. Figure 3-left shows how the topology of one randominitialized network evolves when trained with Selfish-RNN. We compare the topological distance between the sparse connectivity obtained at the 5th epoch and the sparse connectivities obtained in the following epochs. We can see that the distance gradually increases from 0 to a very high value 0.8, meaning that Selfish-RNN optimizes the initial topology to a very different one after training. Moreover, Figure 3-right illustrates that the topological distance between two same-initialized networks trained with different seeds after the 4th epoch. We can see that starting from the same sparse topology, they evolve to completely different sparse connectivities. 
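The distance plotted in Figure 3 is detailed in Appendix J; as a rough, self-contained illustration, the sketch below first aligns units by maximum activation correlation and then, under our own simplifying assumption, treats the graph edit distance between the aligned binary masks as the number of edge insertions and deletions, scaled into [0, 1]. The function name and this simplification are ours, not the paper's exact procedure.

```python
import numpy as np

def sparse_topology_distance(mask_a, acts_a, mask_b, acts_b):
    """Sketch of a scaled topological distance between two sparse layers.

    mask_*: binary connectivity matrices of shape (n_units, n_inputs).
    acts_*: test-set activations of shape (n_units, n_samples), used only
    to align the hidden units of the two networks.
    """
    n = mask_a.shape[0]
    # Unit alignment: pair each unit of network A with the most correlated unit of B.
    corr = np.corrcoef(acts_a, acts_b)[:n, n:]
    mask_b_aligned = mask_b[corr.argmax(axis=1)]

    # Simplified edit distance: count edges present in one topology but not the
    # other, and scale by the total number of edges so the result lies in [0, 1].
    differing = np.logical_xor(mask_a, mask_b_aligned).sum()
    total = mask_a.sum() + mask_b_aligned.sum()
    return differing / max(total, 1)
```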
Note that even when leading to completely different sparse connectivities, different runs achieve similarly good performance, which indicates that in the case of RNNs there exist many good local optima in terms of sparse connectivity that can have equally good performance. This phenomenon complements the findings of Liu et al. (2020c) which show that there are numerous sparse sub-networks performing similarly well in the context of MLPs.\nAnalysis of Sparse Initialization. We compare two types of sparse initialization, ER distribution and uniform distribution. Uniform distribution namely enforces the sparsity level of each layer to be the same as S. ER distribution allocates higher sparsity to larger layers than smaller ones. Note that its variant Erdős-Rényi-kernel proposed by Evci et al. (2020) scales back to ER for RNNs, as no kernels are involved. The results are shown as the Static group in Table 2. We can see that uniform distribution outperforms ER distribution consistently. Moreover, ER usually causes RNN layers to be less sparse than other layers, resulting in a small increase of FLOPs.\nAnalysis of Growth Methods. Methods that leverage gradient-based weight growth (SNFS and RigL) have shown superiority over the methods using random-based weight growth for CNNs. However, we observe a different behavior with RNNs. We set up a controlled experiment to compare these two methods with SNT-ASGD and momentum SGD. We report the results with various update intervals (the number of iterations between sparse connectivity updates) in Figure 4. Surprisingly, gradient-based growth performs worse than random-based growth in most cases. And there is an increased performance gap as the update interval increases. Our hypothesis is that random growth helps in exploring better the search space, as it naturally considers a large number of various sparse connectivities during training, which is crucial to the performance of dynamic sparse training. Differently, gradient growth drives the network topology towards some similar local optima for the sparse connectivity as it uses a greedy search strategy (highest gradient magnitude) at every topological change. However, benefits provided by high-magnitude gradients might change dynamically afterwards due to complicated interactions between weights. We empirically illustrate our hypothesis via the proposed distance measure between sparse connectivities in Appendix K.\nAnalysis of Hyper-parameters. The sparsity S and the initial removing rate p are two hyperparameters of our method. We show their sensitivity analysis in Appendix F and Appendix G. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. And our method is quite robust to the choice of the initial removing rate." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed an approach to train sparse RNNs from scratch with a fixed parameter count throughout training. Further, we introduced SNT-ASGD, a specially designed sparse optimizer for training sparse RNNs and we showed that it substantially improves the performance of all dynamic sparse training methods in RNNs. We observed that random-based growth achieves lower perplexity than gradient-based growth in the case of RNNs. Further, we developed an approach to compare two different sparse connectivities from the perspective of graph theory. 
Using this approach, we found that random-based growth explores better the topological search space for optimal sparse connectivities, whereas gradient-based growth is prone to drive the network towards similar sparse connectivity patterns. opening the path for a better understanding of sparse training." }, { "heading": "A ABLATION STUDY", "text": "To verify if the improvement shown above is caused by the cell weight redistribution or the Sparse NT-ASGD, we conduct an ablation study for all architectures. To avoid distractive factors, all models use the same hyper-parameters with the ones reported in the paper. And the use of finetuning is not excluded. We present the validation and testing perplexity for variants of our model without these two contributions, as shown in Table 4. Not surprisingly, removing either of these two novelties degrades the performance. There is a significant degradation in the performance for all models, up to 13 perplexity point, if the optimizer switches to the standard NT-ASGD. This stands as empirical evidence regarding the benefit of SNT-ASGD. Without cell weight redistribution, the testing perplexity also rises. The only exception is RHN whose number of redistributed weights in each layer is only two. This empirically shows that cell weight redistribution is more effective for the models with more cell weights." }, { "heading": "B COMPARISON OF DIFFERENT CELL WEIGHT REDISTRIBUTION METHODS", "text": "In Table 5, we conduct a small experiment to compare different methods of cell weight redistribution with stacked LSTMs, including redistributing based on the mean value of the magnitude of nonzero weights from different cell weights and the mean value of the gradient magnitude of nonzero weights." }, { "heading": "C EXPERIMENTAL DETAILS FOR RHN", "text": "Recurrent Highway Networks (Zilly et al., 2017) is a variant of RNNs allowing RNNs to explore deeper architecture inside the recurrent transition. Instead of stacking recurrent layers directly, RHN stacks multiple highway layers on top of recurrent state transition. Within each highway layer, free weights are redistributed across the input weight and the state weight. The sparsity level is set the same as ISS, 67.7% and 52.8%. Dropout rates are set to be 0.20 for the embedding layer, 0.65 for the input, 0.25 for the hidden units, and 0.65 for the output layer. The model is trained for 500 epochs with a learning rate of 15, a batch size of 20, and a sequence length to of 35. At the end of each training epoch, new weights are redistributed across the weights of the H nonlinear transform and the T gate." }, { "heading": "D EXPERIMENTAL RESULTS WITH ON-LSTM", "text": "Table 6: Single model perplexity on validation and test sets for the Penn Treebank language modeling task with ON-LSTM. Methods with “ASGD” are trained with SNT-ASGD. The numbers reported are averaged over five runs.\nModels #Param Val Test\nDense1000 25M 58.29± 0.10 56.17± 0.12 Dense1300 25M 58.55± 0.11 56.28± 0.19 SET 11.3M 65.90± 0.08 63.56± 0.14 DSR 11.3M 65.22± 0.07 62.55± 0.06 SNFS 11.3M 68.00± 0.10 65.52± 0.15 RigL 11.3M 64.41± 0.05 62.01± 0.13 RigL1000 (ASGD) 11.3M 59.17± 0.08 57.23± 0.09 RigL1300 (ASGD) 11.3M 59.10± 0.05 57.44± 0.15 Selfish-RNN1000 11.3M 58.17± 0.06 56.31± 0.10 Selfish-RNN1300 11.3M 57.67 ± 0.03 55.82 ± 0.11\nTable 7: Single model perplexity on validation and test sets for the WikiText-2 language modeling task with AWD-LSTMMoS. Baseline is AWD-LSTM-MoS obtained from Yang et al. (2018). 
Methods with “ASGD” are trained with SNTASGD.\nModels #Param Val Test\nDense 35M 66.01 63.33 SET 15.6M 72.82 69.61 DSR 15.6M 69.95 66.93 SNFS 15.6M 79.97 76.18 RigL 15.6M 71.36 68.52 RigL (ASGD) 15.6M 68.84 65.18\nSelfish-RNN 15.6M 65.96 63.05\nProposed by Shen et al. (2019) recently, ON-LSTM can learn the latent tree structure of natural language by learning the order of neurons. For a fair comparison, we use exactly the same model hyper-parameters and regularization used in ON-LSTM. We set the sparsity of each layer to 55% and the initial removing rate to 0.5. We train the model for 1000 epochs and rerun SNT-ASGD as a fine-tuning step once at the 500th epoch, dubbed as Selfish-RNN1000. As shown in Table 6, Selfish-RNN outperforms the dense model while reducing the model size to 11.3M. Without SNT-ASGD, sparse training techniques can not reduce the test perplexity to 60. SNT-ASGD is able to improve the performance of RigL by 5 perplexity. Moreover, one interesting observation is that one of the regularizations used in the standard ON-LSTM, DropConnect, is perfectly compatible with our method, although it also drops the hidden-to-hidden weights out randomly during training.\nIn our experiments we observe that Selfish-RNN benefits significantly from the second fine-tuning operation. We scale the learning schedule to 1300 epochs with two fine-tuning operations after 500 and 1000 epochs, respectively, dubbed as Selfish-RNN1300. It is interesting that Selfish-RNN1300 can achieve lower testing perplexity after the second fine-tuning step, whereas the dense model Dense1300 can not even reach again the perplexity that it had before the second fine-tuning. The heuristic explanation here is that our method helps the optimization escape the local optima or a local saddle point by optimizing the sparse structure, while for dense models whose energy landscape is fixed, it is very difficult for the optimizer to find its way off the saddle point or the local optima." }, { "heading": "E EXPERIMENTAL RESULTS WITH AWD-LSTM-MOS", "text": "We also evaluate Selfish-RNN on the WikiText-2 dataset. The model we choose is AWD-LSTM-MoS (Yang et al., 2018), which is the state-of-the-art RNN-based language model. It replaces Softmax with Mixture of Softmaxes (MoS) to alleviate the Softmax bottleneck issue in modeling natural language. For a fair comparison, we exactly follow the model hyper-parameters and regularization used in AWD-LSTM-MoS. We sparsify all layers with 55% sparsity except for the prior layer as its number of parameters is negligible. We train our model for 1000 epochs without finetuning or dynamical evaluation (Krause et al., 2018) to simply show the effectiveness of our method. As demonstrated in Table 7. Selfish AWD-LSTM-MoS can reach dense performance with 15.6M parameters." }, { "heading": "F EFFECT OF SPARSITY", "text": "There is a trade-off between the sparsity level S and the test perplexity of Selfish-RNN. When there are too few parameters, the sparse neural network will not have enough capacity to model the data. If the sparsity level is too small, the training acceleration will be small. Here, we analyze this trade-off by varying the sparsity level while keeping the other experimental setup the same, as shown in\nFigure 5a. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. 
Generally, the performance of sparsified models is decreasing as the sparsity level increases." }, { "heading": "G EFFECT OF INITIAL REMOVING RATE", "text": "The initial removing rate p determines the number of removed weights at each connectivity update. We study the performance sensitivity of our algorithm to the initial removing rate p by varying it ∈ [0.3, 0.5, 0.7]. We set the sparsity level of each model as the one having the best performance in Figure 5a. Results are shown in Figure 5b. We can clearly see that our method is very robust to the choice of the initial removing rate." }, { "heading": "H DIFFERENCE AMONG SET, SELFISH-RNN AND ITERATIVE PRUNING METHODS", "text": "The topology update strategy of Selfish-RNN differs from SET in several important features. (1) we automatically redistribute weights across cell weights for better regularization, (2) we use magnitudebased removal instead of removing a fraction of the smallest positive weights and the largest negative weights, (3) we use uniform initialization rather than non-uniform sparse distribution like ER or ERK, as it consistently achieves better performance. Additionally, the optimizer proposed in this work, SNT-ASGD, brings substantial perplexity improvement to the sparse RNN training.\nFigure 6-left illustrates a high-level overview from an efficiency perspective of the difference between Selfish-RNN and iterative pruning techniques (Han et al., 2016; Zhu & Gupta, 2017; Frankle & Carbin, 2019). The conventional pruning and re-training techniques usually involve three steps: (1) pre-training a dense model, (2) pruning unimportant weights, and (3) re-training the pruned model to improve performance. The pruning and re-training cycles can be iterated. This iteration is taking place at least once, but it may also take place several times depending on the specific algorithms used. Therefore, the sparse networks obtained via iterative pruning at least involve pre-training a dense model. Different from the aforementioned three-step techniques, FLOPs required by Selfish-RNN is proportional to the density of the model, as it allows us to train a sparse network with a fixed number of parameters throughout training in one single run, without any re-training phases. Moreover, the overhead caused by the adaptive sparse connectivity operation is negligible, as it is operated only once per epoch." }, { "heading": "I COMPARISON BETWEEN SELFISH-RNN AND PRUNING", "text": "It has been shown by Evci et al. (2020) that while state-of-the-art sparse training method (RigL) achieves promising performance in terms of CNNs, it fails to match the performance of pruning in RNNs. Given the fact that magnitude pruning has become a widely-used and strong baseline for model compression, we also report a comparison between Selfish-RNN and iterative magnitude pruning with stacked LSTMs. The pruning baseline here is the Tensorflow Model Pruning library (Zhu & Gupta, 2017). The results are demonstrated in Figure 6-right.\nWe can see that Selfish-RNN exceeds the performance of pruning in most cases. An interesting phenomenon is that, with increased sparsity, we see a decreased performance gap between SelfishRNN and pruning. Especially, Selfish-RNN performs worse than pruning when the sparsity level is 95%. This can be attributed to the poor trainability problem of sparse models with extreme sparsity levels. Noted in Lee et al. 
(2020), the extreme sparse structure can break dynamical isometry (Saxe et al., 2014) of sparse networks, which degrades the trainability of sparse neural networks. Different from sparse training methods, pruning operates from a dense network and thus, does not have this problem." }, { "heading": "J SPARSE TOPOLOGY DISTANCE MEASUREMENT", "text": "Our sparse topology distance measurement considers the unit alignment based on a semi-matching technique introduced by Li et al. (2016) and a graph distance measurement based on graph edit distance (GED) (Sanfeliu & Fu, 1983). More specifically, our measurement includes the following steps:\nStep 1: We train two sparse networks with dynamic sparse training on the training dataset and store the sparse topology after each epoch. Let Wil be the set of sparse topologies for the l-th layer of network i.\nStep 2: Using the saved model, we compute the activity output on the test data, Oil ∈ Rn×m, where n is the number of hidden units and m is the number of samples.\nStep 3: We leverage the activity units of each layer to pair-wisely match topologies Wil . We achieve unit matching between a pair of networks by finding the unit in one network with the maximum correlation to the one in the other network.\nStep 4: After alignment, we apply graph edit distance (GED) to measure the similarity between pairwise Wil . Eventually, the distance is scaled to lie between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are.\nHere, We choose stacked LSTMs on PTB dataset as a specific case to analyze. Specifically, we train two stacked LSTMs for 100 epochs with different random seeds. We choose a relatively small removing rate of 0.1. We start alignment at the 5th epoch to ensure a good alignment result, as at the very beginning of training networks do not learn very well. We then use the matched order of output tensors to align the pairwise topologies Wil ." }, { "heading": "K TOPOLOGICAL DISTANCE OF GROWTH METHODS", "text": "In this section, we empirically illustrate that gradient growth drives different networks into some similar connectivity patterns based on the proposed distance measurement between sparse connectivities. The initial removing rates are set as 0.1 for all training runs in this section. First, we measure the topological distance between two different training runs trained with gradient growth and random growth, respectively, as shown in Figure 7. We can see that, starting with very different sparse connectivity topologies, two networks trained with random growth end up at the same distance, whereas the topological distance between networks trained with gradient growth is continuously decreasing and this tendency is likely to continue as the training goes on. We further report the distance between two networks with same initialization but different training seeds when trained with gradient growth and random growth, respectively. As shown in Figure 8, the distance between sparse networks discovered by gradient growth is smaller than the distance between sparse networks discovered by random growth. These observations are in line with our hypothesis that gradient growth drives networks into some similar structures, whereas random growth explores more sparse structures spanned over the dense networks." }, { "heading": "L FLOPS ANALYSIS OF DIFFERENT APPROACHES", "text": "We follow the way of calculating training FLOPs layer by layer based on sparsity level sl, proposed by Evci et al. (2020). 
We split the process of training a sparse recurrent neural network into two steps: forward pass and backward pass.\nForward pass In order to calculate the loss of the current models given a batch of input data, the output of each layer is needed to be calculated based on a linear transformation and a non-linear activation function. Within each RNN layer, different cell weights are used to regulate information in sequence using the output of the previous time step and the input of this time step.\nBackward pass In order to update weights, during the backward pass, each layer calculates 2 quantities: the gradient of the loss function with respect to the activations of the previous layer and the gradient of the loss function with respect to its own weights. Therefore, the computational expense of backward pass is twice that of forward pass. Given that RNN models usually contain an embedding layer from which it is very efficient to pick a word vector, for models not using weight tying, we\nonly count the computations to calculate the gradient of its parameters as the training FLOPs and we omit its inference FLOPs. For models using weight tying, both the training FLOPs and the inference FLOPs are omitted.\nGiven a specific architecture, we denote fD as dense FLOPs required to finish one training iteration and fS as the corresponding sparse FLOPs (fS ≈ (1− S)fD), where S is the sparsity level. Thus fS fD for very sparse networks. Since different sparse training methods cause different sparse distribution, their FLOPs fS are also different from each other. We omit the FLOPs used to update the sparse connectivity, as it is only performed once per epoch. Overall, the total FLOPs required for one training update on one single sample are given in Table 8. The training FLOPs of dense-to-sparse methods like, ISS and pruning, are 3fD ∗ st, where st is the sparsity of the model at iteration t. Since dense-to-sparse methods require to train a dense model for a while, their training FLOPs and memory requirement are higher than our method. For methods that allow the sparsity of each layer dynamically changing e.g., DSR and SNFS, we approximate their training FLOPs via their final distribution, as their sparse distribution converge to the final distribution in the first few epochs. ER distribution causes a bit more inference FLOPs than uniform distribution because is allocates more weights to the RNN layers than other layers. SNFS requires extra FLOPs to calculate dense gradients during the backward pass. Although RigL also uses the dense gradients to assist weight growth, it only needs to calculate dense gradients every ∆T iterations, thus its averaged FLOPs is given by 3fS∆T+2fS+fD ∆T+1 . Here, we simply omit the extra FLOPs required by gradient-based growth, as it is negligible compared with the whole training FLOPs.\nFor inference, we calculate the inference FLOPs on single sample based on the final sparse distribution learned by different methods." }, { "heading": "M FINAL CELL WEIGHT SPARSITY BREAKDOWN", "text": "We further study the final sparsity level across cell weights learned automatically by our method. We find a consistent observation that the weight of forget gates, either the forget gate in the standard LSTM or the master forget gate in ON-LSTM, tend to be sparser than the weight of other gates, whereas the weight of cell gates and output gates are denser than the average, as shown in Figure 9. 
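The per-gate sparsity breakdown shown in Figure 9 can be read directly off the binary masks. The short sketch below assumes the common layout in which the four LSTM gate weights (input, forget, cell, output) are stacked along the first axis of a single hidden-to-hidden matrix; both the function name and this layout assumption are ours.

```python
import numpy as np

def gate_sparsity_breakdown(mask_hh):
    """Sketch: sparsity of each LSTM gate from a hidden-to-hidden binary mask.

    mask_hh: binary mask of shape (4 * hidden, hidden), assumed to stack the
    input, forget, cell and output gate weights along the first axis.
    """
    gates = np.split(mask_hh, 4, axis=0)
    names = ["input", "forget", "cell", "output"]
    # Sparsity of a gate = 1 - density of its mask.
    return {name: 1.0 - g.mean() for name, g in zip(names, gates)}
```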
However, there is no big difference between weights in RHN, although the H nonlinear transform weight is slightly sparser than the T gate weight in most RHN layers. This phenomenon is in line with the Ablation analysis where the cell weight redistribution does not provide performance improvement for RHNs. Cell weight redistribution is more important for models with more regulating weights." }, { "heading": "N LIMITATION", "text": "The aforementioned training benefits have not been fully explored, as off-the-shelf software and hardware have limited support for sparse operations. The unstructured sparsity is difficult to be efficiently mapped to the existing parallel processors. The results of our paper provide motivation for new types of hardware accelerators and libraries with better support for sparse neural networks. Nevertheless, many recent works have been developed to accelerate sparse neural networks including Gray et al. (2017); Moradi et al. (2019); Ma et al. (2019); Yang & Ma (2019); Liu et al. (2020b). For instance, NVIDIA introduces the A100 GPU enabling the Fine-Grained Structured Sparsity (NVIDIA, 2020). The sparse structure is enforced by allowing two nonzero values in every four-entry vector to reduce memory storage and bandwidth by almost 2×. We do not claim that Selfish-RNN is the best way to obtain sparse recurrent neural networks, but simply highlights that it is an important future research direction to develop more efficient hardware and software to benefit from sparse neural networks." } ]
2020
null
SP:60d704b4a1555e24c09963617c879a15d8f3c805
[ "This paper proposes a spatial-temporal graph neural network, which is designed to adaptively capture the complex spatial-temporal dependency. Further, the authors design a spatial-temporal attention module, which aims to capture multi-scale correlations. For multi-step prediction instead of one-step prediction, they further propose the sequence transform block to solve the problem of error accumulations. The authors conducted experiments on three real-world datasets (traffic on highways and mobile traffic), which shows their method achieves the best performance." ]
Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management. However, spatial-temporal data exhibit complex spatial-temporal correlations and heterogeneity across the spatial and temporal aspects, which makes forecasting a very challenging task despite the great work done recently. In this paper, we propose a novel model, Adaptive Spatial-Temporal Inception Graph Convolution Networks (ASTI-GCN), to solve the multi-step spatial-temporal data forecasting problem. The model introduces a multi-scale spatial-temporal joint graph convolution block to directly model the spatial-temporal joint correlations without introducing elaborately constructed mechanisms. Moreover, an inception mechanism combined with graph node-level attention is introduced to make the model capture the heterogeneous nature of the graph adaptively. Our experiments on three real-world datasets from two different fields consistently show that ASTI-GCN outperforms the state-of-the-art methods. In addition, ASTI-GCN is shown to generalize well.
[]
[ { "authors": [ "Lei Bai", "Lina Yao", "Salil Kanhere", "Xianzhi Wang", "Quan Sheng" ], "title": "Stg2seq: Spatialtemporal graph to sequence model for multi-step passenger demand forecasting", "venue": "arXiv preprint arXiv:1905.10069,", "year": 2019 }, { "authors": [ "Lei Bai", "Lina Yao", "Can Li", "Xianzhi Wang", "Can Wang" ], "title": "Adaptive graph convolutional recurrent network for traffic forecasting", "venue": null, "year": 2020 }, { "authors": [ "Chao Chen", "Karl Petty", "Alexander Skabardonis", "Pravin Varaiya", "Zhanfeng Jia" ], "title": "Freeway performance measurement system: Mining loop detector data", "venue": "Transportation Research Record Journal of the Transportation Research Board,", "year": 2001 }, { "authors": [ "G Barlacchi", "M D Nadai", "R Larcher", "A Casella", "C Chitic", "G Torrisi", "F Antonelli", "A Vespignani", "A Pentland", "B. Lepri" ], "title": "A multi-source dataset of urban life in the city of milan and the province of trentino", "venue": "Scientific Data,", "year": 2015 }, { "authors": [ "S Gowrishankar", "P.S. Satyanarayana" ], "title": "A time series modeling and prediction of wireless network traffic", "venue": "International Journal of Interactive Mobile Technologies (iJIM),", "year": 2009 }, { "authors": [ "Shengnan Guo", "Youfang Lin", "Ning Feng", "Chao Song", "Huaiyu Wan" ], "title": "Attention based spatialtemporal graph convolutional networks for traffic flow forecasting", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Kaiwen He", "Yufen Huang", "Xu Chen", "Zhi Zhou", "Shuai Yu" ], "title": "Graph attention spatial-temporal network for deep learning based mobile traffic prediction", "venue": "IEEE Global Communications Conference (GLOBECOM),", "year": 2019 }, { "authors": [ "S Hochreiter", "J Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Xiang Li", "Wenhai Wang", "Xiaolin Hu", "Jian Yang" ], "title": "Selective kernel networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "arXiv preprint arXiv:1707.01926,", "year": 2017 }, { "authors": [ "Yuxuan Liang", "Songyu Ke", "Junbo Zhang", "Xiuwen Yi", "Yu Zheng" ], "title": "Geoman: Multi-level attention networks for geo-sensory time series prediction", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Chao Song", "Youfang Lin", "Shengnan Guo", "Huaiyu Wan" ], "title": "Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott E. 
Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Billy M Williams", "Lester A Hoel" ], "title": "Modeling and forecasting vehicular traffic flow as a seasonal arima process: Theoretical basis and empirical results", "venue": "Journal of transportation engineering,", "year": 2003 }, { "authors": [ "SHI Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": "arXiv preprint arXiv:1709.04875,", "year": 2017 }, { "authors": [ "Chuanpan Zheng", "Xiaoliang Fan", "Cheng Wang", "Jianzhong Qi" ], "title": "Gman: A graph multi-attention network for traffic prediction", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Jie Zhou", "Ganqu Cui", "Zhengyan Zhang", "Cheng Yang", "Zhiyuan Liu", "Lifeng Wang", "Changcheng Li", "Maosong Sun" ], "title": "Graph neural networks: A review of methods and applications", "venue": "arXiv preprint arXiv:1812.08434,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Spatial-temporal data forecasting has attracted attention from researchers due to its wide range of applications and the same specific characteristics of spatial-temporal data. Typical applications include mobile traffic forecast (He et al., 2019), traffic road condition forecast (Song et al., 2020; Yu et al., 2017; Guo et al., 2019; Zheng et al., 2020; Li et al., 2017), on-demand vehicle sharing services passenger demand forecast (Bai et al., 2019) and geo-sensory time series prediction (Liang et al., 2018) etc. The accurate forecast is the foundation of many real-world applications, such as Intelligent Telecom Network Operation and Intelligent Transportation Systems (ITS). Specifically, accurate traffic forecast can help transportation agencies better control traffic scheduling and reduce traffic congestion; The traffic volumes prediction of the wireless telecommunication network plays an important role for the network operation and optimization, for example, it can help to infer the accurate sleep periods (low traffic periods) of the base stations to achieve energy saving without sacrificing customer experience.\nHowever, as we all know, accurate spatialtemporal data forecasting faces multiple challenges. First, it is inherent with complex spatial-temporal correlations. In the spatialtemporal graph, different neighbors may have different impacts on the central location at the same time step, as the bold lines shown in Figure1, which called spatial correlations.Different historical observations of the same location influence the future moments of itself variously due to temporal correlations. The observations\nof different neighbors at historical moments can directly affect the central node at future time steps due to the spatial-temporal joint correlations. As shown in Figure1, the information of the spatialtemporal network can propagate along the spatial and temporal dimensions simultaneously, and the\ntransmission process can be discontinuous due to complex external factors, which result in spatialtemporal joint correlations of the spatial-temporal data in a short period.\nSpatial-temporal data is heterogenous in both spatial and temporal dimensions (Song et al., 2020). Nodes in different regions of the graph have various properties and local spatial structures, so the corresponding data distribution can be different. For example, the traffic flow distribution of urban and suburban areas are quite different, while the traffic of urban area is denser and that of suburban area is relatively sparse. Besides, the traffic flow in the same region also exhibit heterogeneity in different time periods. For example, the mobile traffic in business district would decrease at night compared to the daytime, while it’s opposite in the residential district. In addition, multi-step time series forecasting is often accompanied by error accumulation problem. Typical methods like RNNs often cause error accumulation due to iterative forecasting, leading to rapid deterioration of the long-term prediction accuracy. (Yu et al., 2017; Zheng et al., 2020).\nMost of the previous work is mainly to solve the above challenges. To model the spatial-temporal dependency, STGCN (Yu et al., 2017) and DCRNN (Li et al., 2017) extract spatial and temporal correlations separately. ASTGCN (Guo et al., 2019) introduced spatial and temporal attention to model the dynamic spatial and temporal correlations. 
STG2Seq (Bai et al., 2019) aimed at using GCN to capture spatial and temporal correlations simultaneously. But they all didn’t consider the spatialtemporal joint correlations and heterogeneity. Different from the above methods, STSGCN (Song et al., 2020) used multiple local spatial-temporal graphs to model the spatial-temporal synchronous correlations and spatial-temporal heterogeneity of the local adjacent time steps. But STSGCN can only model the spatial-temporal synchronous correlations of its defined local spatial-temporal graphs and it is equipped with complex structure.\nIn this paper, we propose a novel model called ASTI-GCN, Adaptive spatial-temporal Inception Graph Convolutional Networks, to address the above issues with multi-step spatial-temporal data forecasting. We propose the spatial-temporal joint convolution to directly model the spatial-temporal joint correlations without introducing elaborately constructed mechanisms. And we introduce the inception mechanism to build multi-scale spatial-temporal features to adapt to graph nodes with different properties. Then, to achieve the heterogeneity modeling, we construct the spatial-temporal Inception Graph Convolution Module, which combined the spatial-temporal inception mechanism with the graph attention to build the adaptive ability of graph nodes with different properties. After multiple spatial-temporal inception-GCMs, two decoder modules named sequence decoder and short-term decoder are designed to directly establish the relationships between the historical and future time steps to alleviate error accumulation.\nOverall, our main contributions are summarized as follows:\n• We propose a novel spatial-temporal joint graph convolution network to directly capture spatial-temporal correlations. Moreover, we introduce inception with graph attention to adaptively model the graph heterogeneity.\n• We propose to combine the sequence decoder and short-term decoder together for multistep forecasting to model direct relationships between historical and future time steps to alleviate the error propagation.\n• We evaluate our model on three real-world datasets from two fields, and the experimental results show that our model achieves the best performances among all the eight baselines with good generalization ability." }, { "heading": "2 RELATED WORK", "text": "Spatial-temporal data information can be extracted using the deep learning method from European space, such as ConvLSTM (Xingjian et al., 2015), PredRNN (Gowrishankar & Satyanarayana, 2009) and so on. However, most of the spatial-temporal data in real scenes are graph data with complex and changeable relationships.Common timing prediction models, such as HA and ARIMA (Williams & Hoel, 2003), cannot be simply migrated to such scenarios. Graph based methods like DCRNN (Li et al., 2017) modeled traffic flow as a diffusion process on a directed graph. Spatial dependencies and temporal dependencies are captured by bidirectional random walk and DCGRU based\nencoder-decoder sequence to sequence learning framework respectively. STGCN (Yu et al., 2017) constructed an undirected graph of traffic network, which is combined with GCN and CNN to model spatial and temporal correlation respectively. ASTGCN (Guo et al., 2019) innovatively introduced attention mechanisms to capture dynamic spatial and temporal dependencies. Similarly, GMAN (Zheng et al., 2020) used temporal and spatial attention to extract dynamic spatial-temporal correlations with spatial-temporal coding. 
The above models extract spatial-temporal correlation with two separate modules, which cannot learn the influence of neighbor node at the same time and the influence of center node at the historical moment simultaneously. To address this problem, Bai et al. (2019) proposed STG2seq to learn the influence of spatial and temporal dimensions at the same time, which is a purely relies on graph convolution structure. However, all the above methods fail to take the heterogeneity of spatial-temporal data into account, that is, the scope of each node influencing its neighbor nodes at future time steps is different. To solve this problem, Song et al. (2020) proposed STSGCN with multiple modules for different time periods to effectively extract the heterogeneity in local spatial-temporal maps. However, this method pays more attention to local information and lacks of global information extraction. Besides, STSGCN runs slowly due to too many parameters.\nTherefore, we propose an Adaptive spatial-temporal Inception Graph Convolutional Networks The Temporal and spatial correlations are extracted simultaneously by spatial-temporal convolution, and the node heterogeneity is modeled by Inception mechanism. At the same time, considering the different influences of each node and time step, the attention mechanism is introduced to adjust the influence weight adaptively." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 PRELIMINARIES", "text": "In this paper, we define G = (V,E,A) as a static undirected spatial graph network. V represents the set of vertices, |V | = N (N indicates the number of vertices). E is the set of edges representing the connectivity between vertices. A ∈ RN×N is the adjacency matrix of the network graph G where Avi,vj represents the connection between nodes vi and vj . The graph signal matrix is expressed as Xt ∈ RN×C , where t denotes the timestep and C indicates the number of features of vertices. The graph signal matrix represents the observations of graph network G at time step t. Problem Studied Given the graph signal matrix of historical T time steps χ = (Xt1 , Xt2 , . . . , XtT ) ∈ RT×N×C , our goal is to predict the graph signal matrix of the next M time steps Ŷ = ( X̂tT+1 , X̂tT+2 , ..., X̂tT+M ) ∈ RM×N×C . In other words, we need to learn a\nmapping function F to map the graph signal matrix of historical time steps to the future time steps:( X̂tT+1 , X̂tT+2 , · · · , X̂tT+M ) = Fθ (Xt1 , Xt2 , · · · , XtT ) (1)\nwhere θ represents learnable parameters of our model." }, { "heading": "3.2 ARCHITECTURE", "text": "The architecture of the ASTI-GCN proposed in this paper is shown in Figure 2(a). The main ideas of ASTI-GCN can be summarized as follows: (1) We propose spatial-temporal joint graph convolution to directly extract the spatial-temporal correlations; (2) We build the Spatio-Temporal Inception Graph Convolutional Module (STI-GCM) to adaptively model the graph heterogeneity; (3) We use short-term decoder combined with sequence decoder to achieve accurate multi-step forecast." }, { "heading": "3.3 SPATIAL-TEMPORAL INCEPTION-GCM", "text": "Spatial-temporal joint graph convolution\nIn order to extract spatial-temporal correlations simultaneously, we propose spatial-temporal joint graph convolution. In this paper, we construct spatial-temporal joint graph convolution based on graph convolution in the spectral domain. 
The spectral graph convolution implemented by using the graph Fourier transform basis which is from eigenvalue decomposition of the Laplacian matrix (L)\nto transform the graph signals from spatial into the spectral domain. But the computation cost of the eigenvalue decomposition of L is expensive when the graph is large. To reduce the number of parameters and the computation complexity, Chebyshev polynomial Tk (x) is used for approximation. The spectral graph convolution can be written as (Yu et al., 2017; Guo et al., 2019; Kipf & Welling, 2016):\nΘ∗Gx = Θ (L)x ≈ K−1∑ k=0 θkTk ( L̃ ) x (2)\nwhere ∗G is graph convolution operator, Θ is graph convolution kernel, x ∈ RN is the graph signal, Tk ( L̃ ) ∈ RN×N is the Chebyshev polynomial of order k with the scaled Laplacian L̃ =\n2 λmax L − IN (L is the graph Laplacian matrix, λmax is the largest eigenvalue of L, IN is identity matrix) (Yu et al., 2017). θk is the coefficient of the k-th order polynomial.\nBased on spectral domain graph convolution, we propose spatial-temporal joint graph convolution. First, these K-hop Tk ( L̃ ) are concatenated as the furthest receptive field in the spatial dimension.\nThen we construct the spatial-temporal joint graph convolution kernel Θs,t, Θs,t ∈ RKt×Ks×C×F , where Kt represents the kernel size in the temporal dimension and Ks represents the kernel size in the spatial dimension, C is the input feature dimensions, F is the number of filters. So the kernel Θs,t has the local spatial-temporal receptive field of Kt × Ks, and Ks should be lower than K( which can be written as Ks < K), because of the largest graph convolution perceived field of Khop. And the spatial-temporal joint graph convolution can be formulated as:\nTK\n( L̃ )\n= Concat(T0 ( L̃ ) , T1 ( L̃ ) , . . . , TK−1 ( L̃ ) ) (3)\nXout = Θs,t ∗X = Θs,tTK ( L̃ ) X (4)\nwhere TK ( L̃ ) ∈ RK×N×N is the concatenation of all Chebyshev polynomials in (K-1) hop. ∗\nis the convolution operation between Θs,t and X , X ∈ RN×T×C is the spatial-temporal signal of the input graph, T is the input time steps. After the spatial-temporal joint graph convolution, the output can be written as Xout ∈ RN×(T−Kt+1)×(K−Ks+1)×F . Besides, the neighbors have various influences on the central node, so we implement a learnable spatial mask matrix Wmask ∈ RN×N (Song et al., 2020) to adjust the graph adjacency relationship for assigning weights to different neighbors.\nSpatial-temporal inception-attention\nDifferent from the images, each node of the spatial-temporal graph usually represents a road or eNodeB etc. Then, affected by external factors like geographic location and surrounding environment, the spatial-temporal data properties of different nodes are various, namely the heterogeneity. To solve this problem, an intuitive method is to learn different models for each node, but this method could cause extensive parameters and maintain low generalization ability. So we take another way in this paper. We understand that heterogeneity is manifested in the differences of local spatialtemporal receptive fields of each node of the graph which result from nodes’ various properties and local spatial structures. Inspired by (Song et al., 2020; Zheng et al., 2020; Zhou et al., 2018; Vaswani et al., 2017), we apply a learnable graph node embedding Se ∈ RN×E to represent the properties of each node. Meanwhile, we introduce inception (Szegedy et al., 2015) to extract multi-scale spatialtemporal correlations through spatial-temporal joint graph convolution. 
Then, we combine the graph attention with inception to achieve node-level attention for modeling the heterogeneity.\nFirstly, we implement inception, as shown in Figure 2(b). For example, the 3×2 block represents that it involves the kernel θs,t ∈ R3×2×C×F , which means it can extract the spatial-temporal correlations of the node itself and its neighbors in the three adjacent time steps by one layer, which is needed by two layers STSGCM in STSGCN (Song et al., 2020). We use the padding method when implement inception, so after B branches, the output of the inception module can be Cout ∈ RN×T×K×(F×B), where we set the number of output filters of each branch to be the same. Then we combined with the graph node attention, which has being widely used (Vaswani et al., 2017). We use the Q = SeWq , Wq ∈ RE×F to get the queries of graph nodes. For each branch in inception, we apply the idea\nof SKNET (Li et al., 2019) to do global pooling Cgl = T∑ i=1 K∑ j=1 Cout,Cg ∈ RN×(F×B) as the corresponding keys of the branches (one can also use Wk ∈ RF×F to do transform as (Vaswani et al., 2017), here we omit it for simplicity). Then we compute the attention by S = QKT (Vaswani et al., 2017). Take a graph node vi as an example,Svi,b = qvi•cg,vi,b√ F\n, qvi ∈ R1×F , cg,vi,b ∈ R1×F denotes the attention of vi and each branchCg,vi,b which represents the corresponding spatialtemporal receptive field perception. Then we concatenate the inception branch results adjusted by attention score to obtain the output. The calculation can be formulated as follows:\nαvi,b = exp(Svi,b) B∑ bc=1 exp(Svi,bc)\n(5)\nAttvi = ||Bb=1 {αvi,b · Cout,vi,b} (6)\nwhere αvi,b ∈ R, Cout,vi,b ∈ RT×K×F , Attvi ∈ RT×K×F×B represent the output of node vi from inception-attention block. Therefore, the final output of spatial-temporal inception-attention block is Catt ∈ RN×T×K×F×B . STI-GCM output Layer\nThen, we use spatial convolution to generate the output of STI-GCM. We first reshape Catt into Catt ∈ RN×T×K×(F ·B). Next, the learnable weight matrix Ws ∈ RK×(F ·B)×(F ·B) is used to convert Catt to Csatt ∈ RN×T×(F ·B). We also implement SE-net (Hu et al., 2020) to model the channel attention. Finally, the output is converted to Cstatt ∈ RN×T×F using the full connection layer with Wo ∈ R(F ·B)×F . And the process can be formulated as Cstatt = CattWsWo." }, { "heading": "3.4 FUSION OUTPUT MODULE", "text": "The iterative forecasting can achieve high accuracy in short-term forecasting, but in long term prediction, its accuracy would decrease rapidly due to the accumulation of errors. So, we propose the sequence transform block named sequence decoder, to build direct relationships between historical and future multi time steps. Since different historical time steps have different effects on different future time steps, we introduce temporal attention by Ŷs out = CstattWFWt(WF ∈ RF×1,Wt ∈ Rt×M ) to adjust the corresponding weights of historical temporal features. 
At last, in order to benefit from both, we adopt the fusion of iterative prediction and sequence transform prediction results:\nŶ = g(Ŷm out, Ŷs out) (7)\nwhere Ŷm out,Ŷs out represent the prediction of the short-term decoder and sequence decoder prediction result respectively, g(•) represents the weighted fusion function, Ŷ is the final multi-step forecasting result.\nIn this paper, we use the mean square error (MSE) between the predicted value and the true value as the loss function and minimize it through backpropagation.\nL(θ) = 1\nτ i=t+τ∑ i=t+1 (Yi−Ŷi)2 (8)\nwhere θ represents all learnable parameters of our model, Yi is the ground truth. Ŷi denotes the model’s prediction of all nodes at time step i." }, { "heading": "4 EXPERIMENT", "text": "We evaluate ASTI-GCN on two highway datasets (PEMSD4, PEMSD8) and one telecommunications dataset. The traffic datasets come from the Caltrans Performance Measurement System (PeMS) (Chen et al., 2001). The network traffic dataset comes from the mobile traffic volume records of Milan provided by Telecom Italia (G et al., 2015).\nPEMSD4: PEMSD4 comes from the Caltrans Performance Measurement System (PeMS) (Chen et al., 2001). It refers to the traffic data in San Francisco Bay Area from January to February 2018.\nPEMSD8: PEMSD8 comes from the Caltrans Performance Measurement System (PeMS) (Chen et al., 2001). It is the traffic data in San Bernardino from July to August 2016.\nMobile traffic: It contains records of mobile traffic volume over 10-minute intervals in Milan where is divided into 100*100 regions. We use the data in November 2013.\nWe divide the road traffic datasets at a ratio of 6:2:2 for train set, validation set and test set. For the mobile traffic dataset, we use the first 20 days of data to train models and the next 10 days for testing. All datasets are normalized by standard normalization method before training and renormalized before evaluation. We implemented our model in Python with TensorFlow 1.14. In the experiment, we use one-hour historical data to predict the data of the next hour. The hyperparameters of ASTI-GCN are determined by the performance on the validation set. The best size of spatial-temporal kernels is set to 3×1, 1×3, 5×2, 3×2, 2×3, which are used in 4 spatial-temporal inception layers respectively. We evaluate the performance of our network with three evaluation metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). For the road traffic datasets, if two nodes are connected, the corresponding value in the adjacency matrix is set to 1, otherwise 0. For the Milan dataset, Spearman correlation coefficient was used to define the adjacency matrix. We compare ASTI-GCN with other widely used forecasting models, including HA, ARIMA (Williams & Hoel, 2003), LSTM (Hochreiter & Schmidhuber, 1997), DCRNN, STGCN, ASTGN, STSGCN, AGCRN. See Appendix A for more detail." }, { "heading": "4.1 BASELINES", "text": "HA: Historical Average method. 
We use the average of the traffic over last hour to predict the next time slice.\nARIMA (Williams & Hoel, 2003): ARIMA is a well-known model for time series prediction.\nLSTM (Hochreiter & Schmidhuber, 1997): LSTM can extract long-term and short-term temporal dependency and is widely used in time series forecasting.\nDCRNN (Li et al., 2017): A deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow.\nSTGCN (Yu et al., 2017): STGCN uses spectral GCN and CNN to capture spatial and temporal dependencies respectively.\nASTGN (Guo et al., 2019): ASTGCN introduces a temporal and spatial attention mechanism to capture dynamic temporal and spatial correlations.\nSTSGCN (Song et al., 2020): STSGCN purely uses GCN to extract spatial-temporal information simultaneously.\nAGCRN (Bai et al., 2020): AGCRN uses an adaptive graph convolutional recurrent network to capture node information." }, { "heading": "4.2 COMPARISON AND RESULT ANALYSIS", "text": "Table 1 shows the overall prediction results including the average MAE, RMSE and MAPE of our proposed method and baselines. Due to the huge computation cost, it is hard to measure the performance of STSGCN on the Milan dataset. Compared with the models that can model spatial correlation, other models that only model temporal correlation (Historical Average method, ARIMA, LSTM) have poor performance in the three datasets. This is because such models ignore the spatial influence between nodes and only use the historical information of a single node. Among spatial-temporal models, ASTI-GCN achieves the best performance in all indicators except MAPE of PEMSD4 and MILAN dataset. Because STGCN, ASTGCN and DCRNN did not consider the heterogeneity in the spatial-temporal data, while STSGCN only considered the local spatial-temporal heterogeneity of three adjacent time steps, which was insufficient for the global information extraction, our model performed better. Besides, we find AGCRN performs well on road datasets, which is close to our results, but has poor performance on the mobile traffic dataset. We conjecture that the reason is large number of nodes and the large difference in node distribution in MILAN dataset. Meanwhile, it further indicates that ASTI-GCN has a stronger generalization ability than AGCRN.\nFigure 3 shows the changes of different metrics on the three datasets with the increase of predicted time steps. As we can see from the Figure 3, the prediction error increases over time, indicating that the prediction becomes more difficult. Those methods that only consider temporal correlation (ARIMA, HA, LSTM) perform well in short-term forecasting, but as the time interval increases their performance deteriorates sharply. The performance of GCN-based methods is relatively stable, which shows the effectiveness of capturing spatial information. Although our model has no outstanding performance in short-term forecasting tasks, it shows the best performance in medium and long-term forecasting tasks. This benefits from our spatial-temporal inception-GCN module which reduces the accumulation of errors significantly." }, { "heading": "4.3 COMPONENT ANALYSIS", "text": "In order to prove the effectiveness of each key component, we carried out component analysis experiments on PEMSD8 dataset. The basic information of each model is as follows:\nBasic model: the model consists of two blocks. 
In each block, spatial and temporal convolution are performed respectively without the multi-scale spatial-temporal joint graph convolutions and graph attention. It does not use the sequence decoder with temporal attention as well. The output prediction result of the next time step simply uses a convolutional layer and the fully connected layers to generate.\n+ STI-GCM: Based on the basic model, multi-scale spatial temporal joint graph convolutions are used to replace the separate spatial and temporal convolutions to extract spatial-temporal correlations. In addition, the graph attention is introduced to capture the heterogeneity.\n+ Mask: We equip the basic model with the Mask matrix to learn the influence weights of different neighbors on the central node adaptively.\n+ Sequence decoder with attention: We add to the basic model with the proposed sequence decoder and the temporal attention for multi-step forecasting.\n+ alternant sequence decoder and short-term decoder: The output layer is modified into the fusion of the sequence forecasting result and short-term decoder forecasting result to benefit from both of them.\nExcept for the different control variables, the settings of each experiment were the same, and each experiment was repeated 10 times. The results which are shown in Table2 indicate that the model using STI-GCM has better performance, because the network can adaptively capture the heterogeneity of spatial-temporal network data. The sequence decoder with temporal attention shows good performance. The Mask mechanism also contributes some improvements. In the process of fusion, we train the short-term decoder and sequence decoder model circularly, and fuse the results of the two models as the final prediction output. It can not only ensure the short-term forecasting accuracy, but also avoid the influence of error accumulation on the long-term forecasting accuracy." }, { "heading": "5 CONCLUSION", "text": "This paper introduces a new deep learning framework for multi-step spatial-temporal forecasting. We propose spatial-temporal joint convolution to directly capture spatial-temporal correlation. At the same time, we employ the inception mechanisms to extract multi-scale spatial-temporal correlation, and introduce graph attention to model graph heterogeneity. Moreover, we combine short-term decoding and sequence decoding to map historical data to future time steps directly, avoiding the accumulation of errors. We evaluate our method on 3 real-world datasets from 2 different fields, and the results show that our method is superior to other baselines with good generalization ability." }, { "heading": "A APPENDIX", "text": "A.1 DATASETS\nWe evaluate ASTI-GCN on two highway datasets (PEMSD4, PEMSD8) and one telecommunications dataset. PEMSD4 refers to the traffic data in San Francisco Bay Area from January to February 2018. PEMSD8 is the traffic data in San Bernardino from July to August 2016. The telecommunications dataset contains records of mobile traffic volume over 10-minute intervals in Milan. We summarize the statistics of the datasets in Table 3.\nA.2 ADJACENCY MATRIX DEFINITION\nFor the road traffic datasets, if two nodes are connected, the corresponding value in the adjacency matrix is set to 1, otherwise 0. 
The spatial adjacency matrix can be expressed as:

A_{i,j} = \begin{cases} 1, & \text{if } v_i \text{ connects to } v_j \\ 0, & \text{otherwise} \end{cases} \quad (9)

where A_{i,j} is the edge weight between node i and node j, and v_i is node i.

For the Milan dataset, the Spearman correlation coefficient was used to define the adjacency matrix:

A_{i,j} = \begin{cases} 1, & i \neq j \text{ and } \mathrm{Spearman}(t_{v_i}, t_{v_j}) > \delta \\ 0, & \text{otherwise} \end{cases} \quad (10)

where t_{v_i} and t_{v_j} are the time series of node i and node j in the training data, respectively. Spearman(·) is a non-parametric indicator that measures the correlation between two time series, and δ is the threshold that controls the distribution and sparsity of the matrix, which is set to 0.92 in this article.

A.3 METRICS

We evaluate the performance of our network with three evaluation metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE).

\mathrm{MAE} = \frac{1}{m} \sum_{i=1}^{m} |\hat{y}_i - y_i| \quad (11)

\mathrm{MAPE} = \frac{100\%}{m} \sum_{i=1}^{m} \left| \frac{\hat{y}_i - y_i}{y_i} \right| \quad (12)

\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} (\hat{y}_i - y_i)^2} \quad (13)

where ŷ_i and y_i represent the predicted value and the ground truth, respectively, and m is the total number of predicted values." } ]
2020
null
SP:a99af0f9e848f4f9068ad407612745a85a262644
[ "This paper extends NTK to RNN to explain behavior of RNNs in overparametrized case. It’s a good extension study and interesting to see RNN with infinite-width limit converges to a kernel. The paper proves the same RNTK formula when the weights are shared and not shared. The proposed sensitivity for computationally friendly RNTK hyperparameter tuning is also insightful." ]
The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network (RNN). In this paper we introduce and study the Recurrent Neural Tangent Kernel (RNTK), which provides new insights into the behavior of overparametrized RNNs. A key property of the RNTK that should greatly benefit practitioners is its ability to compare inputs of different length. To this end, we characterize how the RNTK weights different time steps to form its output under different initialization parameters and nonlinearity choices. Experiments on a synthetic data set and 56 real-world data sets demonstrate that the RNTK offers significant performance gains over other kernels, including standard NTKs, across a wide array of data sets.
[ { "affiliations": [], "name": "Sina Alemohammad" }, { "affiliations": [], "name": "Zichao Wang" }, { "affiliations": [], "name": "Randall Balestriero" }, { "affiliations": [], "name": "Richard G. Baraniuk" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "On the convergence rate of training recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Sanjeev Arora", "Simon S. Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S. Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "arXiv preprint arXiv:1901.08584,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S. Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang", "Dingli Yu" ], "title": "Harnessing the power of infinitely wide deep nets on small-data tasks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Yoshua. Bengio", "Patrice. Simard", "Paolo. Frasconi" ], "title": "Learning long-term dependencies with gradient descent is difficult", "venue": "IEEE Trans. Neural Networks,", "year": 1994 }, { "authors": [ "Erwin Bolthausen" ], "title": "An iterative construction of solutions of the tap equations for the sherrington– kirkpatrick model", "venue": "Communications in Mathematical Physics,", "year": 2014 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "LIBSVM: A library for support vector machines", "venue": "ACM Transactions on Intelligent Systems and Technology,", "year": 2011 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Youngmin Cho", "Lawrence K Saul" ], "title": "Kernel methods for deep learning", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2009 }, { "authors": [ "Hoang Anh Dau", "Eamonn Keogh", "Kaveh Kamgar", "Chin-Chia Michael Yeh", "Yan Zhu", "Shaghayegh Gharghabi", "Chotirat Ann Ratanamahatana", "Yanping Chen", "Bing Hu", "Nurjahan Begum", "Anthony Bagnall", "Abdullah Mueen", "Gustavo Batista", "ML Hexagon" ], "title": "The UCR time series classification archive, 2019", "venue": null, "year": 2018 }, { "authors": [ "Simon S. Du", "Kangcheng Hou", "Russ R Salakhutdinov", "Barnabas Poczos", "Ruosong Wang", "Keyulu Xu" ], "title": "Graph neural tangent kernel: Fusing graph neural networks with graph kernels", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Simon S. 
Du", "Jason Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "David Duvenaud", "Oren Rippel", "Ryan Adams", "Zoubin Ghahramani" ], "title": "Avoiding pathologies in very deep networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2014 }, { "authors": [ "Jeffrey L. Elman" ], "title": "Finding structure in time", "venue": "Cognitive Science,", "year": 1990 }, { "authors": [ "M. Fernández-Delgado", "M.S. Sirsat", "E. Cernadas", "S. Alawadi", "S. Barro", "M. Febrero-Bande" ], "title": "An extensive experimental survey of regression methods", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Adrià Garriga-Alonso", "Carl Edward Rasmussen", "Laurence Aitchison" ], "title": "Deep convolutional networks as shallow gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mikael Henaff", "Arthur Szlam", "Yann LeCun" ], "title": "Recurrent orthogonal networks and long-memory tasks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Wei Hu", "Zhiyuan Li", "Dingli Yu" ], "title": "Simple and effective regularization methods for training on noisily labeled data with generalization guarantee", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kaixuan Huang", "Yuqing Wang", "Molei Tao", "Tuo Zhao" ], "title": "Why do deep residual networks generalize better than deep feedforward networks?–a neural tangent kernel perspective", "venue": null, "year": 2002 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Li Jing", "Yichen Shen", "Tena Dubcek", "John Peurifoy", "Scott Skirlo", "Yann LeCun", "Max Tegmark", "Marin Soljačić" ], "title": "Tunable efficient unitary neural networks (eunn) and their application to rnns", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Quoc V Le", "Navdeep Jaitly", "Geoffrey E Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Jaehoon Lee", "Jascha Sohl-dickstein", "Jeffrey Pennington", "Roman Novak", "Sam Schoenholz", "Yasaman Bahri" ], "title": "Deep neural networks as gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha SohlDickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jaehoon Lee", "Samuel S Schoenholz", "Jeffrey Pennington", "Ben Adlam", "Lechao Xiao", "Roman Novak", "Jascha Sohl-Dickstein" ], "title": "Finite versus infinite neural networks: an empirical study", "venue": null, "year": 2007 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": "PhD 
thesis, University of Toronto,", "year": 1995 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "The role of over-parametrization in generalization of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A. Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Sensitivity and generalization in neural networks: an empirical study", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Yasaman Bahri", "Jaehoon Lee", "Greg Yang", "Daniel A. Abolafia", "Jeffrey Pennington", "Jascha Sohl-dickstein" ], "title": "Bayesian deep convolutional networks with many channels are gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zichao Wang", "Randall Balestriero", "Richard Baraniuk" ], "title": "A max-affine spline perspective of recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Greg Yang" ], "title": "Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation", "venue": "arXiv preprint arXiv:1902.04760,", "year": 2019 }, { "authors": [ "Greg Yang" ], "title": "Tensor programs I: Wide feedforward or recurrent neural networks of any architecture are gaussian processes", "venue": "arXiv preprint arXiv:1910.12478,", "year": 2019 }, { "authors": [ "Greg Yang" ], "title": "Tensor programs II: Neural tangent kernel for any architecture", "venue": "arXiv preprint arXiv:2006.14548,", "year": 2020 }, { "authors": [ "Greg Yang" ], "title": "Tensor programs III: Neural matrix laws", "venue": "arXiv preprint arXiv:2009.10685,", "year": 2020 }, { "authors": [ "Difan Zou", "Yuan Cao", "Dongruo Zhou", "Quanquan Gu" ], "title": "Stochastic gradient descent optimizes over-parameterized deep ReLU networks", "venue": "arXiv preprint arXiv:1811.08888,", "year": 2018 }, { "authors": [ "Lin" ], "title": "2011) and for hyperparameter selection we performed 10-fold validation for splitting the training data into 90% training set and 10% validation test. We then choose the best performing set of hyperparameters on all the validation sets, retrain the models with the best set of hyperparameters on the entire training data and finally report the performance on the unseen test data. The performance of all kernels on each data set is shown in table", "venue": null, "year": 2011 }, { "authors": [ "Friedman Rank (Fernández-Delgado" ], "title": "2019) first ranks the accuracy of each classifier on each dataset and then takes the average of the ranks for each classifier over all", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The overparameterization of modern deep neural networks (DNNs) has resulted in not only remarkably good generalization performance on unseen data (Novak et al., 2018; Neyshabur et al., 2019; Belkin et al., 2019) but also guarantees that gradient descent learning can find the global minimum of their highly nonconvex loss functions (Du et al., 2019b; Allen-Zhu et al., 2019b;a; Zou et al., 2018; Arora et al., 2019b). From these successes, a natural question arises: What happens when we take overparameterization to the limit by allowing the width of a DNN’s hidden layers to go to infinity? Surprisingly, the analysis of such an (impractical) DNN becomes analytically tractable. Indeed, recent work has shown that the training dynamics of (infinite-width) DNNs under gradient flow is captured by a constant kernel called the Neural Tangent Kernel (NTK) that evolves according to a linear ordinary differential equation (ODE) (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019a).\nEvery DNN architecture and parameter initialization produces a distinct NTK. The original NTK was derived from the Multilayer Perceptron (MLP)(Jacot et al., 2018) and was soon followed by kernels derived from Convolutional Neural Networks (CNTK) (Arora et al., 2019a; Yang, 2019a), Residual DNNs (Huang et al., 2020), and Graph Convolutional Neural Networks (GNTK) (Du et al., 2019a). In (Yang, 2020a), a general strategy to obtain the NTK of any architecture is provided.\nIn this paper, we extend the NTK concept to the important class of overparametrized Recurrent Neural Networks (RNNs), a fundamental DNN architecture for processing sequential data. We show that RNN in its infinite-width limit converges to a kernel that we dub the Recurrent Neural Tangent Kernel (RNTK). The RNTK provides high performance for various machine learning tasks, and an analysis of the properties of the kernel provides useful insights into the behavior of RNNs in the following overparametrized regime. In particular, we derive and study the RNTK to answer the following theoretical questions:\nQ: Can the RNTK extract long-term dependencies between two data sequences? RNNs are known to underperform at learning long-term dependencies due to the gradient vanishing or exploding (Bengio et al., 1994). Attempted ameliorations have included orthogonal weights (Arjovsky et al., 2016; Jing et al., 2017; Henaff et al., 2016) and gating such as in Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) RNNs. We demonstrate that the RNTK can detect long-term dependencies with proper initialization of the hyperparameters, and moreover, we show how the dependencies are extracted through time via different hyperparameter choices.\nQ: Do the recurrent weights of the RNTK reduce its representation power compared to other NTKs? An attractive property of an RNN that is shared by the RNTK is that it can deal with sequences of different lengths via weight sharing through time. This enables the reduction of the number of learnable parameters and thus more stable training at the cost of reduced representation power. We prove the surprising fact that employing tied vs. untied weights in an RNN does not impact the analytical form of the RNTK.\nQ: Does the RNTK generalize well? 
A recent study has revealed that the use of an SVM classifier with the NTK, CNTK, and GNTK kernels outperforms other classical kernel-based classifiers and trained finite DNNs on small data sets (typically fewer than 5000 training samples) (Lee et al., 2020; Arora et al., 2019a; 2020; Du et al., 2019a). We extend these results to RNTKs to demonstrate that the RNTK outperforms a variety of classic kernels, NTKs and finite RNNs for time series data sets in both classification and regression tasks. Carefully designed experiments with data of varying lengths demonstrate that the RNTK’s performance accelerates beyond other techniques as the difference in lengths increases. Those results extend the empirical observations from (Arora et al., 2019a; 2020; Du et al., 2019a; Lee et al., 2020) into finite DNNs, NTK, CNTK, and GNTK comparisons by observing that their performance-wise ranking depends on the employed DNN architecture.\nWe summarize our contributions as follows:\n[C1] We derive the analytical form for the RNTK of an overparametrized RNN at initialization using rectified linear unit (ReLU) and error function (erf) nonlinearities for arbitrary data lengths and number of layers (Section 3.1).\n[C2] We prove that the RNTK remains constant during (overparametrized) RNN training and that the dynamics of training are simplified to a set of ordinary differential equations (ODEs) (Section 3.2).\n[C3] When the input data sequences are of equal length, we show that the RNTKs of weight-tied and weight-untied RNNs converge to the same RNTK (Section 3.3).\n[C4] Leveraging our analytical formulation of the RNTK, we empirically demonstrate how correlations between data at different times are weighted by the function learned by an RNN for different sets of hyperparameters. We also offer practical suggestions for choosing the RNN hyperparameters for deep information propagation through time (Section 3.4).\n[C5] We demonstrate that the RNTK is eminently practical by showing its superiority over classical kernels, NTKs, and finite RNNs in exhaustive experiments on time-series classification and regression with both synthetic and 56 real-world data sets (Section 4)." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Notation. We denote [n] = {1, . . . , n}, and Id the identity matrix of size d. [A]i,j represents the (i, j)-th entry of a matrix, and similarly [a]i represents the i-th entry of a vector. We use φ(·) : R → R to represent the activation function that acts coordinate wise on a vector and φ′ to denote its derivative. We will often use the rectified linear unit (ReLU) φ(x) = max(0, x) and error function (erf) φ(x) = 2√\nπ ∫ x 0 e−z 2\ndz activation functions. N (µ,Σ) represents the multidimensional Gaussian distribution with the mean vector µ and the covariance matrix Σ.\nRecurrent Neural Networks (RNNs). Given an input sequence data x = {xt}Tt=1 of length T with data at time t, xt ∈ Rm, a simple RNN (Elman, 1990) performs the following recursive computation at each layer ` and each time step t\ng(`,t)(x) = W (`)h(`,t−1)(x) +U (`)h(`−1,t)(x) + b(`), h(`,t)(x) = φ ( g(`,t)(x) ) ,\nwhere W (`) ∈ Rn×n, b(`) ∈ Rn for ` ∈ [L], U (1) ∈ Rn×m and U (`) ∈ Rn×n for ` ≥ 2 are the RNN parameters. g(`,t)(x) is the pre-activation vector at layer ` and time step t, and h(`,t)(x) is the after-activation (hidden state). For the input layer ` = 0, we define h(0,t)(x) := xt. 
h(`,0)(x) as the initial hidden state at layer ` that must be initialized to start the RNN recursive computation.\nThe output of an L-hidden layer RNN with linear read out layer is achieved via\nfθ(x) = V h (L,T )(x),\nwhere V ∈ Rd×n. Figure 1 visualizes an RNN unrolled through time.\nh(2,2)h (2,1)(x) h(2,3)(x)\nh(1,2)(x)h(1,1)(x) h(1,3)(x)\nW (2) W (2)\nW (1) W (1) U (2) U (2) U (2)\nh(2,0)(x)\nh(1,0)(x)\nW (2)\nW (1)\nx1 x2 x3\nU (1) U (1) U (1)\nNeural Tangent Kernel (NTK). Let fθ(x) ∈ Rd be the output of a DNN with parameters θ. For two input data sequences x and x′, the NTK is defined as (Jacot et al., 2018)\nΘ̂s(x,x ′) = 〈∇θsfθs(x),∇θsfθs(x′)〉,\nwhere fθs and θs are the network output and parameters during training at time s. 1 Let X and Y be the set of training inputs and targets, `(ŷ, y) : Rd × Rd → R+ be the loss function, and L = 1|X | ∑ (x,y)∈X×Y `(fθs(x),y) be the the empirical loss. The evolution of the parameters θs and output of the network fθs on a test input using gradient descent with infinitesimal step size (a.k.a gradient flow) with learning rate η is given by\n∂θs ∂s = −η∇θsfθs(X )T∇fθs (X )L (1)\n∂fθs(x)\n∂s = −η∇θsfθs(x)∇θsfθs(X )T∇fθs (X )L = −ηΘ̂s(x,X )∇fθs (X )L. (2)\nGenerally, Θ̂s(x,x′), hereafter referred to as the empirical NTK, changes over time during training, making the analysis of the training dynamics difficult. When fθs corresponds to an infinite-width MLP, (Jacot et al., 2018) showed that Θ̂s(x,x′) converges to a limiting kernel at initialization and stays constant during training, i.e.,\nlim n→∞\nΘ̂s(x,x ′) = lim\nn→∞ Θ̂0(x,x\n′) := Θ(x,x′) ∀s ,\nwhich is equivalent to replacing the outputs of the DNN by their first-order Taylor expansion in the parameter space (Lee et al., 2019). With a mean-square error (MSE) loss function, the training dynamics in (1) and (2) simplify to a set of linear ODEs, which coincides with the training dynamics of kernel ridge regression with respect to the NTK when the ridge term goes to zero. A nonzero ridge regularization can be conjured up by adding a regularization term λ 2\n2 ‖θs − θ0‖ 2 2 to the empirical loss\n(Hu et al., 2020)." }, { "heading": "3 THE RECURRENT NEURAL TANGENT KERNEL", "text": "We are now ready to derive the RNTK. We first prove the convergence of an RNN at initialization to the RNTK in the infinite-width limit and discuss various insights it provides. We then derive the convergence of an RNN after training to the RNTK. Finally, we analyze the effects of various hyperparameter choices on the RNTK. Proofs of all of our results are provided in the Appendices." }, { "heading": "3.1 RNTK FOR AN INFINITE-WIDTH RNN AT INITIALIZATION", "text": "First we specify the following parameter initialization scheme that follows previous work on NTKs (Jacot et al., 2018), which is crucial to our convergence results:\nW (`) = σ`w√ n W(`), U (1) = σ1u√ m U(1), U (`) = σ`u√ n U(`)(`≥2), V = σv√ n V, b(`) =σbb (`) , (3)\nwhere\n[W`]i,j , [U (`)]i,j , [V]i,j , [b (`)]i ∼ N (0, 1) . (4)\nWe will refer to (3) and (4) as the NTK initialization. The choices of the hyperparameters σw, σu, σv and σb can significantly impact RNN performance, and we discuss them in detail in Section\n1We use s to denote time here, since t is used to index the time steps of the RNN inputs.\n3.4. For the initial (at time t = 0) hidden state at each layer `, we set h(`,0)(x) to an i.i.d. copy of N (0, σh) (Wang et al., 2018) . 
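To make this parameterization concrete, the following minimal NumPy sketch runs a single-layer, single-output RNN forward pass with the variance-scaled weights of (3)–(4), a ReLU nonlinearity, and an initial hidden state drawn with scale σh. The variable names and the toy input are illustrative assumptions, not the code used for the experiments.

```python
import numpy as np

def rnn_forward_ntk(x, n=512, sw=2**0.5, su=1.0, sb=0.0, sh=0.0, sv=1.0, seed=0):
    # Single-layer simple RNN under the NTK parameterization of Eqs. (3)-(4).
    # x: (T, m) input sequence; returns the scalar output f_theta(x).
    rng = np.random.default_rng(seed)
    T, m = x.shape
    W = rng.standard_normal((n, n))      # recurrent weights, entries ~ N(0, 1)
    U = rng.standard_normal((n, m))      # input weights
    b = rng.standard_normal(n)           # bias
    v = rng.standard_normal(n)           # readout
    h = sh * rng.standard_normal(n)      # initial hidden state with scale sigma_h
    for t in range(T):
        g = (sw / np.sqrt(n)) * W @ h + (su / np.sqrt(m)) * U @ x[t] + sb * b
        h = np.maximum(g, 0.0)           # phi = ReLU
    return (sv / np.sqrt(n)) * v @ h

x = np.array([[1.0], [-1.0], [1.0]])     # the toy length-3 input used in Figure 2
print(rnn_forward_ntk(x))
```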
For convenience, we collect all of the learnable parameters of the RNN into θ = vect [ {{W(`),U(`),b(`)}L`=1,V} ] .\nThe derivation of the RNTK at initialization is based on the correspondence between Gaussian initialized, infinite-width DNNs and Gaussian Processes (GPs), known as the DNN-GP. In this setting every coordinate of the DNN output tends to a GP as the number of units/neurons in the hidden layer (its width) goes to infinity. The corresponding DNN-GP kernel is computed as\nK(x,x′) = E θ∼N\n[ [fθ(x)]i · [fθ(x′)]i ] , ∀i ∈ [d]. (5)\nFirst introduced for a single-layer, fully-connected neural network by (Neal, 1995), recent works on NTKs have extended the results for various DNN architectures (Lee et al., 2018; Duvenaud et al., 2014; Novak et al., 2019; Garriga-Alonso et al., 2019; Yang, 2019b), where in addition to the output, all pre-activation layers of the DNN tends to a GPs in the infinite-width limit. In the case of RNNs, each coordinate of the RNN pre-activation g(`,t)(x) converges to a centered GP depending on the inputs with kernel\nΣ(`,t,t ′)(x,x′) = E\nθ∼N\n[ [g(`,t)(x)]i · [g(`,t ′)(x′)]i ] ∀i ∈ [n]. (6)\nAs per (Yang, 2019a), the gradients of random infinite-width DNNs computed during backpropagation are also Gaussian distributed. In the case of RNNs, every coordinate of the vector δ(`,t)(x) :=√ n ( ∇g(`,t)(x)fθ(x) ) converges to a GP with kernel\nΠ(`,t,t ′)(x,x′) = E\nθ∼N\n[ [δ(`,t)(x)]i · [δ(`,t ′)(x′)]i ] ∀i ∈ [n]. (7)\nBoth convergences occur independently of the coordinate index i and for inputs of possibly different lengths, i.e., T 6= T ′. With (6) and (7), we now prove that an infinite-width RNN at initialization converges to the limiting RNTK.\nTheorem 1 Let x and x′ be two data sequences of potentially different lengths T and T ′, respectively. Without loss of generality, assume that T ≤ T ′, and let τ := T ′ − T . Let n be the number of units in the hidden layers, the empirical RNTK for an L-layer RNN with NTK initialization converges to the following limiting kernel as n→∞\nlim n→∞\nΘ̂0(x,x ′) = Θ(x,x′) = Θ(L,T,T ′)(x,x′)⊗ Id , (8)\nwhere\nΘ(L,T,T ′)(x,x′) = ( L∑ `=1 T∑ t=1 ( Π(`,t,t+τ)(x,x′) · Σ(`,t,t+τ)(x,x′) )) +K(x,x′) , (9)\nwith K(x,x′), Σ(`,t,t+τ)(x,x′), and Π(`,t,t+τ)(x,x′) defined in (5)–(7).\nRemarks. Theorem 1 holds generally for any two data sequences, including different lengths ones. This highlights the RNTK’s ability to produce a similarity measure Θ(x,x′) even if the inputs are of different lengths, without resorting to heuristics such as zero padding the inputs to the to the max length of both sequences. Dealing with data of different length is in sharp contrast to common kernels such as the classical radial basis functions, polynomial kernels, and current NTKs. We showcase this capability below in Section 4.\nTo visualize Theorem 1, we plot in the left plot in Figure 2 the convergence of a single layer, sufficiently wide RNN to its RNTK with the two simple inputs x = {1,−1, 1} of length 3 and x′ = {cos(α), sin(α)} of length 2, where α = [0, 2π]. For an RNN with a sufficiently large hidden state (n = 1000), we see clearly that it converges to the RNTK (n =∞). RNTK Example for a Single-Layer RNN. We present a concrete example of Theorem 1 by showing how to recursively compute the RNTK for a single-layer RNN; thus we drop the layer index for notational simplicity. We compute and display the RNTK for the general case of a multilayer RNN in Appendix B.3. 
To compute the RNTK Θ(T,T ′)(x,x′), we need to compute the GP\nkernels Σ(t,t+τ)(x,x′) and Π(t,t+τ)(x,x′). We first define the operator Vφ [ K ]\nthat depends on the nonlinearity φ(·) and a positive semi-definite matrixK ∈ R2×2\nVφ [ K ]\n= E[φ(z1) · φ(z2)], (z1, z2) ∼ N (0,K) . (10) Following (Yang, 2019a), we obtain the analytical recursive formula for the GP kernel Σ(t,t+τ)(x,x′) for a single layer RNN as\nΣ(1,1)(x,x′) = σ2wσ 2 h1(x=x′) + σ2u m 〈x1,x′1〉+ σ2b (11) Σ(t,1)(x,x′) = σ2u m 〈xt,x′1〉+ σ2b t > 1 (12)\nΣ(1,t ′)(x,x′) = σ2u m 〈x1,x′t′〉+ σ2b t′ > 1 (13)\nΣ(t,t ′)(x,x′) = σ2wVφ [ K(t,t ′)(x,x′) ]\n+ σ2u m 〈xt,x′t′〉+ σ2b t, t′ > 1 (14)\nK(x,x′) = σ2vVφ [ K(T+1,T ′+1)(x,x′) ] , (15)\nwhere\nK(t,t ′)(x,x′) =\n[ Σ(t−1,t−1)(x,x) Σ(t−1,t ′−1)(x,x′)\nΣ(t−1,t ′−1)(x,x′) Σ(t ′−1,t′−1)(x′,x′)\n] . (16)\nSimilarly, we obtain the analytical recursive formula for the GP kernel Π(t,t+τ)(x,x′) as\nΠ(T,T ′)(x,x′) = σ2vVφ′ [ K(T+1,T+τ+1)(x,x′) ] (17)\nΠ(t,t+τ)(x,x′) = σ2wVφ′ [ K(t+1,t+τ+1)(x,x′) ] Π(t+1,t+1+τ)(x,x′) t ∈ [T − 1] (18)\nΠ(t,t ′)(x,x′) = 0 t′ − t 6= τ. (19) For φ = ReLU and φ = erf , we provide analytical expressions for Vφ [ K ] and Vφ′ [ K ]\nin Appendix B.5. These yield an explicit formula for the RNTK that enables fast and point-wise kernel evaluations. For other activation functions, one can apply the Monte Carlo approximation to obtain Vφ [ K ]\nand Vφ′ [ K ] (Novak et al., 2019)." }, { "heading": "3.2 RNTK FOR AN INFINITE-WIDTH RNN DURING TRAINING", "text": "We prove that an infinitely-wide RNN, not only at initialization but also during gradient descent training, converges to the limiting RNTK at initialization.\nTheorem 2 Let n be the number of units of each RNN’s layer. Assume that Θ(X ,X ) is positive definite on X such that λmin(Θ(X ,X )) > 0. Let η∗ := 2 ( λmin(Θ(X ,X )) + λmax(Θ(X ,X ) ))−1 . For an L-layer RNN with NTK initialization as in (3), (4) trained under gradient flow (recall (1) and (2)) with η < η∗, we have with high probability\nsup s ‖θs − θ0‖2√ n , sup s ‖Θ̂s(X ,X )− Θ̂0(X ,X )‖2 = O ( 1√ n ) .\nRemarks. Theorem 2 states that the training dynamics of an RNN in the infinite-width limit as in (1), (2) are governed by the RNTK derived from the RNN at its initialization. Intuitively, this is due to the NTK initialization (3), (4) which positions the parameters near a local minima, thus minimizing the amount of update that needs to be applied to the weights to obtain the final parameters." }, { "heading": "3.3 RNTK FOR AN INFINITE-WIDTH RNN WITHOUT WEIGHT SHARING", "text": "We prove that, in the infinite-width limit, an RNN without weight sharing (untied weights), i.e., using independent new weights W(`,t), U(`,t) and b(`,t) at each time step t, converges to the same RNTK as an RNN with weight sharing (tied weights). First, recall that it is a common practice to use weight-tied RNNs, i.e., in layer `, the weights W(`), U(`) and b(`) are the same across all time steps t. This practice conserves memory and reduces the number of learnable parameters. We demonstrate that, when using untied-weights, the RNTK formula remains unchanged.\nTheorem 3 For inputs of the same length, an RNN with untied weights converges to the same RNTK as an RNN with tied weights in the infinite-width (n→∞) regime.\nRemarks. Theorem 3 implies that weight-tied and weight-untied RNNs have similar behaviors in the infinite-width limit. It also suggests that existing results on the simpler, weight-untied RNN setting may be applicable for the more general, weight-tied RNN. 
The plot on the right side of Figure 2 empirically demonstrates the convergence of both the weight-tied and weight-untied RNNs to the RNTK with increasing hidden layer size n; moreover, the convergence rates are similar." }, { "heading": "3.4 INSIGHTS INTO THE ROLES OF THE RNTK’S HYPERPARAMETERS", "text": "Our analytical form for the RNTK is fully determined by a small number of hyperparameters, which contains the various weight variances collected into S = {σw, σu, σb, σh} and the activation function.2 In standard supervised-learning settings, one often performs cross-validation to select the hyperparameters. However, since kernel methods become computationally intractable for large datasets, we seek a more computationally friendly alternative to cross-validation. Here we conduct a novel exploratory analysis that provides new insights into the impact of the RNTK hyperparameters on the RNTK output and suggests a simple method to select them a priori in a deliberate manner.\nTo visualize the role of the RNTK hyperparameters, we introduce the sensitivity s(t) of the RNTK of two input sequences x and x′ with respect to the input xt at time t\ns(t) = ‖∇xtΘ(x,x′)‖2 . (20)\n2From (11) to (18) we emphasize that σv merely scales the RNTK and does not change its overall behavior.\nHere, s(t) indicates how sensitive the RNTK is to the data at time t, i.e., xt, in presence of another data sequence x′. Intuitively, large/small s(t) indicates that the RNTK is relatively sensitive/insensitive to the input xt at time t.\nThe sensitivity is crucial to understanding to which extent the RNTK prediction is impacted by the input at each time step. In the case where some time indices have a small sensitivity, then any input variation in those corresponding times will not alter the RNTK output and thus will produce a metric that is invariant to those changes. This situation can be beneficial or detrimental based on the task at hand. Ideally, and in the absence of prior knowledge on the data, one should aim to have a roughly constant sensitivity across time in order to treat all time steps equally in the RNTK input comparison.\nFigure 3 plots the normalized sensitivity s(t)/maxt(s(t)) for two data sequences of the same length T = 100, with s(t) computed numerically for xt,x′t ∼ N (0, 1). We repeated the experiments 10000 times; the mean of the sensitivity is shown in Figure 3. Each of the plots shows the changes of parameters SReLU = { √ 2, 1, 0, 0} for φ = ReLU and Serf = {1, 0.01, 0.05, 0} for φ = erf .\nFrom Figure 3 we first observe that both ReLU and erf show similar per time step sensitivity measure s(t) behavior around the hyperparameters SReLU and Serf . If one varies any of the weight variance parameters, the sensitivity exhibits a wide range of behavior, and in particular with erf . We observe that σw has a major influence on s(t). For ReLU, a small decrease/increase in σw can lead to over-sensitivity of the RNTK to data at the last/first times steps, whereas for erf , any changes in σw leads to over-sensitivity to the last time steps.\nAnother notable observation is the importance of σh, which is usually set to zero for RNNs. (Wang et al., 2018) showed that a non-zero σh acts as a regularization that improves the performance of RNNs with the ReLU nonlinearity. From the sensitivity perspective, a non-zero σh results in reducing the importance of the first time steps of the input. We also see the same behavior in erf , but with stronger changes as σh increases. 
Hence whenever one aims at reinforcing the input pairwise comparisons, such parameters should be favored.\nThis sensitivity analysis provides a practical tool for RNTK hyperparameter tuning. In the absence of knowledge about the data, hyperparameters should be chosen to produce the least time varying sensitivity. If given a priori knowledge, hyperparameters can be selected that direct the RNTK to the desired time-steps." }, { "heading": "4 EXPERIMENTS", "text": "We now empirically validate the performance of the RNTK compared to classic kernels, NTKs, and trained RNNs on both classification and regression tasks using a large number of time series data sets. Of particular interest is the capability of the RNTK to offer high performance even on inputs of different lengths.\nTime Series Classification. The first set of experiments considers time series inputs of the same lengths from 56 datasets in the UCR time-series classification data repository (Dau et al., 2019). We restrict ourselves to selected data sets with fewer than 1000 training samples and fewer than 1000 time steps (T ) as kernel methods become rapidly intractable for larger datasets. We compare the RNTK with a variety of other kernels, including the Radial Basis Kernel (RBF), polynomial kernel, and NTK (Jacot et al., 2018), as well as finite RNNs with Gaussian, identity (Le et al., 2015) initialization, and GRU (Cho et al., 2014). We use φ = ReLU for both the RNTKs and NTKs. For each kernel, we train a C-SVM (Chang & Lin, 2011) classifier, and for each finite RNN we use gradient descent training. For model hyperparameter tuning, we use 10-fold cross-validation. Details on the data sets and experimental setup are available in Appendix A.1.\nWe summarize the classification results over all 56 datasets in Table 1; detailed results on each data set is available in Appendix A.2. We see that the RNTK outperforms not only the classical kernels but also the NTK and trained RNNs in all metrics. The results demonstrate the ability of RNTK to provide increased performances compare to various other methods (kernels and RNNs). The superior performance of RNTK compared to other kernels, including NTK, can be explained by the internal recurrent mechanism present in RNTK, allowing time-series adapted sample comparison. In addition, RNTK also outperforms RNN and GRU. As the datasets we consider are relative small in size, finite RNNs and GRUs that typically require large amount of data to succeed do not perform well in our setting. An interesting future direction would be to compare RNTK to RNN/GRU on larger datasets.\nTime Series Regression. We now validate the performance of the RNTK on time series inputs of different lengths on both synthetic data and real data. For both scenarios, the target is to predict the next time-step observation of the randomly extracted windows of different length using kernel ridge regression.\nWe compare the RNTK to other kernels, the RBF and polynomial kernels and the NTK. We also compare our results with a data independent predictor that requires no training, that is simply to predict the next time step with previous time step (PTS).\nFor the synthetic data experiment, we simulate 1000 samples of one period of a sinusoid and add white Gaussian noise with default σn = 0.05. From this fixed data, we extract training set size Ntrain = 20 segments of uniform random lengths in the range of [Tfixed, Tfixed + Tvar] with Tfixed = 10. We use standard kernel ridge regression for this task. 
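A minimal sketch of how a precomputed kernel enters this step is given below: kernel ridge regression with the Gram matrix of the training segments and the cross-kernel between test and training segments. The toy linear kernel, the ridge value, and the synthetic data are placeholders; in the actual experiment the kernel must accept variable-length inputs, which is exactly what the RNTK provides.

```python
import numpy as np

def kernel_ridge_fit_predict(K_train, y_train, K_test_train, ridge=1e-2):
    # K_train:      (N, N) Gram matrix of training segments.
    # K_test_train: (M, N) kernel between test and training segments.
    alpha = np.linalg.solve(K_train + ridge * np.eye(K_train.shape[0]), y_train)
    return K_test_train @ alpha

def gram(kernel, X_a, X_b):
    # Pairwise kernel evaluations for two lists of (possibly variable-length) sequences.
    return np.array([[kernel(a, b) for b in X_b] for a in X_a])

# Toy usage with a placeholder linear kernel on equal-length scalar segments.
rng = np.random.default_rng(0)
X_train = [rng.standard_normal((10, 1)) for _ in range(20)]
y_train = np.array([seq.sum() for seq in X_train])
X_test = [rng.standard_normal((10, 1)) for _ in range(5)]
k = lambda a, b: float(a.ravel() @ b.ravel())
y_hat = kernel_ridge_fit_predict(gram(k, X_train, X_train), y_train,
                                 gram(k, X_test, X_train))
print(y_hat.shape)                                        # (5,)
```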
The test set is comprised of Ntest = 5000 obtained from other randomly extracted segments, again of varying lengths. For the real data, we use 975 days of the Google stock value in the years 2014–2018. As in the simulated signal setup above, we extract Ntrain segments of different lengths from the first 700 days and test on the Ntest segments from days 701 to 975. Details of the experiment are available in Appendix A.2.\nWe report the predicted signal-to-noise ratio (SNR) for both datasets in Figures 4a and 4c for various values of Tvar. We vary the noise level and training set size for fixed Tvar = 10 in Figures 4b and 4d. As we see from Figures 4a and 4c, the RNTK offers substantial performance gains compared to the other kernels, due to its ability to naturally deal with variable length inputs. Moreover, the performance gap increases with the amount of length variation of the inputs Tvar. Figure 4d demonstrates that, unlike the other methods, the RNTK maintains its performance even when the training set is small. Finally, Figure 4c demonstrates that the impact of noise in the data on the regression performance is roughly the same for all models but becomes more important for RNTK with a large σn; this might be attributed to the recurrent structure of the model allowing for a time propagation and amplification of the noise for very low SNR. These experiments demonstrate the distinctive advantages of the RNTK over classical kernels, and NTKs for input data sequences of varying lengths.\nIn the case of PTS, we expect the predictor to outperform kernel methods when learning from the training samples is hard, due to noise in the data or small training size which can lead to over fitting. In Figure 4a RNTK and Polynomial kernels outperforms PTS for all values of Tvar, but for larger Tvar, NTK and RBF under perform PTS due to the increasing detrimental effect of zero padding.\nFor the Google stock value, we see a superior performance of PTS with respect to all other kernel methods due to the nature of those data heavily relying on close past data. However, RNTK is able to reduce the effect of over-fitting, and provide the closest results to PTS among all kernel methods we employed, with increasing performance as the number of training samples increase." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we have derived the RNTK based on the architecture of a simple RNN. We have proved that, at initialization, after training, and without weight sharing, any simple RNN converges to the same RNTK. This convergence provides new insights into the behavior of infinite-width RNNs, including how they process different-length inputs, their training dynamics, and the sensitivity of their output at every time step to different nonlinearities and initializations. We have highlighted the RNTK’s practical utility by demonstrating its superior performance on time series classification and regression compared to a range of classical kernels, the NTK, and trained RNNs. There are many avenues for future research, including developing RNTKs for gated RNNs such as the LSTM (Hochreiter & Schmidhuber, 1997) and investigating which of our theoretical insights extend to finite RNNs." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2787, and N00014-20-1-2534; AFOSR grant FA9550-18-1-0478; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047." 
}, { "heading": "A EXPERIMENT DETAILS", "text": "" }, { "heading": "A.1 TIME SERIES CLASSIFICATION", "text": "Kernel methods settings. We used RNTK, RBF, polynomial and NTK (Jacot et al., 2018). For data pre-processing, we normalized the norm of each x to 1. For training we used C-SVM in LIBSVM library (Chang & Lin, 2011) and for hyperparameter selection we performed 10-fold validation for splitting the training data into 90% training set and 10% validation test. We then choose the best performing set of hyperparameters on all the validation sets, retrain the models with the best set of hyperparameters on the entire training data and finally report the performance on the unseen test data. The performance of all kernels on each data set is shown in table 2.\nFor C-SVM we chose the cost function value\nC ∈ {0.01, 0.1, 1, 10, 100}\nand for each kernel we used the following hyperparameter sets\n• RNTK: We only used single layer RNTK, we φ = ReLU and the following hyperparameter sets for the variances:\nσw ∈ {1.34, 1.35, 1.36, 1.37, 1.38, 1.39, 1.40, 1.41, 1.42, √\n2, 1.43, 1.44, 1.45, 1.46, 1.47} σu = 1\nσb ∈ {0, 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1, 2} σh ∈ {0, 0.01, 0.1, 0.5, 1}\n• NTK: The formula for NTK of L-layer MLP (Jacot et al., 2018) for x,x′ ∈ Rm is:\nΣ(1) = σ2w m 〈x,x′〉+ σ2b\nΣ(`)(x,x′) = σ2wVφ[K (`)(x,x′)] + σ2b ` ∈ [L] Σ̇(`)(x,x′) = σ2wVφ′ [K (`+1)(x,x′)] ` ∈ [L]\nK(`)(x,x′) = [ Σ(`−1)(x,x) Σ(`−1)(x,x′) Σ(`−1)(x,x′) Σ(`−1)(x′,x′) ] K(x,x′) = σ2vVφ[K(L+1)(x,x′)]\nkNTK = L∑ `=1\n( Σ(`)(x,x′)\nL∏ `′=` Σ̇(`)(x,x′)\n) +K(x,x′)\nand we used the following hyperparamters\nL ∈ [10] σw ∈ {0.5, 1, √ 2, 2, 2.5, 3}\nσb ∈ {0, 0.01, 0.1, 0.2, 0.5, 0.8, 1, 2, 5}\n• RBF:\nkRBF(x,x ′) = e(−α‖x−x ′‖22)\nα ∈ {0.01, 0.05, 0.1, 0.2, 0.5, 0.6, 0.7, 0.8, 1, 2, 3, 4, 5, 10, 20, 30, 40, 100}\n• Polynomial:\nkPolynomial(x,x ′) = (r + 〈x,x′〉)d\nd ∈ [5] r ∈ {0, 0.1, 0.2, 0.5, 1, 2}\nFinite-width RNN settings. We used 3 different RNNs. The first is a ReLU RNN with Gaussian initialization with the same NTK initialization scheme, where parameter variances are σw = σv = √ 2,\nσu = 1 and σb = 0. The second is a ReLU RNN with identity initialization following (Le et al., 2015). The third is a GRU (Cho et al., 2014) with uniform initialization. All models are trained with RMSProp algorithm for 200 epochs. Early stopping is implemented when the validation set accuracy does not improve for 5 consecutive epochs.\nWe perform standard 5-fold cross validation. For each RNN architecture we used hyperparamters of number of layer, number of hidden units and learning rate as\nL ∈ {1, 2} n ∈ {50, 100, 200, 500}\nη ∈ {0.01, 0.001, 0.0001, 0.00001}\nMetrics descriptions First, only in this paragraph, let i ∈ {1, 2, ..., N} index a total of N datasets and j ∈ {1, 2, ...,M} index a total of M classifiers. Let yij be the accuracy of the j-th classifer on the i-th dataset. We reported results on 4 metrics: average accuracy (Acc. mean), P90, P95, PMA and Friedman Rank. P90 and P95 is the fraction of datasets that the classifier achieves at least 90% and 95% of the maximum achievable accuracy for each dataset, i.e.,\nP90j = 1\nN ∑ i 1(yij ≥ 0.9(max j yij)) . (21)\nPMA is the accuracy of the classifier on a dataset divided by the maximum achievable accuracy on that dataset, averaged over all datasets:\nPMAj = 1\nN ∑ i yij max j yij . 
(22)\nFriedman Rank (Fernández-Delgado et al., 2019) first ranks the accuracy of each classifier on each dataset and then takes the average of the ranks for each classifier over all datasets, i.e.,\nFRj = 1\nN ∑ i rij , (23)\nwhere rij is the ranking of the j-th classifier on the i-th dataset.\nNote that a better classifier achieves a lower Friedman Rank, Higher P/90 and PMA.\nRemark. In order to provide insight into the performance of RNTK in long time steps setting, we picked two datasets with more that 1000 times steps: SemgHandSubjectCh2 (T = 1024) and StarLightCurves (T = 1024)." }, { "heading": "A.2 TIME SERIES REGRESSION", "text": "For time series regression, we used the 5-fold validation of training set and same hyperparamter sets for all kernels. For training we kernel ridge regression with ridge term chosen form\nλ ∈ {0, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 1, 2, 3, 4, 5, 6, 7, 8, 10, 100}" }, { "heading": "B PROOFS FOR THEOREMS 1 AND 3: RNTK CONVERGENCE AT INITIALIZATION", "text": "" }, { "heading": "B.1 PRELIMINARY: NETSOR PROGRAMS", "text": "Calculation of NTK in any architecture relies on finding the GP kernels that correspond to each pre-activation and gradient layers at initialization. For feedforward neural networks with n1, . . . , nL number of neurons (channels in CNNs) at each layer the form of this GP kernels can be calculated via taking the limit of n1, . . . , nL sequentially one by one. The proof is given by induction, where by conditioning on the previous layers, each entry of the current layer is sum of infinite i.i.d Gaussian random variables, and based on Central Limit Theorem (CLT), it becomes a Gaussian process with\nkernel calculated based on the previous layers. Since the first layer is an affine transformation of input with Gaussian weights, it is a Gaussian process and the proof is completed. See (Lee et al., 2018; Duvenaud et al., 2014; Novak et al., 2019; Garriga-Alonso et al., 2019) for a formal treatment. However, due to weight-sharing, sequential limit is not possible and condoning on previous layers does not result in i.i.d. weights. Hence the aforementioned arguments break. To deal with it, in (Yang, 2019a) a proof using Gaussian conditioning trick (Bolthausen, 2014) is presented which allows use of recurrent weights in a network. More precisely, it has been demonstrated than neural networks (without batch normalization) can be expressed and a series of matrix multiplication and (piece wise) nonlinearity application, generally referred as Netsor programs. It has been shown that any architecture that can be expressed as Netsor programs that converge to GPs as width goes to infinity in the same rate, which a general rule to obtain the GP kernels. For completeness of this paper, we briefly restate the results from (Yang, 2019a) which we will use later for calculation derivation of RNTK.\nThere are 3 types of variables in Netsor programs; A-vars, G-vars and H-vars. A-vars are matrices and vectors with i.i.d Gaussian entries, G-vars are vectors introduced by multiplication of a vector by an A-var and H-vars are vectors after coordinate wise nonlinearities is applied to G-vars. Generally, G-vars can be thought of as pre-activation layers which are asymptotically treated as a Gaussian distributed vectors, H-vars as after-activation layers and A-vars are the weights. Since in neural networks inputs are immediately multiplied by a weight matrix, it can be thought of as an G-var, namely gin. 
Generally Netsor programs supports G-vars with different dimension, however the asymptotic behavior of a neural networks described by Netsor programs does not change under this degree of freedom, as long as they go to infinity at the same rate. For simplicity, let the G-vars and H-vars have the same dimension n since the network of interest is RNN and all pre-activation layers have the same dimension. We introduce the Netsor programs under this simplification. To produce the output of a neural network, Netsor programs receive a set of G-vars and A-vars as input, and new variables are produced sequentially using the three following operators:\n• Matmul : multiplication of an A-var: A with an H-var: h, which produce a new G-var, g. g = Ah (24)\n• Lincomp: Linear combination of G-vars, gi, 1 ≤ i ≤ k , with coefficients ai ∈ R 1 ≤ i ≤ k which produce of new G-var:\ng = k∑ i=1 aigi (25)\n• Nonlin: creating a new H-var, h, by using a nonlinear function φ : Rk → R that act coordinate wise on a set of G-vars, gi, 1 ≤ i ≤ k :\nh = ϕ(g1, . . . , gk) (26)\nAny output of the neural network y ∈ R should be expressed as inner product of a new A-var which has not been used anywhere else in previous computations and an H-var:\ny = v>h\nAny other output can be produced by another v′ and h′ (possibility the same h or v).\nIt is assumed that each entry of any A-var : A ∈ Rn×n in the netsor programs computations is drawn from N (0, σ 2 a\nn ) and the input G-vars are Gaussian distributed. The collection of a specific entry of all G-vars of in the netsor program converges in probability to a Gaussian vector {[g1]i, . . . , [gk]i} ∼ N (µ,Σ) for all i ∈ [n] as n goes to infinity. Let µ(g) := E [ [g]i ] be the mean of aG-var and Σ(g, g′) := E [ [g]i · [g′]i ] be the covariance between any two G-vars. The general rule for µ(g) is given by the following equations:\nµ(g) = µin(g) if g is input k∑ i=1 aiµ(gi) if g = k∑ i=1 aigi\n0 otherwise\n(27)\nFor g and g′, let G = {g1, . . . , gr} be the set of G-vars that has been introduced before g and g′ with distribution N (µG ,ΣG), where ΣG ∈ R|G|×|G| containing the pairwise covariances between the G-vars. Σ(g, g′) is calculated via the following rules:\nΣ(g, g′) = Σin(g, g′) if g and g′ are inputs k∑ i=1 aiΣ(gi, g′) if g = k∑ i=1 aigi k∑ i=1 aiΣ(g, gi) if g′ = k∑ i=1 aigi σ2A E z∼N (µ,ΣG) [ϕ(z)ϕ̄(z)] if g = Ah and g′ = Ah′\n0 otherwise\n(28)\nWhere h = ϕ(g1, . . . , gr) and h′ = ϕ̄(g1, . . . , gr) are functions of G-vars in G from possibly different nonlinearities. This set of rules presents a recursive method for calculating the GP kernels in a network where the recursive formula starts from data dependent quantities Σin and µin which are given.\nAll the above results holds when the nonlinearities are bounded uniformly by e(cx 2−α) for some α > 0 and when their derivatives exist.\nStandard vs. NTK initialization. The common practice (which netsor programs uses) is to initialize DNNs weights [A]i,j with N (0, σa√n ) (known as standard initialization) where generally n is the number of units in the previous layer. In this paper we have used a different parameterization scheme as used in (Jacot et al., 2018) and we factor the standard deviation as shown in 3 and initialize weights with standard standard Gaussian. This approach does not change the the forward computation of DNN, but normalizes the backward computation (when computing the gradients) by factor 1n , otherwise RNTK will be scales by n. 
However this problem can be solved by scaling the step size by 1 n and there is no difference between NTK and standard initialization (Lee et al., 2019)." }, { "heading": "B.2 PROOF FOR THEOREM 1: SINGLE LAYER CASE", "text": "We first derive the RNTK in a simpler setting, i.e., a single layer and single output RNN. We then generalize the results to multi-layer and multi-output RNNs. We drop the layer index ` to simplify notation. From 3 and 4, the forward pass for computing the output under NTK initialization for each input x = {xt}Tt=1 is given by:\ng(t)(x) = σw√ m Wh(t−1)(x) + σu√ n Uxt + σbb (29)\nh(t)(x) = φ ( g(t)(x) ) (30)\nfθ(x) = σv√ n v>h(T )(x) (31)\nNote that (29), (30) and (31) use all the introduced operators introduced in 24, 25 and 26 given input variables W, {Uxt}Tt=1,b,v and h(0)(x).\nFirst, we compute the kernels of forward pass Σ(t,t ′)(x,x′) and backward pass Π(t,t ′)(x,x′) introduced in (6) and (7) for two input x and x′. Note that based on (27) the mean of all variables is zero since the inputs are all zero mean. In the forward pass for the intermediate layers we have:\nΣ(t,t ′)(x,x′) = Σ(g(t)(x), g(t ′)(x′))\n= Σ ( σw√ n Wh(t−1)(x) + σu√ m Uxt + σbb, σw√ n Wh(t ′−1)(x′) + σu√ m Ux′t′ + σbb ) = Σ ( σw√ n Wh(t−1)(x), σw√ n Wh(t ′−1)(x′) ) + Σin ( σu√ m Uxt, σu√ m Ux′t′ ) + Σin (σbb, σbb) .\nWe have used the second and third rule in (28) to expand the formula, We have also used the first and fifth rule to set the cross term to zero, i.e.,\nΣ ( σw√ n Wh(t−1)(x), σu√ n Ux′t′ ) = 0\nΣ ( σw√ n Wh(t−1)(x), σbb ) = 0\nΣ ( σu√ m Uxt, σw√ n Wh(t ′−1)(x′) ) = 0\nΣ ( σbb,\nσw√ n Wh(t ′−1)(x′)\n) = 0\nΣin ( σu√ m Uxt, σbb ) = 0\nΣin ( σbb,\nσu√ m Ux′t′\n) = 0.\nFor the non-zero terms we have\nΣin (σbb, σbb) = σ 2 b Σin ( σu√ m Uxt, σu√ m Ux′t′ ) = σ2u m 〈xt,x′t′〉,\nwhich can be achieved by straight forward computation. If t 6= 1 and t′ 6= 1, by using the forth rule in (28) we have\nΣ ( σw√ n Wh(t−1)(x), σw√ n Wh(t ′−1)(x′) ) = σ2w E z∼N (0,K(t,t′)(x,x′)) [φ(z1)φ(z2)] = Vφ [ K(t,t ′)(x,x′) ] .\nWithK(t,t ′)(x,x′) defined in (16). Otherwise, it will be zero by the fifth rule (if t or t = 1) . Here the set of previously introduced G-vars is G ={ {g(α)(x)},Uxα}t−1α=1, {g(α ′)(x′),Ux′α′} t′−1 α′=1,h (0)(x),h(0)(x′) }\n, but the dependency is only on the last layer G-vars, ϕ({g : g ∈ G}) = φ(g(t−1)(x)), ϕ̄(({g : g ∈ G})) = φ(g(t′−1)(x′)), leading the calculation to the operator defined in (10). As a result\nΣ(t,t ′)(x,x′) = σ2wVφ [ K(t,t ′)(x,x′) ]\n+ σ2u m 〈xt,x′t′〉+ σ2b .\nTo complete the recursive formula, using the same procedure for the first layers we have\nΣ(1,1)(x,x′) = σ2wσ 2 h1(x=x′) + σ2u m 〈x1,x′1〉+ σ2b ,\nΣ(1,t ′)(x,x′) = σ2u m 〈x1,x′t′〉+ σ2b ,\nΣ(t,1)(x,x′) = σ2u m 〈xt,x′1〉+ σ2b .\nThe output GP kernel is calculated via K(x,x′) = σ2vVφ [ K(T+1,T ′+1)(x,x′) ] The calculation of the gradient vectors δ(t)(x) = √ n ( ∇g(t)(x)fθ(x) ) in the backward pass is given by\nδ(T )(x) = σvv φ′(g(T )(x))\nδ(t)(x) = σw√ n\nW> ( φ′(g(t)(x)) δ(t+1)(x) ) t ∈ [T − 1]\nTo calculate the backward pass kernels, we rely on the following Corollary from (Yang, 2020b)\nCorollary 1 In infinitely wide neural networks weights used in calculation of back propagation gradients (W>) is an i.i.d copy of weights used in forward propagation (W) as long as the last layer weight (v) is sampled independently from other parameters and has mean 0.\nThe immediate result of Corollary 1 is that g(t)(x) and δ(t)(x) are two independent Gaussian vector as their covariance is zero based on the fifth rule in (28). 
Using this result, we have:\nΠ(t,t ′)(x,x′) = Σ ( δ(t)(x), δ(t ′)(x) )\n= E [ [δ(t)(x)]i · [δ(t ′)(x′)]i ] = σ2wE [ [φ′(g(t)(x))]i · [δ(t+1)(x)]i · [φ′(g(t ′)(x′))]i · [δ(t ′+1)(x′)]i\n] = σ2w E\nz∼N (0,K(t+1,t+1′)(x,x′)) [φ′(z1) · φ′(z2)] · E\n[ [δ(t+1)(x)]i · [δ(t ′+1)(x′)]i ] = σ2wVφ′ [ K(t+1,t ′+1)(x,x′) ] Π(t+1,t ′+1)(x,x′).\nIf T ′ − t′ = T − t, then the the formula will lead to\nΠ(T,T ′)(x,x′) = E [ [δ(T )(x)]i, [δ (T ′)(x′)]i ] = σ2vE [ [v]i · [φ′(g(T )(x))]i · [v]i · [φ′(g(T ′)(x′))]i\n] = E [ [φ′(g(T )(x))]i · [φ′(g(T ′)(x′))]i ] · E [[v]i [v]i]\n= σ2vVφ′ [ K(T+1,T+τ+1)(x,x′) ] .\nOtherwise it will end to either of two cases for some t′′ < T or T ′ and by the fifth rule in (28) we have:\nΣ ( δ(t ′′)(x), δ(T ′)(x) ) = Σ ( σw√ n W> ( φ′(g(t ′′)(x)) δ(t ′′+1)(x′) ) ,v φ′(g(T ′)(x)) ) = 0\nΣ ( δ(T )(x), δ(t ′′)(x) ) = Σ ( v φ′(g(T )(x)), σw√\nn W>\n( φ′(g(t ′′)(x′)) δ(t ′′+1)(x′) )) = 0.\nWithout loss of generality, from now on assume T ′ < T and T ′ − T = τ , the final formula for computing the backward gradients becomes:\nΠ(T,T+τ)(x,x′) = σ2vVφ′ [ K(T+1,T+τ+1)(x,x′) ] Π(t,t+τ)(x,x′) = σ2wVφ′ [ K(t+1,t+τ+1)(x,x′) ] Π(t+1,t+1+τ)(x,x′) t ∈ [T − 1]\nΠ(t,t ′)(x,x′) = 0 t′ − t 6= τ\nNow we have derived the single layer RNTK. Recall that θ = Vect [ {W,U,b,v} ] contains all of the network’s learnable parameters. As a result, we have:\n∇θfθ(x) = Vect [ {∂fθ(x) ∂W , ∂fθ(x) ∂U , ∂fθ(x) ∂b , ∂fθ(x) ∂v } ] .\nAs a result 〈∇θfθ(x),∇θfθ(x′)〉 = 〈 ∂fθ(x)\n∂W , ∂fθ(x\n′)\n∂W\n〉 + 〈 ∂fθ(x)\n∂U , ∂fθ(x\n′)\n∂U\n〉 + 〈 ∂fθ(x)\n∂b , ∂fθ(x\n′)\n∂b 〉 + 〈 ∂fθ(x)\n∂v , ∂fθ(x\n′)\n∂v\n〉\nWhere the gradients of output with respect to weights can be formulated as the following compact form:\n∂fθ(x)\n∂W = T∑ t=1 ( 1√ n δ(t)(x) ) · ( σw√ n h(t−1)(x) )> ∂fθ(x)\n∂U = T∑ t=1 ( 1√ n δ(t)(x) ) · ( σu√ m xt )> ∂fθ(x)\n∂b = T∑ t=1 ( σb√ n δ(t)(x) ) ∂fθ(x)\n∂v = σv√ n h(T )(x).\nAs a result we have:〈 ∂fθ(x)\n∂W , ∂fθ(x\n′)\n∂W\n〉 = T ′∑ t′=1 T∑ t=1 ( 1 n 〈 δ(t)(x), δ(t ′)(x′) 〉) · ( σ2w n 〈 h(t−1)(x),h(t ′−1)(x′) 〉)\n〈 ∂fθ(x)\n∂U , ∂fθ(x\n′)\n∂U\n〉 = T ′∑ t′=1 T∑ t=1 ( 1 n 〈 δ(t)(x), δ(t ′)(x′) 〉) · ( σ2u m 〈xt,x′t′〉 ) 〈 ∂fθ(x)\n∂b , ∂fθ(x\n′)\n∂b\n〉 = T ′∑ t′=1 T∑ t=1 ( 1 n 〈 δ(t)(x), δ(t ′)(x′) 〉)\n· σ2b〈 ∂fθ(x)\n∂v , ∂fθ(x\n′)\n∂v\n〉 = ( σ2v n 〈 h(T )(x),h(T ′)(x′) 〉) .\nRemember that for any two G-var E [[g]i[g′]i] is independent of index i. Therefore,\n1\nn\n〈 h(t−1)(x),h(t ′−1)(x′) 〉 → Vφ [ K(t,t ′)(x,x′) ]\nt > 1\n1\nn\n〈 h(0)(x),h(0)(x′) 〉 → σ2h.\nHence, by summing the above terms in the infinite-width limit we get\n〈∇θfθ(x),∇θfθ(x′)〉 → T ′∑ t′=1 T∑ t=1 Π(t,t ′)(x,x′) · Σ(t,t ′)(x′,x′) +K(x,x′). (32) Since Π(t,t ′)(x,x′) = 0 for t′ − t 6= τ it is simplified to\n〈∇θfθ(x),∇θfθ(x′)〉 = ( T∑ t=1 Π(t,t+τ)(x,x′) · Σ(t,t+τ)(x′,x′) ) +K(x,x′).\nMulti-dimensional output. For fθ(x) ∈ Rd, the i-th output for i ∈ [d] is obtained via\n[fθ(x)]i = σv√ n v>i h (T )(x),\nwhere vi is independent of vj for i 6= j. As a result, for The RNTK Θ(x,x′) ∈ Rd×d for multidimensional output we have\n[Θ(x,x′)]i,j = 〈 ∇θ [fθ(x)]i ,∇θ [fθ(x ′)]j 〉 For i = j, the kernel is the same as computed in (32) and we denote it as\n〈∇θ [fθ(x)]i ,∇θ [fθ(x ′)]i〉 = Θ (T,T ′)(x,x′).\nFor i 6= j, since vi is independent of vj , Π(T,T ′)(x,x′) and all the backward pass gradients become zero, so 〈 ∇θ [fθ(x)]i ,∇θ [fθ(x ′)]j 〉 = 0 i 6= j\nwhich gives us the following formula\nΘ(x,x′) = Θ(T,T ′)(x,x′)⊗ Id.\nThis concludes the proof for Theorem 1 for single-layer case." 
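As a sanity check on the recursions above, the following is a minimal numerical sketch of the single-layer, scalar-output RNTK for two equal-length sequences (τ = 0) with φ = ReLU, using the closed forms for Vφ and Vφ′ listed later in B.5. It is not a reference implementation: the function and array names (`rntk_single_layer`, `Dx`, `Dxp`, `C`, `Pi`) and the default variance values are ours, chosen for illustration.

```python
import numpy as np

def V_phi(K1, K2, K3):
    # E[phi(z1) phi(z2)] for phi = ReLU and (z1, z2) ~ N(0, [[K1, K3], [K3, K2]])
    c = np.clip(K3 / np.sqrt(K1 * K2), -1.0, 1.0)
    return (c * (np.pi - np.arccos(c)) + np.sqrt(1.0 - c * c)) * np.sqrt(K1 * K2) / (2.0 * np.pi)

def V_phi_prime(K1, K2, K3):
    # E[phi'(z1) phi'(z2)] for phi = ReLU and the same bivariate Gaussian
    c = np.clip(K3 / np.sqrt(K1 * K2), -1.0, 1.0)
    return (np.pi - np.arccos(c)) / (2.0 * np.pi)

def rntk_single_layer(x, xp, sw=np.sqrt(2.0), su=1.0, sb=0.1, sv=1.0, sh=0.1):
    """Single-layer, scalar-output RNTK Theta(x, x') for two length-T sequences.

    x, xp: arrays of shape (T, m) whose rows are the time steps x_1, ..., x_T.
    sw, su, sb, sv, sh: the variance parameters sigma_w, sigma_u, sigma_b, sigma_v, sigma_h.
    """
    T, m = x.shape
    assert xp.shape == (T, m), "this sketch assumes equal-length inputs (tau = 0)"

    # Forward-pass GP kernels on the diagonal t = t': two self kernels and the cross kernel.
    Dx, Dxp, C = np.zeros(T), np.zeros(T), np.zeros(T)
    same = float(np.array_equal(x, xp))                      # indicator 1(x = x')
    Dx[0]  = sw**2 * sh**2        + su**2 / m * (x[0]  @ x[0])  + sb**2
    Dxp[0] = sw**2 * sh**2        + su**2 / m * (xp[0] @ xp[0]) + sb**2
    C[0]   = sw**2 * sh**2 * same + su**2 / m * (x[0]  @ xp[0]) + sb**2
    for t in range(1, T):
        Dx[t]  = sw**2 * V_phi(Dx[t-1],  Dx[t-1],  Dx[t-1])  + su**2 / m * (x[t]  @ x[t])  + sb**2
        Dxp[t] = sw**2 * V_phi(Dxp[t-1], Dxp[t-1], Dxp[t-1]) + su**2 / m * (xp[t] @ xp[t]) + sb**2
        C[t]   = sw**2 * V_phi(Dx[t-1],  Dxp[t-1], C[t-1])   + su**2 / m * (x[t]  @ xp[t]) + sb**2

    # Output GP kernel K(x, x') and backward-pass kernels Pi^(t,t)(x, x').
    K_out = sv**2 * V_phi(Dx[-1], Dxp[-1], C[-1])
    Pi = np.zeros(T)
    Pi[-1] = sv**2 * V_phi_prime(Dx[-1], Dxp[-1], C[-1])
    for t in range(T - 2, -1, -1):
        Pi[t] = sw**2 * V_phi_prime(Dx[t], Dxp[t], C[t]) * Pi[t + 1]

    # RNTK = sum_t Pi^(t,t)(x,x') * Sigma^(t,t)(x,x') + K(x,x').
    return float(np.sum(Pi * C) + K_out)
```

For example, `rntk_single_layer(np.ones((8, 4)), -np.ones((8, 4)))` returns one scalar Gram-matrix entry; for a d-dimensional output the kernel is this value times the identity, and unequal lengths only change which (t, t′) pairs contribute, as derived above.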
}, { "heading": "B.3 PROOF FOR THEOREM 1: MULTI-LAYER CASE", "text": "Now we drive the RNTK for multi-layer RNTK. We will only study single output case and the generalization to multi-dimensional case is identical as the single layer case. The set of equations for calculation of the output of a L-layer RNN for x = {xt}Tt=1 are\ng(`,t)(x) = σ`w√ n W(`)h(`,t−1)(x) + σ`u√ m U(`)xt + σ ` bb (`) ` = 1\ng(`,t)(x) = σ`w√ n W(`)h(`,t−1)(x) + σ`u√ n U(`)h(`−1,t)(x) + σ`bb (`) ` > 1\nh(`,t)(x) = φ ( g(`,t)(x) ) fθ(x) =\nσv√ n v>h(L,T )(x)\nThe forward pass kernels for the first layer is the same as calculated in B.2. For ` ≥ 2 we have:\nΣ(`,t,t ′)(x,x′) = Σ(g(`,t)(x), g(`,t ′)(x′))\n= Σ ( σ`w√ n W(`)h(`,t−1)(x), σ`w√ n W(`)h(`,t ′−1)(x′) ) + Σ ( σ`u√ n U(`)h(`−1,t)(x), σ`u√ n U(`)h(`−1,t ′)(x′) ) + Σin ( σ`bb (`), σ`bb (`) )\n= (σ`w) 2Vφ\n[ K(`,t,t ′)(x,x′) ]\n+ (σ`u) 2Vφ\n[ K(`−1,t+1,t ′+1)(x,x′) ]\n+ (σ`b) 2,\nwhere\nK(`,t,t ′)(x,x′) =\n[ Σ(`,t−1,t−1)(x,x) Σ(`,t−1,t ′−1)(x,x′)\nΣ(`,t−1,t ′−1)(x,x′) Σ(`,t ′−1,t′−1)(x′,x′)\n] ,\nand Σin is defined in (B.2). For the first first time step we have:\nΣ(`,1,1)(x,x′) = (σ`w) 2σ2h1(x=x′) + (σ ` u) 2Vφ [ K(`,2,2)(x,x′) ] + (σ`b) 2 ,\nΣ(`,t,1)(x,x′) = (σ`u) 2Vφ\n[ K(`,t+1,2)(x,x′) ] + (σ`b) 2 ,\nΣ(`,1,t ′)(x,x′) = (σ`u) 2Vφ [ K(`,2,t ′+1)(x,x′) ] + (σ`b) 2 .\nAnd the output layer\nK(x,x′) = σ2vVφ [ K(L,T+1,T ′+1)(x,x′) ] .\nNote that because of using new weights at each layer we get\nΣ(g(`,t)(x), g(` ′,t′))(x)) = 0 ` 6= `′\nNow we calculate the backward pass kernels in multi-layer RNTK. The gradients at the last layer is calculated via\nδ(L,T )(x) = σvv φ′(g(L,T )(x)). In the last hidden layer for different time steps we have\nδ(L,t)(x) = σLw√ n\n( W(L) )> ( φ′(g(L,t)(x)) δ(L,t+1)(x) ) t ∈ [T − 1]\nIn the last time step for different hidden layers we have\nδ(`,T )(x) = σ`+1u√ n\n( U(`+1) )> ( φ′(g(`,T )(x)) δ(`+1,T )(x) ) ` ∈ [L− 1]\nAt the end for the other layers we have\nδ(`,t)(x) = σ`w√ n\n( W(`) )> ( φ′(g(`,t)(x)) δ(`,t+1)(x) ) + σ`+1u√ n ( U(`+1) )> ( φ′(g(`,t)(x)) δ(`+1,t)(x) ) ` ∈ [L− 1], t ∈ [T − 1]\nThe recursive formula for the Π(L,t,t ′)(x,x′) is the same as the single layer, and it is non-zero for t′ − t = T ′ − T = τ . As a result we have Π(L,T,T+τ)(x,x′) = σ2vVφ′ [ K(L,T+1,T+τ+1) ] (x,x′)\nΠ(L,t,t+τ)(x,x′) = (σLw) 2Vφ′\n[ K(L,t+1,t+τ+1) ] (x,x′) · Π(L,t+1,t+1+τ)(x,x′) t ∈ [T − 1]\nΠ(L,t,t ′)(x,x′) = 0 t′ − t 6= τ\n(33) Similarly by using the same course of arguments used in the single layer setting, for the last time step we have Π(`,T,T+τ)(x,x′) = (σ`+1u ) 2Vφ′ [ K(`,T+1,T+τ+1) ] (x,x′) · Π(`+1,T+1,T+τ+1)(x,x′) ` ∈ [L− 1]\nFor the other layers we have\nΠ(`,t,t ′)(x,x′) = (σ`w) 2Vφ′ [ K(`,t+1,t+τ+1) ] (x,x′) · Π(`,t+1,t+1+τ)(x,x′)\n+ (σ`+1u ) 2Vφ′\n[ K(`,t+1,t+τ+1) ] (x,x′) · Π(`+1,t+1,t ′+1)(x,x′).\nFor t′ − t 6= τ the recursion continues until it reaches Π(L,T,t′′)(x,x′), t′′ < T ′ or Π(L,t ′′,T ′)(x,x′), t′′ < T and as a result based on (33) we get\nΠ(`,t,t ′)(x,x′) = 0 t′ − t 6= τ (34)\nFor t′ − t = τ it leads to Π(L,T,T ′)(x,x′) and has a non-zero value. 
Now we derive RNTK for multi-layer:\n〈∇θfθ(x),∇θfθ(x′)〉 = L∑ `=1 〈 ∂fθ(x) ∂W(`) , ∂fθ(x ′) ∂W(`) 〉 + L∑ `=1 〈 ∂fθ(x) ∂U(`) , ∂fθ(x ′) ∂U(`) 〉\n+ L∑ `=1 〈 ∂fθ(x) ∂b(`) , ∂fθ(x ′) ∂b(`) 〉 + 〈 ∂fθ(x) ∂v , ∂fθ(x ′) ∂v 〉 ,\nwhere〈 ∂fθ(x)\n∂W(`) , ∂fθ(x\n′)\n∂W(`)\n〉 = T ′∑ t′=1 T∑ t=1 ( 1 n 〈 δ(`,t)(x), δ(`,t ′)(x′) 〉) · ( (σ`w) 2 n 〈 h(`,t−1)(x),h(`,t ′−1)(x′) 〉)\n〈 ∂fθ(x)\n∂U(`) , ∂fθ(x\n′)\n∂U(`)\n〉 = T ′∑ t′=1 T∑ t=1 ( 1 n 〈 δ(`,t)(x), δ(`,t ′)(x′) 〉) · ( (σ`u) 2 m 〈xt,x′t′〉 ) ` = 1\n〈 ∂fθ(x)\n∂U(`) , ∂fθ(x\n′)\n∂U(`)\n〉 = T ′∑ t′=1 T∑ t=1 [( 1 n 〈 δ(`,t)(x), δ(`,t ′)(x′) 〉)\n· ( (σ`u) 2\nn\n〈 h(`−1,t)(x),h(`−1,t ′)(x′) 〉)]\n` > 1\n〈 ∂fθ(x)\n∂b(`) , ∂fθ(x\n′)\n∂b(`)\n〉 = T ′∑ t′=1 T∑ t=1 ( 1 n 〈 δ(`,t)(x), δ(`,t ′)(x′) 〉)\n· (σ`b)2〈 ∂fθ(x)\n∂v , ∂fθ(x\n′)\n∂v\n〉 = ( σ2v n 〈 h(T )(x),h(T ′)(x′) 〉)\nSumming up all the terms and replacing the inner product of vectors with their expectations we get\n〈∇θfθ(x),∇θfθ(x′)〉 = Θ(L,T,T ′) = L∑ `=1 T∑ t=1 T ′∑ t′=1 Π(`,t,t ′)(x,x′) · Σ(`,t,t ′)(x,x′) +K(x,x′). By (34), we can simplify to\nΘ(L,T,T ′) = ( L∑ `=1 T∑ t=1 Π(`,t,t ′)(x,x′) · Σ(`,t,t+τ)(x,x′) ) +K(x,x′).\nFor multi-dimensional output it becomes\nΘ(x,x′) = Θ(L,T,T ′)(x,x′)⊗ Id.\nThis concludes the proof for Theorem 1 for the multi-layer case." }, { "heading": "B.4 PROOF FOR THEOREM 3: WEIGHT-UNTIED RNTK", "text": "The architecture of a weight-untied single layer RNN is\ng(t)(x) = σw√ m W(t)h(t−1)(x) + σu√ n U(t)xt + σbb (t)\nh(t)(x) = φ ( g(t)(x) ) fθ(x) =\nσv√ n v>h(T )(x)\nWhere we use new weights at each time step and we index it by time. Like previous sections, we first derive the forward pass kernels for two same length data x = {xt}Tt=1,x = {x′t′}Tt′=1\nΣ(t,t)(x,x′) = σ2wVφ [ K(t,t)(x,x′) ] + σ2u m 〈xt,x′t〉+ σ2b .\nΣ(t,t ′)(x,x′) = 0 t 6= t′\nSince we are using same weight at the same time step, Σ(t,t)(x,x′) can be written as a function of the previous kernel, which is exactly as the weight-tied RNN. However for different length, it becomes zero as a consequence of using different weights, unlike weight-tied which has non-zero value. The kernel of the first time step and output is also the same as weight-tied RNN. For the gradients we have:\nδ(T )(x) = σvv φ′(g(T )(x))\nδ(t)(x) = σw√ n\n(W(t+1))> ( φ′(g(t)(x)) δ(t+1)(x) ) t ∈ [T − 1]\nFor t′ = t we have: Π(t,t)(x,x′) = σ2wVφ′ [ K(t+1,t+1)(x,x′) ] Π(t+1,t+1)(x,x′)\nΠ(t,t)(x,x′) = σ2vVφ′ [ K(T+1,T+τ+1)(x,x′) ] .\nDue to using different weights for t 6= t′, we can immediately conclude that Π(t,t′)(x,x′) = 0. This set of calculation is exactly the same as the weight-tied case when τ = T − T = 0. Finally, with θ = Vect [ {{W(t),U(t),b(t)}Tt=1,v} ] we have\n〈∇θfθ(x),∇θfθ(x′)〉 = T∑ t=1 〈 ∂fθ(x) ∂W(t) , ∂fθ(x ′) ∂W(t) 〉 + T∑ t=1 〈 ∂fθ(x) ∂U(t) , ∂fθ(x ′) ∂U(t) 〉\n+ T∑ t=1 〈 ∂fθ(x) ∂b(t) , ∂fθ(x ′) ∂b(t) 〉 + 〈 ∂fθ(x) ∂v , ∂fθ(x ′) ∂v , 〉\nwith 〈 ∂fθ(x)\n∂W(t) , ∂fθ(x\n′)\n∂W(t)\n〉 = ( 1\nn\n〈 δ(t)(x), δ(t)(x′) 〉) · ( σ2w n 〈 h(t−1)(x),h(t−1)(x′) 〉) 〈 ∂fθ(x)\n∂U(t) , ∂fθ(x\n′)\n∂U(t)\n〉 = ( 1\nn\n〈 δ(t)(x), δ(t)(x′) 〉) · ( σ2u m 〈xt,x′t〉 ) 〈 ∂fθ(x)\n∂b(t) , ∂fθ(x\n′)\n∂b(t)\n〉 = ( 1\nn\n〈 δ(t)(x), δ(t)(x′) 〉) · σ2b〈\n∂fθ(x) ∂v , ∂fθ(x\n′)\n∂v\n〉 = ( σ2v n 〈 h(T )(x),h(T ′)(x′) 〉) .\nAs a result we obtain\n〈∇θfθ(x),∇θfθ(x′)〉 = ( T∑ t=1 Π(t,t)(x,x′) · Σ(t,t)(x′,x′) ) +K(x,x′),\nsame as the weight-tied RNN when τ = 0. 
This concludes the proof for Theorem 3.\nB.5 ANALYTICAL FORMULA FOR Vφ[K] For any positive definite matrixK = [ K1 K3 K3 K2 ] we have:\n• φ = ReLU (Cho & Saul, 2009)\nVφ[K] = 1\n2π\n( c(π − arccos(c)) + √ 1− c2) )√ K1K2,\nVφ′ [K] = 1\n2π (π − arccos(c)).\nwhere c = K3/ √ K1K2\n• φ = erf (Neal, 1995)\nVφ[K] = 2\nπ arcsin\n( 2K3√\n(1 + 2K1)(1 + 2K3)\n) ,\nVφ′ [K] = 4 π √ (1 + 2K1)(1 + 2K2)− 4K23 ." }, { "heading": "C PROOF FOR THEOREM 2: RNTK CONVERGENCE AFTER TRAINING", "text": "To prove theorem 2, we use the strategy used in (Lee et al., 2019) which relies on the the local lipschitzness of the network Jacobian J(θ,X ) = ∇θfθ(x) ∈ R|X |d×|θ| at initialization.\nDefinition 1 The Jacobian of a neural network is local lipschitz at NTK initialization (θ0 ∼ N (0, 1)) if there is constant K > 0 for every C such that{\n‖J(θ,X )‖F < K ‖J(θ,X )− J(θ̃,X )‖F < K‖θ − θ̃‖ , ∀ θ, θ̃ ∈ B(θ0, R)\nwhere B(θ,R) := {θ : ‖θ0 − θ‖ < R}.\nTheorem 4 Assume that the network Jacobian is local lipschitz with high probability and the empirical NTK of the network converges in probability at initialization and it is positive definite over the input set. For > 0, there exists N such that for n > N when applying gradient flow with η < 2 (λmin(Θ(X ,X )) + λmax(Θ(X ,X ))−1 with probability at least (1− ) we have:\nsup s ‖θs − θ0‖2√ n , sup s ‖Θ̂s(X ,X )− Θ̂0(X ,X )‖ = O ( 1√ n ) .\nProof: See (Lee et al., 2019)\nTheorem 4 holds for any network architecture and any cost function and it was used in (Lee et al., 2019) to show the stability of NTK for MLP during training.\nHere we extend the results for RNTK by proving that the Jacobian of a multi-layer RNN under NTK initialization is local lipschitz with high probability.\nTo prove it, first, we prove that for any two points θ, θ̃ ∈ B(θ0, R) there exists constant K1 such that\n‖g(`,t)(x)‖2, ‖δ(`,t)(x)‖2 ≤ K1 √ n (35)\n‖g(`,t)(x)− g̃(`,t)(x)‖2, ‖δ(`,t)(x)− δ̃(`,t)(x)‖2 ≤ ‖θ̄ − θ̃‖ ≤ K1 √ n‖θ − θ̃‖. (36)\nTo prove (35) and (36) we use the following lemmas.3\nLemma 1 Let A ∈ Rn×m be a random matrix whose entries are independent standard normal random variables. Then for every t ≥ 0, with probability at least 1− e(−ct2) for some constant c we have:\n‖A‖2 ≤ √ m+ √ n+ t.\nLemma 2 Let a ∈ Rn be a random vector whose entries are independent standard normal random variables. Then for every t ≥ 0, with probability at least 1− e(−ct2) for some constant c we have:\n‖a‖2 ≤ √ n+ √ t.\nSetting t = √ n for any θ ∈ R(θ0, R). With high probability, we get:\n‖W(`)‖2, ‖U(`)‖2 ≤ 3 √ n, ‖b`‖2 ≤ 2 √ n, ‖h(`,0)(x)‖2 ≤ 2σh √ n.\nWe also assume that there exists some finite constant C such that\n|φ(x)| < C|x|, |φ(x)− φ(x′)| < C|x− x′|, |φ′(x)| < C, , |φ′(x)− φ′(x′)| < C|x− x′|.\nThe proof is obtained by induction. From now on assume that all inequalities in (35) and (36) holds with some k for the previous layers. 
We have\n‖g(`,t)(x)‖2 = ‖ σ`w√ n W(`)h(`,t−1)(x) + σ`u√ n U(`) h(`−1,t)(x) + σ`bb (`)‖2\n≤ σ ` w√ n ‖W(`)‖2‖φ\n( g(`,t−1)(x) ) ‖2 +\nσ`u√ n ‖U(`)‖2‖φ\n( g(`−1,t)(x) ) ‖2 + σ`b‖b(`)‖2\n≤ ( 3σ`wCk + 3σ ` uCk + 2σb )√ n.\nAnd the proof for (35) and (36) is completed by showing that the first layer is bounded\n‖g(1,1)(x)‖2 = ‖ σ1w√ n W(`)h(1,0)(x) + σ1u√ m U(`) x1 + σ 1 bb (1)‖2\n≤ (3σ1wσh + 3σu√ m ‖x1‖2 + 2σb) √ n.\nFor the gradient of first layer we have\n‖δ(L,T )(x)‖2 = ‖σvv φ′(g(L,T )(x))‖2 ≤ σv‖v‖2‖φ′(g(L,T )(x))‖∞ = 2σvC √ n.\nAnd similarly we have\n‖δ(`,t)(x)‖ ≤ (3σwCk′ + 3σuCk′) √ n.\n3See math.uci.edu/~rvershyn/papers/HDP-book/HDP-book.pdf for proofs\nFor θ, θ̃ ∈ B(θ0, R) we have\n‖g(1,1)(x)− g̃(1,1)(x)‖2 = ‖ σ1w√ n (W(1) − W̃(1))h(1,0)(x) + σ 1 u√ m (U(1) − Ũ(1))h(1,0)(x)‖2\n≤ (\n3σ1wσh + 3σ1u m ‖x1‖2\n) ‖θ − θ̃‖2 √ n.\n‖g(`,t)(x)− g̃(`,t)(x)‖2 ≤ ‖φ(g(`,t−1)(x))‖2‖ σ`w√ n (W(`) − W̃(`))‖2\n+ ‖ σ ` w√ n W̃(`)‖2‖φ(g(`,t−1)(x))− φ(g̃(`,t−1)(x))‖2\n+ ‖φ(g(`−1,t)(x))‖2‖ σ`u√ n (U(`) − Ũ(`))‖2\n+ ‖ σ ` u√ n Ũ(`)‖2‖φ(g(`−1,t)(x))− φ(g̃(`−1,t)(x))‖2 + σb‖b(`) − b̃(`)‖ ≤ (kσ`w + 3σ`wCk + kσ`u + 3σ`uCk + σb)‖θ − θ̃‖2 √ n.\nFor gradients we have\n‖δ(L,T )(x)− δ̃(L,T )(x)‖2 ≤ σv‖φ′(g(L,T ))‖∞‖(v − ṽ)‖2 + σv‖v‖2‖φ′(g(L,T )(x))− φ′(g(L,T )(x))‖2 ≤ (σvC + 2σvCk)‖θ − θ̃‖2 √ n.\nAnd similarly using same techniques we have\n‖δ(`,t)(x)− δ̃(`,t)(x)‖2 ≤ (σwC + 3σwCk + σuC + 3σuCk)‖θ − θ̃‖2 √ n.\nAs a result, there exists K1 that is a function of σw, σu, σb, L, T and the norm of the inputs.\nNow we prove the local Lipchitzness of the Jacobian\n‖J(θ,x)‖F ≤ L∑ `=2 T∑ t=1 ( 1 n ∥∥∥∥δ(`,t)(x)(σ`wh(`,t−1)(x))>∥∥∥∥ F\n+ 1\nn ∥∥∥∥δ(`,t)(x)(σ`uh(`,t−1)(x))>∥∥∥∥ F + 1√ n ∥∥∥δ(`,t)(x) · σ`b∥∥∥ F ) +\nT∑ t=1 ( 1 n ∥∥∥∥δ(1,t)(x)(σ1wh(1,t−1)(x))>∥∥∥∥ F\n+ 1√ nm ∥∥∥δ(1,t−1)(x) (σ1uxt)>∥∥∥ F + 1√ n ∥∥∥δ(1,t)(x) · σ1b∥∥∥ F ) + σv√ n ‖h(L,T )(x)‖F\n≤ ( L∑ `=2 T∑ t=1 (K21Cσ ` w +K 2 1Cσ ` u + σ ` bK1)\n+ T∑ t=1 (K21Cσ 1 w + K1σ 1 u√ m ‖xt‖2 + σ1bK1) + σvCK1 ) .\nAnd for θ, θ̃ ∈ B(θ0, R) we have ‖J(θ,x)− J̃(θ,x)‖F ≤ L∑ `=2 T∑ t=1 ( 1 n ∥∥∥∥δ(`,t)(x)(σ`wh(`,t−1)(x))> − δ̃(`,t)(x)(σ`wh̃(`,t−1)(x))>∥∥∥∥ F\n+ 1\nn ∥∥∥∥δ(`,t)(x)(σ`uh(`,t−1)(x))> − δ̃(`,t)(x)(σ`uh̃(`,t−1)(x))>∥∥∥∥ F\n+ 1√ n ∥∥∥δ(`,t)(x) · σ`b − δ̃(`,t)(x) · σ`b∥∥∥ F\n+ T∑ t=1 ( 1 n ∥∥∥∥δ(1,t)(x)(σ1wh(1,t−1)(x))> − δ̃(1,t)(x)(σ1w ˜h(1,t−1)(x))>∥∥∥∥ F\n+ 1√ nm ∥∥∥δ(1,t−1)(x) (σ1uxt)> − δ̃(1,t−1)(x) (σ1uxt)>∥∥∥ F\n+ 1√ n ∥∥∥δ(`,t)(x) · σ`b − δ̃(`,t)(x) · σ`b∥∥∥)+ σv√n‖h(L,T )(x)− ˜h(L,T )(x)‖F ≤ ( L∑ `=2 T∑ t=1 (4K21Cσ ` w + 4K 2 1Cσ ` u + σ ` bK1)\n+ T∑ t=1 (4K21Cσ 1 w + K1σ 1 u√ m ‖xt‖2 + σ1bK1) + σvCK1 ) ‖θ − θ̃‖2.\nThe above proof can be generalized to the entire dataset by a straightforward application of the union bound. This concludes the proof for Theorem 2." } ]
2021
The Recurrent Neural Tangent Kernel